forum_id: string (length 9–20)
forum_title: string (length 3–179)
forum_authors: sequence (length 0–82)
forum_abstract: string (length 1–3.52k)
forum_keywords: sequence (length 1–29)
forum_decision: string (22 classes)
forum_pdf_url: string (length 39–50)
forum_url: string (length 41–52)
venue: string (46 classes)
year: date (2013-01-01 00:00:00 to 2025-01-01 00:00:00)
reviews: sequence
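Assuming this is a Hugging Face-style dataset dump, a record following the schema above can be modeled as a plain Python dict and sanity-checked against the schema's length bounds. This is a minimal sketch: the `check_record` helper is hypothetical (not part of the dataset), and the field values are illustrative.

```python
from datetime import datetime

# Hypothetical record following the schema above; values are illustrative.
record = {
    "forum_id": "rkg6sJHYDr",
    "forum_title": "Intrinsically Motivated Discovery of Diverse Patterns",
    "forum_authors": ["Chris Reinke", "Mayalen Etcheverry", "Pierre-Yves Oudeyer"],
    "forum_keywords": ["deep learning", "self-organization"],
    "forum_decision": "Accept (Talk)",
    "venue": "ICLR.cc/2020/Conference",
    "year": "2020",
    "reviews": [],
}

def check_record(rec):
    """Sanity checks derived from the schema's stated length bounds."""
    assert 9 <= len(rec["forum_id"]) <= 20
    assert 3 <= len(rec["forum_title"]) <= 179
    assert isinstance(rec["forum_authors"], list) and len(rec["forum_authors"]) <= 82
    assert 1 <= len(rec["forum_keywords"]) <= 29
    # year is stored as a date string whose range spans 2013 through 2025
    assert 2013 <= int(rec["year"][:4]) <= 2025
    return True

print(check_record(record))  # True
```

The checks mirror only the bounds visible in the schema; class-valued fields (`forum_decision`, `venue`) would need the full vocabulary of 22 and 46 classes, respectively, to validate strictly.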
rkg6sJHYDr
Intrinsically Motivated Discovery of Diverse Patterns in Self-Organizing Systems
[ "Chris Reinke", "Mayalen Etcheverry", "Pierre-Yves Oudeyer" ]
In many complex dynamical systems, artificial or natural, one can observe self-organization of patterns emerging from local rules. Cellular automata, like the Game of Life (GOL), have been widely used as abstract models enabling the study of various aspects of self-organization and morphogenesis, such as the emergence of spatially localized patterns. However, findings of self-organized patterns in such models have so far relied on manual tuning of parameters and initial states, and on the human eye to identify interesting patterns. In this paper, we formulate the problem of automated discovery of diverse self-organized patterns in such high-dimensional complex dynamical systems, as well as a framework for experimentation and evaluation. Using a continuous GOL as a testbed, we show that recent intrinsically-motivated machine learning algorithms (POP-IMGEPs), initially developed for learning of inverse models in robotics, can be transposed and used in this novel application area. These algorithms combine intrinsically-motivated goal exploration and unsupervised learning of goal space representations. Goal space representations describe the interesting features of patterns for which diverse variations should be discovered. In particular, we compare various approaches to define and learn goal space representations from the perspective of discovering diverse spatially localized patterns. Moreover, we introduce an extension of a state-of-the-art POP-IMGEP algorithm which incrementally learns a goal representation using a deep auto-encoder, and the use of CPPN primitives for generating initialization parameters. We show that it is more efficient than several baselines and equally efficient as a system pre-trained on a hand-made database of patterns identified by human experts.
[ "deep learning", "unsupervised Learning", "self-organization", "game-of-life" ]
Accept (Talk)
https://openreview.net/pdf?id=rkg6sJHYDr
https://openreview.net/forum?id=rkg6sJHYDr
ICLR.cc/2020/Conference
2020
{ "note_id": [ "zrbA8uGEx6", "rJgB3xcBjB", "S1gprgqSor", "rkeklxqBir", "BkgcIJ9HiB", "Hye27k9HsB", "Hyez9WiNcr", "BJxu9RJAYH", "ryg4ZdOhtH" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798736189, 1573392556528, 1573392452678, 1573392358886, 1573392210102, 1573392163537, 1572282762323, 1571843727970, 1571747835743 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1931/Authors" ], [ "ICLR.cc/2020/Conference/Paper1931/Authors" ], [ "ICLR.cc/2020/Conference/Paper1931/Authors" ], [ "ICLR.cc/2020/Conference/Paper1931/Authors" ], [ "ICLR.cc/2020/Conference/Paper1931/Authors" ], [ "ICLR.cc/2020/Conference/Paper1931/AnonReviewer4" ], [ "ICLR.cc/2020/Conference/Paper1931/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper1931/AnonReviewer2" ] ], "structured_content_str": [ "{\"decision\": \"Accept (Talk)\", \"comment\": \"The authors introduce a framework for automatically detecting diverse, self-organized patterns in a continuous Game of Life environment, using compositional pattern producing networks (CPPNs) and population-based Intrinsically Motivated Goal Exploration Processes (POP-IMGEPs) to find the distribution of system parameters that produce diverse, interesting goal patterns.\\n\\nThis work is really well-presented, both in the paper and on the associated website, which is interactive and features source code and demos. Reviewers agree that it\\u2019s well-written and seems technically sound. I also agree with R2 that this is an under-explored area and thus would add to the diversity of the program.\\n\\nIn terms of weaknesses, reviewers noted that it\\u2019s quite long, with a lengthy appendix, and could be a bit confusing in areas. Authors were responsive to this in the rebuttal and have trimmed it, although it\\u2019s still 29 pages. 
My assessment is well-aligned with those of R2 and thus I\\u2019m recommending accept. In the rebuttal, the authors mentioned several interesting possible applications for this work; it\\u2019d be great if these could be included in the discussion. \\n\\nGiven the impressive presentation and amazing visuals, I think it could make for a fun talk.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Answer to Official Blind Review #2\", \"comment\": \"We would like to thank Reviewer 2 for his time to provide us feedback, and for providing encouraging comments.\\n\\nFirst, as also stated in our response to R1, we agree (and apologize) that the Appendix in our initial submission was too long, not sufficiently well structured, and mixed materials that usefully complemented the main paper with materials that were much less useful. We have made a substantial rewrite and reduction of the Appendix, as detailed in response to R1.\\n\\nSecond, we agree that our results can be evaluated from two perspectives: 1) using quantitative metrics with statistical comparisons (as done in the main part of the paper); 2) using the human eye (which can be the eye of the scientist end-user of such a system, or a more \\u201cartistic\\u201d human eye). In order to enable readers to explore with their eyes, even more, the discoveries of the algorithms we study, we have built an interactive website to navigate all the patterns discovered by all goal exploration algorithms over many runs: https://projector.tensorflow.org/?config=https://raw.githubusercontent.com/intrinsically-motivated-discovery/intrinsically-motivated-discovery.github.io/master/assets/media/tensorboard/projector_config.json\\n\\nOur perspective is that the quantitative metrics are in practice very useful and relevant for several end-user applications we are now working on as next steps of this project. 
We are indeed right now working on using the methodology presented in this paper to enable \\n1) bio-chemists to map the space of behaviours of certain complex biochemistry systems for which they do not have a good model (and even poor intuitive understanding), and then to optimize for target properties by leveraging the diversity of patterns found in the unsupervised discovery phase (previous papers on IMGEPs cited in our paper have shown that, in more traditional contexts of robotic control with hand-defined goals, finding a high diversity of behaviors enables to bootstrap very sample efficient optimization of a target behavior afterward)\\n2) neuroscientists to map the space of behaviours of complex neuro-muscular models and leverage the discovered diversity for later optimization of target behaviors.\\n\\n\\n> The paper refers to hand-designed goal spaces and talks, on p28, about \\u201cthe statistical measures used to define the goal space\\u201d. At the same time, the analytic behavior space is [...] it is *not* referred to as hand-designed. At this point, the profusion of spaces and measures means that I am no longer sure what counts as hand-crafted or not. Please clarify. \\n\\nAs we compare algorithms using different forms of goal spaces, and in addition we use an evaluation space (analytic behaviour space) for the evaluation and comparisons, there are indeed many spaces and our terminology in the initial text may indeed have complicated precise understanding. The analytic behavior space, as explained in 4.2 and detailed in section B.7.2 of the new Appendix, is the concatenation of a set of learned features (VAE over an \\u201coracle\\u201d database of 42500 Lenia patterns) and of hand-designed features. 
The different kinds of spaces used by algorithms are described in section 4.3.\\n\\n\\n> The hypothesis on p34, sec E.4.2 that the VAE\\u2019s 8-dim bottleneck helps focus on animals rather than non-animals (which are differentiated more in terms of textures and details) is important and should be checked. \\n\\nAs stated in the paper, this is indeed a speculative hypothesis we formulated as a result of analyzing the discoveries of the algorithms using learned VAEs. We formulated it in the Appendix because from our perspective it addresses a question that goes beyond the scope of the main 4 scientific questions we formulate in the main paper (section 5). We have begun thinking about how to make it more precise and test it, however, this is challenging as the bottleneck interacts with other factors (e.g. RGS with the same bottleneck does not incentivize animal discoveries, showing a bottleneck is not sufficient in itself + the VAEs learn representations that tend to produce blurred decoding). For this reason, we removed this specific hypothesis and only kept the discussion of the potential role of VAE\\u2019s difficulty in encoding sharp details. \\n\\n\\n> Some of the decisions about what to check and vary are unclear. For example, section E.1 considers the effect of different initializations (\\u201cpytorch\\u201d, \\u201cxavier\\u201d and \\u201ckaiming\\u201d) [...]\\n\\nThe reason we initially included a comparison for different initializations was to ensure the RGS algorithm we used (randomly initialized VAE with no learning) was fairly compared to PGL and OGL (a poor initialization may project most data on the same embedding, e.g. through saturation). However, we agree this material can be omitted, which we have done in the new version of the Appendix, together with removing several other parts (e.g. VAE variants and HGS variants).\"}", "{\"title\": \"Answer to Official Blind Review #1 - Part 2\", \"comment\": \"We thank R4 for the detailed suggestions. 
We updated several parts of our paper accordingly. In detail:\\n\\n>Section 3.1: It is not clear how the initial system state is established. In Section 3.1. the text states that 'parameters are randomly sampled and explored' before the process starts, but it is not clear why a random sampling is used and what this means for the subsequent sampling. Later in the text (3.3) it becomes more clear, but here this appears too unclear. \\n\\nWe made some slight changes which hopefully improve readability.\\n\\n\\n>Section 3.1: \\\"distribution over a hypercube in \\\\mathcal{T} chosen to be large enough to bias exploration towards the frontiers of known goals to incentivize diversity.\\\" This sentence is not clear and needs more details. How is the distribution chosen exactly?\", \"adapted_the_paragraph_to_be_more_exact\": \"\\u201cDifferent goal and parameter sampling mechanisms can be used within this architecture (Baranes & Oudeyer, 2013; Forestier & Oudeyer, 2016). In the experiments below, goals are sampled uniformly over a hyperrectangle defined in T. The hyperrectangle is chosen large enough to allow a sampling of a large goal diversity. The parameters are sampled by 1) given a goal, selecting the parameter from the history whose corresponding outcome is most similar in the goal space; 2) then mutating it by a random process.\\u201d\\n\\n\\n>Section 3.2 appears a bit repetitive and could be more concise. I don't think it is necessary >here to contrast manual vs learned features of the goal space. \\n\\nAs unsupervised learning of goal spaces is one of the key points of the paper, we would like to emphasize it in this part by contrasting it to manually defined goal spaces.\\n\\n\\n>Section 3.2 (P3): the last sentence of this paragraph reads as if there exists no approaches for >VEAs in online settings. This should be toned down or backed up by a reference. \\n\\nWe are unsure how the last sentence is implying this. 
Maybe this comment refers to a sentence in another section? We state in the beginning of Paragraph 3 of Section 3.2, that previous IMGEP approaches are not using online trained VAEs. We are not aware of any IMGEP approaches that use online trained VAEs. Yet, we mention some other methods that use online learned VAEs under the \\u201cIntrinsically motivated learning\\u201d paragraph of the related work section (Sec. 2).\\n\\n\\n>Section 3.2: (last sentence): it is not clear how the history is used exactly to train the network. Which strategy is used to sample from the history of observations?\", \"we_added_extra_information\": \"\\u201cImportance sampling is used to give more weight to recently discovered patterns by using a weighted random sampler. It samples for 50% of the training batch samples patterns from the last K iterations and for the other 50% patterns from all other previous iterations\\u201d\\n\\n\\n> Section 3.3: What is meant by \\\"The CPPNs are used of the parameters \\\\{theta}\\\"? The details provided after this sentence are not clear and need more details. \\n\\nAdapted the paragraph to give more context.\\n\\n\\n> Section 4.2: Please provide more details what \\\"very large\\\" dataset means.\", \"added_the_number_of_patterns_in_the_dataset\": \"\\u201c... over a large dataset of 42500 Lenia patterns ...\\u201d\\n\\n\\n> Section 4.2: 'HGS algorithm' is not defined.\\n\\nAdapted to not using the HGS abbreviation here.\\n\\n\\n> Section 5: It seems unnecessary to explain what t-SNE does as a method.\\n\\nRemoved.\"}", "{\"title\": \"Answer to Official Blind Review #1 - Part 1\", \"comment\": \"We thank reviewer 1 for his time and effort, as well as for the encouraging comments. 
We especially appreciate the positive view on our introduction of a new problem framework, that may stimulate new and further research in machine learning, as it is for us a main objective (together with the study and comparison of particular algorithms).\\n\\nWe would also like to apologize if the reading has been made difficult due to the length and/or structure of our Appendix. This was not intended, and in particular, it was not at all our aim to circumvent page limits. While with R1's review we realize we could have better organized and selected the material presented, we would like to explain our initial aim in structuring and building the paper, which was:\\n- Write a main paper where essential explanations (including presentation of a problem and context new to the readers) and main contributions were in the main paper, such that readers could understand their core aspects without reading the Appendix.\\n- Provide an Appendix that:\\n 1) includes full-page figures that provide complements to the main quantitative results (Figs. 3 and 4, main paper) through qualitative visual illustrations of examples of runs of the algorithm (Figs. 5-9, new version). \\n 2) give all details enabling to reproduce all experiments (complementing the code)\\n 3) give all details enabling to understand all the techniques we use without needing to read the papers in the literature from which we reused them (e.g. Lenia\\u2019s complex system dynamics in section A, IMGEP implementation details, explanation of CPPNs, structure, and training of VAEs).\\n 4) show additional experimental results to show the robustness of our findings (e.g. showing that our results are robust to changes in the parameters of our diversity measure; or showing that the choice of hand-defined features used in HGS is fair by showing how it compares to other possible choices).\\n 5) show negative results for other algorithm variants we tried (e.g. 
different initialization methods for the randomized VAE (IMGEP-RGS) ), so that readers who would try to build on this work can benefit from this information.\\n\\nAs R1 and R2 remark, in the end this made a very long Appendix. As some papers accepted in previous editions of ICLR included similarly long Appendices, we did not try to reduce the Appendix at submission time. However, we agree that this should be improved. As a result, we updated significantly the Appendix by:\\n\\n1) Removing large parts of the Appendix (we are thinking of providing this information rather on the Github of the code): \\n- parts which were rather tutorials and summaries of other papers (e.g. non-essential explanations of the Lenia system, CPPNs or VAEs)\\n- parts presenting algorithm variants and hyperparameters we tried but which did not show good performances (HGS variants, VAE variants)\\n- some parts presenting an analysis redundant with the main paper\\n2) Summarizing many other parts to keep only the essential information\\n3) Structuring the Appendix in a clearer way:\", \"section_a\": \"Additional figures and results\", \"section_b\": \"Implementation details and hyperparameters (with a table of contents)\\n\\nAs a result, the new Appendix is now 19 pages shorter.\\n\\nWe did not make significant modifications to the main paper as we think it already provides the main results (we updated links to the Appendix trying to enable a more fluid reading and made several changes according to the suggestions of R4). We are of course open to suggestions from the reviewers if they think a particular additional figure or result is missing in the new Appendix.\"}", "{\"title\": \"Answer to Official Blind Review #4 - Part 2\", \"comment\": \"> With regard to animal forms, it appears to me that Online goal learning harms the diversity of animal forms considerably compared to PGL and perhaps HGS. 
High-frequency spatial structure seems to be lost there.\\n> why non-animal types differ in PGL vs HGS, and why high-frequency spatial structure is lost in OGL\\n\\nWe agree that a difference between PGL and OGL is suggested by visual inspection of the example patterns in Figs. 27-31 (now Figs. 5-9). However, quantitative measures on Fig. 3 show that the diversity measure of OGL is as good as the one of PGL for animals (OGL and PGL use learned features), and way better than HGS (hand-engineered features). A qualitative analysis of more patterns (these can be accessed through the database we provide now on the webpage) also shows this and that high frequency spatial structure in OGL is not lost. \\n\\n\\n> why RGS produces the same kind of red linear patterns\\n\\nIt is true that the RGS shows a high abundance of \\u201cred linear patterns\\u201d. This is a result of the goal space of the RGS. It is random and discovered patterns are uniformly distributed in it (Fig. 10, new version). Thus, a goal exploration will result in a random selection of previous patterns and a mutation of them. Moreover, the initial random exploration of 1000 patterns results mainly in \\u201cred linear patterns\\u201d (this is visible from the patterns of the random exploration in Fig. 13 (new version)). Thus, during the goal exploration phase the RGS will mainly choose \\u201cred linear patterns\\u201d to mutate them. This produces in most cases again \\u201cred linear patterns\\u201d. As a result, \\u201cred linear patterns\\u201d are so abundant for RGS. This confirms that using a random latent representation for goals does not enable to organize exploration, as analyzed in Section 5. \\n\\n\\n> Initial inspection reveals that hand-designed goal states produce the most interesting non-animal patterns. 
\\n> Why HGS produces the distribution of pattern types in Figure 29, and why non-animal types differ in PGL vs HGS\\n\\nWe agree that subjectively non-animals found by HGS may have some dimensions of diversity not covered as well by the other algorithms. This is a result of their different goal spaces as pointed out under \\u201cHow do goal space representations differ?\\u201d in Section 5 (p.9). The goal spaces of PGL and OGL represent well the form of small activity patterns (which are often animals). They do not represent well larger structures with interesting textures as seen in the HGS results. Thus they do not explore types of these patterns often resulting in a lower diversity of them.\\n\\n\\n> The results should NOT be shown just for the first repetition of the experiment but for all independent runs of the experiments, e.g averaged over 30 independent CPPN evolutions, for PGL, OGL, Random, and HGS! \\n\\nThe core results (Fig. 3) include averages and standard deviations over 10 repetitions of each algorithm. We added now statistical results to the Figure showing that the algorithms produce statistically significant different diversities (Welch\\u2019s t-test, p < 0.01).\"}", "{\"title\": \"Answer to Official Blind Review #4 - Part 1\", \"comment\": \"We thank Reviewer 4 for his time and efforts to review our paper. We appreciate R4s comments and interest in our exploration results. 
There are some aspects of R4's comments we are not sure we fully understand, so we will be pleased to develop further our answers in case R4 would like us to address other points.\\n\\n\\n> The fitness is the closeness of a generated set of latents to a set of latents produced through one of several possible processes; hand-design, pretraining, or online training on previously generated CA settings.\", \"we_would_like_to_concisely_provide_two_precisions\": \"1) The goal exploration algorithms we study generate a target uniform distribution of goals in a space of latent pattern features. These features are either learned or hand-engineered. From this generated distribution of goals, they try to find a distribution of parameters of the complex system (starting state+rules) that produces patterns covering well the target distribution of goal patterns. This is achieved through the dynamics of the POP-IMGEP algorithms, by iteratively sampling a goal (= a latent vector in case of learned goal features) and searching for the system parameters that approach that goal closest, leveraging all discoveries made so far. \\n\\n2) The quantitative measure used to evaluate our algorithms is a measure of diversity defined as the number of bins discovered in an evaluation space only known by the experimenter\\n. The dimensions of this evaluation space are a concatenation of hand-defined features and features of a learned embedding. The embedding is learned using a database with a large number of patterns found by all algorithms during all experiments. This measure of evaluation is only known and used by us, but it is not known by the individual exploration algorithms (as it uses a form of oracle knowledge to assess the discoveries the algorithms could make in principle).\\n\\n\\n> The core results are in Figures 27 to 31 in an appendix. \\n\\nFigs. 27-31 (now Figs. 5-9) are visualizations of particular examples of patterns discovered by the algorithms. 
For us the core results of the paper are the systematic quantitative measures presented in Fig. 3 and Fig. 4 (p. 8-9). Fig. 3, in particular, shows the average and standard deviation of the evolution of the diversity measure for several classes of patterns (all, animals and non-animals). These averages and standard deviations show the high-robustness of IMGEP-OGL and IMGEP-PGL to achieve the highest diversity in all classes (the low value of the standard deviation shows the high stability of these algorithms). \\nTherefore, we believe that Figs. 27-31 (now Figs. 5-9), like the video on the accompanying web site, are complements to help readers visualize the kind of patterns that are discovered.\\n\\n\\n> ... in Figures 27 to 31 in an appendix. Initial inspection reveals that hand-designed goal states produce ... high frequency spatial structure is lost in OGL.\\n\\nThis qualitative analysis from R4 is made from looking at the examples of Figs. 27-31 (now Figs. 5-9). As explained above, the aim of these figures is to enable readers to have a visual sense of what \\\"animals\\\", \\\"non-animals\\\" and \\\"dead\\\" patterns look like, but they do not aim to be a way to quantitatively rank and compare the algorithms (Figs. 3 and 4 do this instead with objective statistical measures). As we aimed to introduce a novel scientific problem with this paper, we decided to focus on robust quantitative macroscopic measures of diversity within these different classes, which could be used as a basis for further investigations and comparisons with other algorithms (Figs 3 and 4). \\n\\nHowever, we agree that visual intuitions like the ones formulated by R4 through observing the discovered patterns with \\\"human eyes\\\" could help guiding the design of novel quantitative measures in future work. 
In order to enable readers to forge their own intuitions and possibly design new measures from them, we have now released a dataset of all discovered patterns:\", \"https\": \"//drive.google.com/file/d/1ZhVG2_uTLaT4SMqj0wKTKn568Y2XaypU/view?usp=sharing\\nMoreover, we released an interactive website enabling to view all discovered patterns projected into their goal spaces for all goal exploration algorithms and experimental repetitions: https://projector.tensorflow.org/?config=https://raw.githubusercontent.com/intrinsically-motivated-discovery/intrinsically-motivated-discovery.github.io/master/assets/media/tensorboard/projector_config.json\"}
\\n\\nI would like to see a further analysis of maybe 10000s of such images generated, and an understanding of exactly why RGS produces the same kind of red linear patterns, and why HGS produces the distribution of pattern types in Figure 29, and why non-animal types differ in PGL vs HGS, and why high frequency spatial structure is lost in OGL. How robust are these over many runs? The results should NOT be shown just for the first repetition of the experiment but for all independent runs of the experiments, e.g averaged over 30 independent CPPN evolutions, for PGL, OGL, Random, and HGS!\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The focus of the presented paper is on formulating the automated discovery of self-organized patterns in high-dimensional dynamic systems. The introduced framework uses cellular automata (game of life) as a testbed for experimentation and evaluation and existing machine learning algorithms (POP-IMGEPs). The goal of the paper is to show that these algorithms can be used to discover and represent features of patterns. Moreover, an extension of SOTA algorithms is introduced and several approaches to define goal space representations are compared.\\n\\nOverall, I have the impression this is an interesting paper that could be accepted to ICLR. The idea of applying IMGEPs to explore parameters of a dynamic system is novel and interesting, which could also simulate further research in this field. Furthermore, the paper well-written, technically sound, and the results are interesting. The overall contribution of the paper is in applying IMGEP algorithms to exploring parameters of dynamic systems and in comparing different algorithms along with an extensive set of experiments. 
As a point of criticism, a lot of (interesting) material was pushed to the Appendix. Resolving the references makes reading the paper harder. Moreover, given that this paper has more than 35 pages appendix material, it seems this work would better be suited for a journal as for a conference. There is a reason for papers to have a page limit and this work circumvents this limit by presenting a lot of additional material. Therefore, I am not willing to strongly support this work.\", \"specific_comments\": [\"Section 3.1: It is not clear how the initial system state is established. In Section 3.1. the text states that 'parameters are randomly sampled and explored' before the process starts, but it is not clear why a random sampling is used and what this means for the subsequent sampling. Later in the text (3.3) it becomes more clear, but here this appears too unclear.\", \"Section 3.1: \\\"distribution over a hypercube in \\\\mathcal{T} chosen to be large enough to bias exploration towards the frontiers of known goals to incentivize diversity.\\\" This sentence is not clear and needs more details. How is the distribution chosen exactly?\", \"Section 3.2 appears a bit repetitive and could be more concise. I don't think it is necessary here to contrast manual vs learned features of the goal space.\", \"Section 3.2 (P3): the last sentence of this paragraph reads as if there exists no approaches for VEAs in online settings. This should be toned down or backed up by a reference.\", \"Section 3.2: (last sentence): it is not clear how the history is used exactly to train the network. Which strategy is used to sample from the history of observations?\", \"Section 3.3: What is meant by \\\"The CPPNs are used of the parameters \\\\{theta}\\\"? 
The details provided after this sentence are not clear and need more details.\", \"Section 4.2: Please provide more details what \\\"very large\\\" dataset means.\", \"Section 4.2: 'HGS algorithm' is not defined.\", \"Section 5: It seems unnecessary to explain what t-SNE does as a method.\"]}", "{\"rating\": \"8: Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper uses the continuous Game of Life as a testing ground for algorithms that discover diverse behaviors. The problem is interesting, under-explored, and rich. The combines a variety of interesting ideas including compositional pattern producing networks (CPPNs) to learn structured primitives. Although the authors do propose formal measures of behavioral diversity and so show performance improvements, at the end of the day this work, like much empirical work on generative adversarial networks, is drifting towards art -- where performance is ultimately judged by human eyes rather than quantiative metrics.\", \"comments\": \"The paper refers to hand-designed goal spaces and talks, on p28, about \\u201cthe statistical measures used to define the goal space\\u201d. At the same time, the analytic behavior space is also defined in terms of statistical measures, but it is *not* referred to as hand-designed. At this point, the profusion of spaces and measures means that I am no longer sure what counts as hand-crafted or not. Please clarify.\\nThe hypothesis on p34, sec E.4.2 that the VAE\\u2019s 8-dim bottleneck helps focus on animals rather than non-animals (which are differentiated more in terms of textures and details) is important and should be checked. \\nSome of the decisions about what to check and vary are unclear. 
For example, section E.1 considers the effect of different initializations (\\u201cpytorch\\u201d, \\u201cxavier\\u201d and \\u201ckaiming\\u201d). The choice of initialization matters mostly for improving gradients and thus the rate of convergence (or convergence at all) in deep nets. It\\u2019s not clear why initializations are a parameter to vary when considering diversity of solutions. Or, rather, why initializations are more interesting to consider than various other architectural considerations. More broadly, looking at Fig 17, the x-axis doesn\\u2019t make much sense. The experiments along the x-axis vary according to initialization, but also according to the nature of the goal space and other features. It seems a bit incoherent. \\n\\nOverall I think this is a good paper. The results are novel and even better, they are fun. However, the paper is extremely long, and it feels as though the authors have to some extent lost control of the material. I could add more comments but TL;DR it needs a lot of editing and pruning.\"}
rJe2syrtvS
The Ingredients of Real World Robotic Reinforcement Learning
[ "Henry Zhu", "Justin Yu", "Abhishek Gupta", "Dhruv Shah", "Kristian Hartikainen", "Avi Singh", "Vikash Kumar", "Sergey Levine" ]
The success of reinforcement learning in the real world has been limited to instrumented laboratory scenarios, often requiring arduous human supervision to enable continuous learning. In this work, we discuss the required elements of a robotic system that can continually and autonomously improve with data collected in the real world, and propose a particular instantiation of such a system. Subsequently, we investigate a number of challenges of learning without instrumentation -- including the lack of episodic resets, state estimation, and hand-engineered rewards -- and propose simple, scalable solutions to these challenges. We demonstrate the efficacy of our proposed system on dexterous robotic manipulation tasks in simulation and the real world, and also provide an insightful analysis and ablation study of the challenges associated with this learning paradigm.
[ "Reinforcement Learning", "Robotics" ]
Accept (Spotlight)
https://openreview.net/pdf?id=rJe2syrtvS
https://openreview.net/forum?id=rJe2syrtvS
ICLR.cc/2020/Conference
2020
{ "note_id": [ "Xsw8PQb0R", "r1xnxPghjS", "HylKcAZijH", "Skgx4yZqoH", "S1x6e1-5sB", "SJgaARl5oH", "r1gseP0nKH", "Hyl29EPXtr", "r1l7Uvpn_r" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798736158, 1573811955934, 1573752464811, 1573682983955, 1573682933119, 1573682900632, 1571772146932, 1571153043610, 1570719563072 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1930/Authors" ], [ "ICLR.cc/2020/Conference/Paper1930/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper1930/Authors" ], [ "ICLR.cc/2020/Conference/Paper1930/Authors" ], [ "ICLR.cc/2020/Conference/Paper1930/Authors" ], [ "ICLR.cc/2020/Conference/Paper1930/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper1930/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper1930/AnonReviewer3" ] ], "structured_content_str": [ "{\"decision\": \"Accept (Spotlight)\", \"comment\": \"This is a very interesting paper which discusses practical issues and solutions around deploying RL on real physical robotic systems, specifically involving questions on the use of raw sensory data, crafting reward functions, and not having resets at the end of episodes.\\n\\nMany of the issues raised in the reviews and discussion were concerned with experimental details and settings, as well as relation to different areas of related work. These were all sufficiently handled in the rebuttal, and all reviewers were in favour of acceptance.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Updates\", \"comment\": \"\\u201cI would recommend for more improvements for publication (or a future submission) you increase the number of trials, and/or use the bootstrap method Henderson employs to make better confidence intervals.\\u201d\\n-> We have attempted to run more random seeds since your comment to address these concerns. 
Due to limited time before the end of the rebuttal period, we have been able to complete 5 additional seeds on the bead manipulation task, but additional seeds for the other two tasks have not finished. We have updated Fig 7 accordingly. We will add in the remaining seeds once they have completed running. To make it more clear that our method provides statistically significant results, we have also updated Fig 7 to show 95% bootstrap confidence intervals. These plots make it clear that the previous insights carry over to the case with confidence intervals and additional seeds as well. \\n\\n\\u201crun VICE for as many hours as you do for your method\\u201d\\n-> We ran a longer run of VICE on hardware and updated the paper accordingly in Figs 8 and 14.\"}", "{\"title\": \"Thanks\", \"comment\": \"Thank you for the response,\\n\\nI didn't mean to suggest you were claiming VAE is the best for this application. It was more a question of what else you have tried and motivations from using VAE. Again, I think it is a fine choice but I appreciate the added discussion to highlight this is just an algorithmic choice.\\n\\nI'm happy with the added experiments in the appendix, and think this makes the work more concrete. I'm a bit worried about the lack of trials in the simulated domains (5 runs is not enough see \\\"Deep RL that Matters from Henderson https://www.aaai.org/ocs/index.php/AAAI/AAAI18/paper/viewPaper/16669). I would recommend for more improvements for publication (or a future submission) you increase the number of trials, and/or use the bootstrap method Henderson employs to make better confidence intervals.\\n\\nFor Figure 8 (for valve rotation), I would make sure to run VICE for as many hours as you do for your method, or mention why they are different.\\n\\nAgain, thank you for the response.\"}", "{\"title\": \"Response to Reviewer 3\", \"comment\": \"We thank the reviewer for their detailed, insightful and constructive feedback! 
We acknowledge a number of clarity issues in the presentation and positioning of our results, which make the actual results somewhat hard to understand. We have updated the paper to make several of the points much more clear and have run additional hardware experiments to address the points raised in the review, as described in detail below:\\n\\n\\u201cthey could say more about the goals they show to the system, etc.\\u201d\\n-> We have added some visualizations about goals provided to the system to Appendix C. \\n\\n\\u201cyou may instrument for the sake of science (to measure the value of what you are doing, even if the real-world system won't use this instrumentation).\\u201d \\n-> Yes, this is a good point! We have now performed these experiments on the hardware and have included additional comparisons to baselines on hardware in Section 6.3, Fig 8. We find that the same trends observed in simulation hold on the hardware as well. \\n\\n\\u201cIn Fig. 4, I would like to know what is the threshold for success.\\u201d\\n-> Fig. 4 analyzes the sample complexity for the task of valve rotation. The experiment is considered successful when the learned policy achieves average training performance of less than 0.15 in pose distance (defined in Appendix C.1.3) across 3 seeds. Fig. 4 has now been updated to clarify this.\\n\\n\\u201cIn Section 6.2 the authors could have performed much more detailed ablation studies and stress in more details the impact of using the VAE alone versus using the random perturbation controller alone\\u201d\\n-> We have modified the Figure 7 legend and caption to make it more legible, and a discussion on the effects of the individual components based on these ablation experiments is now included in Section 6.2 in the updated manuscript. We have also updated the results after removing a small visual artifact in the environment, which allows the baselines to perform a bit better, but still maintains the same trends. 
We agree that the presentation of data in Figure 7 was hard to parse, and many of the comparisons (including the two requested by the reviewer) that we did actually already perform were hard to discern from the figure. The methods marked [VAE + VICE] and [RND + VICE] show the performance curves corresponding to the ablations suggested. A discussion on comparisons to explicit goal-based reset mechanisms and goal-conditioned policies has also been added to Section 6.2.\\n\\n\\u201cthere is an issue about the positioning too: the authors fail to mention a huge body of literature trying to address very close or just similar questions...central motives of Developmental Robotics and some of its \\\"subfields\\\" \\n-> We have expanded our related work with appropriate discussion with respect to the field of developmental robotics. The goal of our work is to enable reinforcement learning systems to handle the practicalities of learning in the real world without human instrumentation or interruption, even for a single task setting, without multi-task considerations. The insights we make should also be applicable for developmental robotics algorithms! Though our investigation doesn\\u2019t touch on all aspects of developmental robotics such as lifelong learning, open-ended learning, psychology, cognition etc., our proposed work R3L does bear a strong relationship to continual learning, intrinsic motivation, perceptual development, and sensory-motor development involving proprioceptive manipulation. We thank the reviewer for bringing out this interesting connection, and have added appropriate citations in the text. 
\\n\\n\\u201cauthors could reconsider their framework from a multitask learning perspective...agent may learn various controllers to bring the system into various goal states and switching from goal to goal to prevent the system for keeping stuck close to some goal.\\u201d\\n-> We agree that this is indeed an interesting and valuable perspective on this problem, and we have added some discussion of this to Section 6.2. We found in our experimental study that when we consider the case of using 2 goals, and switching between them (the Eysenbach et al comparison in Fig 7), it was not as effective and robust as using the perturbation controller. While this scheme chooses between only 2 goal options, and a more involved scheme could be chosen to pick multiple different goals, the performance of such an algorithm is dependent on the specific choice of goals. We find that the simpler solution via the perturbation controller can be very effective without the need for multiple meaningful alternative goals to be specified, although a better algorithm for self-supervised multi-goal selection is an interesting avenue for future work.\"}", "{\"title\": \"Response to Reviewer 1\", \"comment\": \"We thank the reviewer for their encouraging feedback! We are excited about further exploring the possibilities of this line of research!\"}", "{\"title\": \"Response to Reviewer 2\", \"comment\": \"We thank the reviewer for their insightful and constructive feedback! We have run additional hardware comparisons and quantitative evaluations as requested (Section 6.3) and have updated the paper according to your suggestions and comments to better discuss related work. We respond to individual concerns in detail below:\\n\\n\\u201cdiscussion of the real world tasks from the appendix to appear in the main text.\\u201d\\n-> We have moved this discussion from the Appendix to Section 6.3. 
Additional comparisons to a VICE (Fu et al) baseline have been added for real world experiments in Section 6.3, Fig 8. We see that our algorithm is able to outperform this baseline on the real world tasks. \\n\\n\\u201cIt is not clear if a VAE is the best choice for unsupervised representation learning for RL agents.\\u201c\\n-> While a VAE works well in the domains we considered in this paper, we certainly agree that a VAE is not necessarily the optimal choice for all RL domains. We have updated Section 4.2 to reflect this explicitly, and have included references to Anand et al, Hjelm et al. and Lee et al as you pointed out as alternative methods for representation learning. We did not mean to claim that VAE\\u2019s were the only representation learning scheme that might suffice in this scenario, and many of the schemes suggested might also be effective. \\n\\n\\u201cI would like some more details on your simulation experiments\\u2026\\u201d\\n-> We have updated Section 6 and Appendix C to include details about the experimental setup, both in simulation and in the real world. We have updated Fig 7 after removing a small visual artifact in the environment, which allows the baselines to perform a bit better, but still maintains the same trends. \\n-- The plots are averaged over 5 random seeds for each method and task \\n-- The (shaded) error regions correspond to the variance of the seeds for each curve\\n-- Appendix B has been updated to include information on ranges of hyperparameters tuned, in addition to the optimal values used to generate the plots in figures 7 & 8\\n\\n\\u201cQ1: Did you try any of the other approaches on the real robotics system? Or was there no way to deploy these algorithms to your specific setup without instrumentation?\\u201d\\n-> Yes we did add a new real-world comparison to the VICE baseline, as requested, in Fig 8. We have updated Section 6.3 with a comparison in the real world on the valve rotation and bead manipulation tasks. 
A quantitative evaluation corroborates findings from the simulated environments and shows that our method outperforms these methods in terms of sample efficiency and robustness.\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"*Synopsis*:\\n This paper focuses on current limitations of deploying RL approaches onto real world robotic systems. They focus on three main points: the need to use raw sensory data collected by the robot, the difficulty of handcrafted reward functions without external feedback, the lack of algorithms which are robust outside of episodic learning. They propose a complete system which addresses these concerns, combining approaches from the literature and novel improvements. They then provide an empirical evaluation and ablation testing of their approach and other popular systems, and show a demonstration on a real robotic system.\", \"main_contributions\": \"- A discussion of the current limitations of RL on real robotic systems\\n - A framework for doing real world robotic RL without extra instrumentation (outside of the robot).\\n\\n *Review*: \\n Overall, I think the paper is well written and provides some nice analysis of the current state of RL and robotics. I am not as familiar with the RL for robotics literature, but from some minor snooping around I believe these ideas to be novel and useful for the community. I have a few suggestions for the authors, and a few critical pieces I would like added to the main text.\", \"critical_additions\": \"1. I would like some more details on your simulation experiments. Specifically:\\n - How many runs were your experiments? \\n - What are the error bars on your plots?\\n - What ranges of hyper-parameters did you test for tuning?\\n\\n 2. 
I would quite like the discussion of the real world tasks from the appendix to appear in the main text. Specifically, giving the evaluation metrics you mentioned in the appendix. \\n\\n Suggestions/Questions:\", \"s1\": \"It is not clear if a VAE is the best choice for unsupervised representation learning for RL agents. Although a reasonable choice, Yoshua Bengio recently released a look at several unsupervised techniques for representation learning in Atari which you may want to look at: https://arxiv.org/pdf/1906.08226.pdf.\", \"q1\": \"Did you try any of the other approaches on the real robotics system? Or was there no way to deploy these algorithms to your specific setup without instrumentation?\"}", "{\"rating\": \"8: Accept\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper presents approaches to handle three aspects of real-world RL on robotics: (1) learning from raw sensory inputs (2) minimal reward design effort (3) no manual resetting. Key components: (1) learn a perturbation policy that allows the main policy to explore a wide variety of states. (2) learn a variational autoencoder to transform images to low dimensional space.\\n\\nExperiments in simulation and on the physical robots are performed to demonstrate the effectiveness of these components. Closely related work is also used for comparison. The only concern I have is that the tasks considered involve robots that can automatically reset themselves pretty easily. I doubt that this will scale to unstable robots such as biped/quadruped, where once they fail, the recovering/resetting tasks will be as much or more difficult than the main locomotion tasks. 
But I understand this is too much to address in one paper and the limitation is also briefly discussed in the final section.\\n\\nOverall I think this is a good paper and valuable to the community.\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper takes seriously the question of having a robotic system learning continuously without manual reset nor state or reward engineering. The authors propose a first approach using vision-based SAC, shown visual goals and VICE, and show that it does not provide a satisfactory solution. Then they add a random perturbation controller which brings the robot or simulated system away from the goal and a VAE to encode a compressed state, and show that it works better.\\n\\nThe paper is a nice read and contains useful messages, thus I'm slightly in favor of accepting it, but I may easily change my mind as it suffers from serious weaknesses.\\n\\nFirst, and most importantly, the experimental study is very short, the authors have chosen to spend much more space on careful writing of the problem they are investigating.\\n\\nTo mention a few experimental weaknesses, in Section 6.2 the authors could have performed much more detailed ablation studies and stressed in more detail the impact of using the VAE alone versus using the random perturbation controller alone, they could say more about the goals they show to the system, etc. There is some information in Figure 7, but this information is not exploited in a detailed way. 
Furthermore, Figure 7 is far too small; it is hard to say from the legend which system is which.\\n\\nAbout Fig. 8, we just have a qualitative description; the authors claim that without instrumenting they cannot provide a quantitative study, which I don't find convincing: you may instrument for the sake of science (to measure the value of what you are doing, even if the real-world system won't use this instrumentation).\\n\\nSo the authors have chosen to spend more space on the positioning than on the empirical study, which may speak in favor of sending this paper to a journal or magazine rather than a technical conference. But there is an issue about the positioning too: the authors fail to mention a huge body of literature trying to address very close or just similar questions. Namely, their concern is one of the central leitmotifs of Developmental Robotics and some of its \\\"subfields\\\", such as Lifelong learning, Open-ended learning, Continual learning etc. The merit of the paper in this respect is to focus on a specific question and provide concrete results on this question, but this work should be positioned with respect to the broader approaches mentioned above. The authors will easily find plenty of references in these domains, I don't want to give my favorite selection here.\\n\\nKnowing more about the literature mentioned above, the authors could reconsider their framework from a multitask learning perspective: instead of a random perturbation controller, the agent may learn various controllers to bring the system into various goal states (using e.g. goal-conditioned policies), and switch from goal to goal to prevent the system from getting stuck close to some goal.\", \"more_local_points\": \"In the middle of page 5, it is said that the system does not learn properly just because it is stuck at the goal. This information comes late, and makes the global message weaker.\\n\\nIn Fig. 4, I would like to know what is the threshold for success.\"}" ] }
S1g2skStPB
Causal Discovery with Reinforcement Learning
[ "Shengyu Zhu", "Ignavier Ng", "Zhitang Chen" ]
Discovering causal structure among a set of variables is a fundamental problem in many empirical sciences. Traditional score-based causal discovery methods rely on various local heuristics to search for a Directed Acyclic Graph (DAG) according to a predefined score function. While these methods, e.g., greedy equivalence search, may have attractive results with infinite samples and certain model assumptions, they are less satisfactory in practice due to finite data and possible violation of assumptions. Motivated by recent advances in neural combinatorial optimization, we propose to use Reinforcement Learning (RL) to search for the DAG with the best scoring. Our encoder-decoder model takes observable data as input and generates graph adjacency matrices that are used to compute rewards. The reward incorporates both the predefined score function and two penalty terms for enforcing acyclicity. In contrast with typical RL applications where the goal is to learn a policy, we use RL as a search strategy and our final output would be the graph, among all graphs generated during training, that achieves the best reward. We conduct experiments on both synthetic and real datasets, and show that the proposed approach not only has an improved search ability but also allows for a flexible score function under the acyclicity constraint.
[ "causal discovery", "structure learning", "reinforcement learning", "directed acyclic graph" ]
Accept (Talk)
https://openreview.net/pdf?id=S1g2skStPB
https://openreview.net/forum?id=S1g2skStPB
ICLR.cc/2020/Conference
2020
{ "note_id": [ "uELYHwr-UE", "aF1c8IofC7", "NdZnF_RegQ", "AZ_pu0JgN_", "SklquiIIiH", "HJllWOIMir", "ryesqDUfsH", "BklImw8Msr", "rkeDLyLGor", "HyesVTrGoS", "H1eabJol9B", "HklEzhj6FB", "SJguv9wTtB" ], "note_type": [ "official_comment", "comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1580700138676, 1580676408530, 1577411980595, 1576798736127, 1573444466442, 1573181432275, 1573181331162, 1573181213545, 1573179215451, 1573178674963, 1572019973242, 1571826700491, 1571809887656 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Paper1929/Authors" ], [ "~Le_Song1" ], [ "ICLR.cc/2020/Conference/Paper1929/Authors" ], [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1929/Authors" ], [ "ICLR.cc/2020/Conference/Paper1929/Authors" ], [ "ICLR.cc/2020/Conference/Paper1929/Authors" ], [ "ICLR.cc/2020/Conference/Paper1929/Authors" ], [ "ICLR.cc/2020/Conference/Paper1929/Authors" ], [ "ICLR.cc/2020/Conference/Paper1929/Authors" ], [ "ICLR.cc/2020/Conference/Paper1929/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper1929/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper1929/AnonReviewer1" ] ], "structured_content_str": [ "{\"title\": \"Re: Related work\", \"comment\": \"Thanks for letting us know. My first impression is that the first one is indeed very relevant. We will have a careful read of both papers.\"}", "{\"title\": \"Related work\", \"comment\": \"This is a very interesting paper on using reinforcement learning to learn to solve combinatorial optimization problems involving graphs, in this particular case, the structure of the graphical models.\", \"there_are_two_highly_relevant_papers_which_are_worthwhile_discussing_in_context_and_can_enrich_the_current_paper\": \"1. Learning Combinatorial Optimization Algorithms over Graphs. Hanjun Dai, Elias B. 
Khalil, Yuyu Zhang, Bistra Dilkina, Le Song. NeurIPS 2017. \\n\\n2. GLAD: Learning Sparse Graph Recovery. Harsh Shrivastava, Xinshi Chen, Binghong Chen, Guanghui Lan, Srinivas Aluru, Han Liu, Le Song. ICLR 2020.\"}", "{\"title\": \"About Codes and Datasets\", \"comment\": \"[update: 03/19/2020]\\n\\nWe have released our codes, along with datasets and training logs in the paper, at https://github.com/huawei-noah/trustworthyAI/tree/master/Causal_Structure_Learning/Causal_Discovery_RL . \\n\\nPlease file an issue if you have any questions.\\n\\n---------------\\n\\nHi all, \\n\\nOur codes and datasets are currently undergoing the regular open-source process of Huawei Noah's Ark Lab, and will be made available as a repository at https://github.com/huawei-noah. We will also release the training logs of the experimental results that are reported in the paper.\\n\\nWe will let you know once the codes are released.\\n\\nBest Regards,\\nShengyu\"}", "{\"decision\": \"Accept (Talk)\", \"comment\": \"This paper proposes an RL-based structure search method for causal discovery. The reviewers and AC think that the idea of applying reinforcement learning to causal structure discovery is novel and intriguing. While there were initially some concerns regarding presentation of the results, these have been taken care of during the discussion period. The reviewers agree that this is a very good submission, which merits acceptance to ICLR-2020.\", \"title\": \"Paper Decision\"}", "{\"title\": \"We have uploaded a revised version\", \"comment\": [\"Dear reviewers,\", \"We have uploaded a revised version of our paper, following the suggestions/comments from all the reviewers. 
Some changes are:\", \"Section 3, Page 3: we add a sentence on the identifiability of Markov equivalence class, following Reviewer 1's comment;\", \"Section 4, Page 4: we revise the statement on self-attention scheme being capable of finding causal relationships, following Reviewer 2's suggestion;\", \"Section 5, Page 5: we add a sentence on the necessity of the acyclicity constraint from Zheng et al., according to Reviewer 1's comment;\", \"Section 5, Page 5: we add a sentence on the effect of picking $\\\\lambda_2=0$ or using only the indicator function in the reward, with a more detailed discussion given in Appendix C in the revised manuscript, according to Reviewer 2's comment;\", \"Section 5, Page 5: we add definitions for $\\\\Delta_1$ and $\\\\Delta_2$ and rephrase the paragraph for a better presentation, following Reviewer 2's comment;\", \"Section 6, Page 6: we add a sentence to state that a lower SHD indicates a better performance, according to Reviewer 2's comment.\", \"To Reviewer 3, we have not revised our statement on the finite sample result of GES in the new manuscript. We are eager to learn this finite sample result of GES and are happy to modify the statement if we misunderstand or omit related results in the Nandy et al. paper.\", \"We once again thank all the reviewers for their effort and many helpful comments/suggestions.\"]}", "{\"title\": \"We greatly appreciate the reviewer's comments/suggestions [Author Response 1/3]\", \"comment\": \"We greatly appreciate the reviewer's comments/suggestions, many of which will lead to a more readable and self-contained version of our paper. We attempt to address all the concerns in the following. In case we have omitted anything, please do let us know. The revised manuscript will be uploaded later this week.\\n\\n(1) 'In page 4 Encoder paragraph, the authors mention that the self-attention scheme is capable of finding the causal relationships. Why? 
In my opinion, the attention scheme only reflects the correlation relationship. The authors should give more clarifications to convince me about their beliefs.'\\n \\nThis is a good point. The statement in the submitted manuscript is indeed vague and confusing. We agree with the reviewer that the attention scheme reflects only correlation or association. Many existing score based methods exploit correlations together with structural constraints to discover the causal relations. For example, NOTEARS uses linear regression for fitting the causal function with least squares as loss function. Clearly, using only linear regression could not find causal relationships, and what enables linear regression to find causal graphs is the acyclicity constraint. Since the self-attention scheme is very powerful in capturing the (correlated or associated) relations amongst variables, we believe that it, together with the acyclic constraint, is capable of finding causal relationships. We will revise the statement accordingly. Thanks very much for this comment that makes our paper more rigorous. \\n \\n(2) \\u2018The authors first introduce the h(A) constraint in eqn. (4), and mentioned that only have that constraint would result in a large penalty weight. To solve this, the authors introduce the indicator function constraint. What if we only use the indicator function constraint? In this case, the equivalence is still satisfied, so I am confused about the motivation of imposing the h(A) constraint.\\u2019 \\n \\nThis is very insightful. With only the indicator function term, problems (1) and (6) can still be equivalent. Yet this fact does not imply that an RL algorithm would also work well. Actually our initial reward consisted of the score function and only the indicator term, which worked well for small graphs (with $\\\\leq 6$ nodes or so) but very poorly for larger ones. 
We observed that the RL algorithm, with randomly initialized NN weights, could hardly generate DAGs in this case when only the indicator term was used. We now attempt to illustrate why this is the case:\\n \\n(a) the directed graphs in our approach are randomly generated according to Bernoulli distributions, and without loss of generality, consider that each edge is drawn independently according to Bern(0.5). For small graphs (with $\\\\leq 6$ nodes), a few hundred samples of directed graphs are very likely to contain a DAG. Yet for large graphs, the probability of sampling a DAG is a lot lower. If no DAG is generated during training, then the RL agent can hardly learn to generate DAGs. Thus, we need the reward to guide the agent to produce DAGs. This, however, is difficult for large graphs with only the indicator term; see below.\\n\\n(b) for a cyclic directed graph with all possible directed edges in place and a cyclic directed graph with only two edges (that is, $i\\\\to j$ and $j\\\\to i$ for some $i\\\\neq j$), the latter is 'closer' to being acyclic in some sense, e.g., number of edge operations to make it acyclic. However, the first one is likely to have a lower BIC score when using linear regression for fitting causal relations, and yet the penalty terms of acyclicity are the same. In other words, the first graph usually has a better reward, which does not encourage the agent to generate DAGs. This fact motivates us to include the other penalty term that measures some 'distance' to being a DAG, so that the agent can be trained to produce graphs closer to acyclicity and finally to generate exact DAGs. With initialized NN weights, the generated graphs at early iterations can be 'far' from acyclicity for large problems, and we believe that using only the indicator function is insufficient.\\n \\nA question is then what if we start with a DAG, e.g., by initializing the probability of generating each edge to be very small. 
This setting did not lead to good performance, either. The generated directed graphs at early iterations can be very different from the true graphs, missing many true edges, and the resulting score is much higher than the optimum under the DAG constraint. With small penalty weights on the acyclicity terms, the agent would produce cyclic graphs with lower scores, which then reduces to case (b). On the other hand, large penalty weights, as we have discussed in the paper, limit exploration of the RL agent and usually result in DAGs whose scores are far from optimum.\n \nWe hope that the above discussion has addressed the reviewer\u2019s concern. We will add more discussion to make this point clear in the revised paper.\"}", "{\"title\": \"We greatly appreciate the reviewer's comments/suggestions [Author Response 2/3]\", \"comment\": \"(3) \u2018In the last paragraph of page 5, why the authors adjust the predefined scores to a certain range?\u2019 \n \nWe observe that the acyclicity penalty terms do not depend on the particular score functions. Consequently, even though we start with small penalty weights and then gradually increase them, the initial penalty weights may still be too high for other scores on the same problem, e.g., the independence-based score function, which in the ideal case should be zero, or the sample-average BIC score. Therefore, we adjust the score to a certain range so that the RL algorithm with some choice of penalty weights is likely to work for other score functions as well.\n \n(4) 'Whether the acyclic can be guaranteed after minimizing the negative reward function (the eqn.(6))? I.e., After the training process, whether the graph with the best reward can be theoretically guaranteed to be acyclic? '\n \nThis is also a good point. In theory, no, since policy gradient methods only guarantee local convergence.
But with the proposed strategy for penalty weights, the inferred graphs from the RL algorithms are all DAGs in our experiments. In practice, if the graph from the training process is not acyclic, we may rerun the algorithm, possibly with larger penalty weights, and/or try different NN weights as well. Post-processing methods like pruning can also be used to make the inferred graph acyclic.\n \n(5) 'In section 5.3, the authors mention that the generated graph may contain spurious edges? Whether the edges that in the cyclic are spurious? Whether the last pruning step contains pruning the cyclic path? '\n \nWe do not fully understand this question, but we try to address it as much as we can.\n\nA spurious edge is a false discovery, i.e., an edge in the estimated graph that does not exist in the true graph. Using BIC, negative log-likelihood, or other reconstruction-error based score functions is very likely to result in spurious edges in the finite sample regime. For example, the least squares loss would not increase, and usually decreases, if we include a non-parental node when fitting a causal relation. If the additional edge caused by this node does not violate the acyclicity constraint, we indeed get a better reward. So in practice with finite samples, spurious edges can hardly be avoided and post-processing is needed. \n \nWe use majority voting to remove some spurious edges, based on the observation that the top few graphs, ranked by their rewards, are usually structurally similar. However, a majority vote of several DAGs is not necessarily a DAG. As per our assumption that the true graph is a DAG, a cyclic path must contain at least one spurious edge, but a spurious edge does not necessarily lie in a cyclic path.
Since the pruning methods with a decreasing tolerance or an increasing threshold can lead to the empty graph, i.e., graphs that have no edges, we believe that, with proper tolerance or threshold, the methods will result in DAGs so that the cyclic path is removed.\\n \\nPlease let us know if we have addressed your concern.\\n \\n(6) 'In the experiment, the authors adopt three metrics. For better comparison, the author should clarify that: the smaller the FDR/SHD is, the better the performance, and the larger the TPR is, the better the performance.' \\n \\nThanks. We will add this clarification in the revised version.\\n \\n(7) 'From the experimental results, the proposed method seems more superiors under the non-linear model case. Why? Could the authors give a few sentences about the guidance of the model selection in the real-world? i.e., when to select the proposed RL-based method? And under which case to choose RL-BIC, and which case to selection RL-BIC2?'\\n \\nThis is an acute observation. We did not notice it previously. We do not think that this should be the case, although it appears so. Different methods may have different assumptions on data generating procedures, and if the ground truth meets the assumptions, these methods usually perform very well (but may still incur estimation errors due to finite samples). For example, ICA-LiNGAM recovers all the true edges without any false discoveries for LiNGAM data. For non-linear model cases, it may be because we use Gaussian process regression which is nonparametric and can fit causal relations well.\\n \\nAs to the model selection, we believe that this is related to what score functions perform well here. For example, if we know that the true data model does not follow an additive model, then it is very likely that the least squares loss or BIC is not appropriate. 
For RL-BIC or RL-BIC2, model selection then reduces to whether we shall use the least squares or the negative log-likelihood as our loss function.\"}", "{\"title\": \"We greatly appreciate the reviewer's comments/suggestions [Author Response 3/3]\", \"comment\": \"(8) 'What\u2019s training time, and how many samples are needed in the training process?'\n \nWe did not include the training time because we used different machines for our experiments. The implementation of benchmark methods can also be optimized to reduce time (e.g., DAG-GNN's codes did not work with GPU) and the results may be somewhat inaccurate. Here we just provide a rough description with 12-node linear data models:\n\n- Traditional methods PC and GES were run on a laptop with an Intel 4-core i7 CPU, and produced the estimated result within 10 seconds; \n- NOTEARS and ICA-LiNGAM were also run on the laptop and finished in 1~3 minutes (we set the maximum number of iterations of the ICA algorithm to be 20,000, ten times the default number used by the ICA-LiNGAM authors); \n- CAM was run on the same laptop and typically required 7~8 minutes; \n- Our algorithms RL-BIC and RL-BIC2 were run with an Intel Xeon 3.20GHz CPU and an Nvidia Quadro RTX 5000 GPU. Both methods took about 30~40 minutes with 12-node graphs and 20,000 iterations. For 30-node graphs and 30,000 iterations, they needed around 3 hours;\n- DAG-GNN took about 1 hour with the same Intel Xeon 3.20GHz CPU (their codes with the GPU option did not work; the algorithm in fact did not require such a long time to reach convergence, yet no early stopping option was provided in the codes); \n- GraN-DAG with the same CPU and GPU took about 20~30 minutes.\n \nRegarding the sample number, we have given the number of samples in each experiment description.\n \n(9) Minor: \n \n1. 'In the page 4 decoder section, the notation of enc_i and enc_j is not clarified. '\n\nActually, $enc_i$ is defined in the last sentence of the encoder part. \n \n2.
'On page 5, the \\Delta_1 and \\Delta_2 are not explained.'\n\nThanks for pointing this out. We will add a definition for the two notations.\n \n3. 'For better reading experience, in table 1,2,3,4, the authors should bold value that has the best performance.'\n \nThanks for this suggestion. We have considered doing so, but it is usually the case that a method that has the best TPR does not achieve the lowest FDR, and only making one in bold seems insufficient to evaluate the overall performance of a method. If possible, can the reviewer give further suggestions on this part? Thanks.\"}", "{\"title\": \"We are grateful to the reviewer's effort and the positive comment\", \"comment\": \"We are grateful for the reviewer's effort and the positive comment on our paper. We are revising the paper by taking into account all the reviewers' comments/suggestions, and the revised version will be uploaded at a later time within this week.\n\n* Regarding 'The novelty is somewhat limited, since the paper is combining two previously proposed ideas (combinatorial search and the acyclicity constraint) for structure learning':\n \nThese two ideas are indeed important to our RL-based approach to causal discovery. Here we would like to briefly discuss the necessity of the acyclicity constraint from Zheng et al. With the proposed penalty weights in our work, Zheng et al.\u2019s acyclicity constraint $h(A)$ is used to guide the RL agent to generate directed graphs \u2018closer\u2019 to being acyclic, and the indicator function w.r.t. acyclicity aims to induce exact DAGs. The major benefit of $h(A)$, or more precisely, $h(W\\circ W)$ ($W$ denotes the weighted adjacency matrix if it exists, e.g., for linear models, and $\\circ$ denotes the Hadamard product), is its smoothness, which enables continuous optimization for structure learning.
This property is not utilized in our approach, and we believe other acyclicity functions, which measure a certain \u2018distance\u2019 of a directed graph from being acyclic and do not need to be differentiable, can also be used here. We will add more discussion on this point in the revision.\n \n* Regarding 'The paper is loose with technical points. Specifically, the authors claim to use the additive noise model, but then make no restrictions on f(). In this setting, it is fairly well known that we can only hope to learn up to the Markov equivalence class (not the fully directed graph), but there is no mention of this in the paper':\n \nThanks for this helpful comment. We will add a sentence in Section 2 to state this result, along with the fact that we use fully identifiable models to generate observations in our experiments.\n \nWe once again appreciate the reviewer\u2019s effort in reviewing our paper.\"}", "{\"title\": \"We thank the reviewer for the positive feedback and would like the reviewer to provide more details\", \"comment\": \"We thank the reviewer for the positive feedback on our work.\n \nRegarding 'GES is not guaranteed in the finite sample regime' and the Nandy et al. paper 'High-dimensional consistency in score-based and hybrid structure learning': \n \nHere we aimed to state the consistency result of GES established by Chickering, and we find that the Nandy et al. paper is also about consistency of GES, but in high-dimensional settings. In our understanding, consistency means that the probability of correct estimation of the ground truth goes to one as the number of samples approaches infinity, and we believe that this is also the case with the Nandy et al. paper (please find below some quoted sentences where $n$ denotes the number of samples). However, we do not find a result or claim regarding guaranteed performance in the finite sample regime.
In case we have misunderstood or missed certain results, can the reviewer please give more details on 'the Nandy et al. paper tackles exactly the finite sample problem', and if possible, the corresponding theorems or claims in the Nandy et al. paper and other papers as well? \n \nAgain we greatly appreciate the reviewer's effort. We would definitely revise our statement if we have misunderstood/omitted the result of GES in the finite sample regime.\n \n-------------\n \nQuoted sentences from the arxiv version of the Nandy et al. paper, available at https://arxiv.org/pdf/1507.02608.pdf:\", \"page_3\": \"'In this paper, we prove high-dimensional consistency of GES, and we propose new hybrid algorithms based on GES that are consistent in several sparse high-dimensional settings and scale well to large sparse graphs. To the best of our knowledge, these are the first results on high-dimensional consistency of score-based and hybrid methods.'\", \"page_7\": \"'Consistency of $\\mathcal S$ assures that $\\mathcal G_0$ has a lower score than any DAG that is not in the Markov equivalence class of $\\mathcal G_0$, with probability approaching one as $n\\to\\infty$ (Proposition 8 of Chickering [2002b]).'\n \nPage 19, Theorem 5.2: 'Assume (A1)-(A6). Let $\\hat{\\mathcal C}_n$, $\\breve{\\mathcal C}_n$ and $\\tilde {\\mathcal C}_n$ be the outputs of ARGES-CIG based on $\\hat{\\mathcal I}_n$, ARGES-skeleton based on $\\hat{\\mathcal U}_n$ and GES respectively, with the scoring criterion $\\mathcal S_{\\lambda_n}$.
Then there exists a sequence $\\lambda_n\\to 0$ such that $\\lim_{n\\to\\infty}\\mathbb P(\\hat{\\mathcal C}_n=\\mathcal C_{n0})=\\lim_{n\\to\\infty}\\mathbb P(\\breve{\\mathcal C}_n=\\mathcal C_{n0})=\\lim_{n\\to\\infty}\\mathbb P(\\tilde{\\mathcal C}_n=\\mathcal C_{n0})=1.$'\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"In this paper, the authors propose an RL-based structure searching method for causal discovery. The authors reformulate the score-based causal discovery problem into an RL format, which includes reward-function re-design, hyper-parameter choice, and graph generation. To my knowledge, it\u2019s the first time that an RL algorithm is applied to the causal discovery area for structure searching.\n \nThe authors\u2019 contributions are:\n(1) re-design the reward function, which includes the traditional score function and the acyclic constraint\n\n(2) Theoretically prove that maximizing the reward function is equivalent to maximizing the original score function under some choices of the hyper-parameters.\n\n(3) Apply the REINFORCE gradient estimator to search the parameters related to adjacency matrix generation. \n\n(4) In the experiment, the authors conduct experiments on datasets which include both linear/non-linear models with Gaussian/Non-gaussian noise.\n\n(5) The authors make their code public for reproducibility.\n \nOverall, the idea of this paper is novel, and the experiment is comprehensive. I have the following concerns.\n \n(1) In page 4 Encoder paragraph, the authors mention that the self-attention scheme is capable of finding the causal relationships. Why?
In my opinion, the attention scheme only reflects the correlation relationship. The authors should give more clarifications to convince me about their beliefs.\\n \\n(2) The authors first introduce the h(A) constraint in eqn. (4), and mentioned that only have that constraint would result in a large penalty weight. To solve this, the authors introduce the indicator function constraint. What if we only use the indicator function constraint? In this case, the equivalence is still satisfied, so I am confused about the motivation of imposing the h(A) constraint.\\n \\n(3) In the last paragraph of page 5, why the authors adjust the predefined scores to a certain range?\\n \\n(4) Whether the acyclic can be guaranteed after minimizing the negative reward function (the eqn.(6))? I.e., After the training process, whether the graph with the best reward can be theoretically guaranteed to be acyclic?\\n \\n(5) In section 5.3, the authors mention that the generated graph may contain spurious edges? Whether the edges that in the cyclic are spurious? Whether the last pruning step contains pruning the cyclic path?\\n \\n \\n(6) In the experiment, the authors adopt three metrics. For better comparison, the author should clarify that: the smaller the FDR/SHD is, the better the performance, and the larger the TPR is, the better the performance.\\n\\n(7) From the experimental results, the proposed method seems more superiors under the non-linear model case. Why? Could the authors give a few sentences about the guidance of the model selection in the real-world? i.e., when to select the proposed RL-based method? And under which case to choose RL-BIC, and which case to selection RL-BIC2?\\n \\n(8) What\\u2019s training time, and how many samples are needed in the training process?\", \"minor\": \"1. In the page 4 decoder section, the notation of enc_i and enc_j is not clarified.\\n\\n2. On page 5, the \\\\Delta_1 and \\\\Delta_2 are not explained.\\n\\n3. 
For better reading experience, in table 1,2,3,4, the authors should bold value that has the best performance.\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #3\", \"review\": \"Update: after the revision, I have decided to increase my score to 8.\", \"original_comments\": \"In this paper, the authors proposed a new reinforcement learning based algorithm to learn causal graphical models. Simulations on real and synthetic data also show promise.\n\nPros\n\n1. It's great to see the authors have done a comprehensive comparison with the other methods, especially under different simulation scenarios.\n\n2. The novel idea of applying reinforcement learning to DAG search sounds intriguing. Reinforcement learning offers a powerful tool for policy evaluation and decision making. It\u2019s good to see that the author can successfully extend such a toolbox to the field of causal structure learning. To the best of the author\u2019s knowledge, such an idea has never been considered by previous work in causal graphical models.\n\nCons. \n\n1. In the introduction section, the authors claimed that \u201cGES is not guaranteed in the finite sample regime\u201d. This seems to be incorrect. For example, the Nandy et al. paper tackles exactly the finite sample problem. \n\nIn conclusion, overall this is a sensible idea, although some of the preliminaries still remain to be polished.\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This work addresses the task of causal discovery.
The proposed contribution is to apply prior work which uses reinforcement learning for combinatorial optimization to structure learning. Specifically, the proposed optimization problem seeks to maximize a penalized score criterion subject to the acyclicity constraint proposed by Zheng, et al. Empirical results show the proposed method performing favorably in contrast to prior art.\\n\\nOverall I think this is a sensible idea, and the authors do a nice job of exposition, and empirical evaluation.\", \"my_concerns_are_as_follows\": [\"The novelty is somewhat limited, since the paper is combining two previously proposed ideas (combinatorial search and the acyclicity constraint) for structure learning.\", \"The paper is loose with technical points. Specifically, the authors claim to use the additive noise model, but then make no restrictions on f(). In this setting, it is fairly well known that we can only hope to learn up to the Markov equivalence class (not the fully directed graph), but there is no mention of this in the paper.\", \"With all of this said, I think overall the paper is an interesting addition to the causal discovery literature.\"]}" ] }
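As a technical aside on this thread: the acyclicity constraint of Zheng et al., $h(W\circ W)$, which the responses above discuss at length, can be sketched in a few lines. The sketch below is our own illustration, not the authors' released code. It evaluates $h(W) = \mathrm{tr}(e^{W\circ W}) - d$ through a power series truncated at $k = d$; the truncation leaves the zero set unchanged, since any directed cycle among $d$ nodes has length at most $d$.

```python
import numpy as np
from math import factorial

def h(W):
    """Acyclicity measure in the spirit of Zheng et al. (NOTEARS):
    h(W) = tr(exp(W o W)) - d, computed via a power series truncated
    at k = d. tr((W o W)^k) sums weights of closed walks of length k,
    so the result is zero exactly when the graph of W is a DAG."""
    d = W.shape[0]
    A = W * W            # Hadamard product; entrywise non-negative
    M = np.eye(d)
    total = 0.0
    for k in range(1, d + 1):
        M = M @ A
        total += np.trace(M) / factorial(k)
    return total

dag = np.array([[0., 1., 1.],
                [0., 0., 1.],
                [0., 0., 0.]])     # strictly upper triangular: acyclic
cyclic = np.array([[0., 1., 0.],
                   [0., 0., 1.],
                   [1., 0., 0.]])  # 3-cycle 0 -> 1 -> 2 -> 0

print(h(dag))     # 0.0
print(h(cyclic))  # 0.5  (= tr(A^3)/3! = 3/6)
```

In NOTEARS the smoothness of this function is what enables continuous optimization; in the RL approach discussed above it instead serves as a graded penalty measuring how far a sampled graph is from acyclicity, alongside the non-smooth indicator term.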
BJlisySYPS
Modelling the influence of data structure on learning in neural networks
[ "S. Goldt", "M. Mézard", "F. Krzakala", "L. Zdeborová" ]
The lack of crisp mathematical models that capture the structure of real-world data sets is a major obstacle to the detailed theoretical understanding of deep neural networks. Here, we first demonstrate the effect of structured data sets by experimentally comparing the dynamics and the performance of two-layer networks trained on two different data sets: (i) an unstructured synthetic data set containing random i.i.d. inputs, and (ii) a simple canonical data set such as MNIST images. Our analysis reveals two phenomena related to the dynamics of the networks and their ability to generalise that only appear when training on structured data sets. Second, we introduce a generative model for data sets, where high-dimensional inputs lie on a lower-dimensional manifold and have labels that depend only on their position within this manifold. We call it the *hidden manifold model* and we experimentally demonstrate that training networks on data sets drawn from this model reproduces both the phenomena seen during training on MNIST.
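A minimal sketch of the hidden manifold idea described in this abstract: high-dimensional inputs generated from low-dimensional latent coordinates, with labels depending only on the position within the manifold. The dimensions, the tanh nonlinearity, and the scalings below are our own illustrative assumptions and may differ from the paper's exact definitions.

```python
import numpy as np

rng = np.random.default_rng(0)

n, N, D = 200, 500, 10   # samples, input dimension, manifold dimension (D << N)

# Latent coordinates and a fixed projection spanning the hidden manifold.
C = rng.normal(size=(n, D))
F = rng.normal(size=(D, N))

# High-dimensional inputs: a fixed elementwise nonlinearity of C F.
X = np.tanh(C @ F / np.sqrt(D))

# Labels depend only on the latent position, not on X directly.
w = rng.normal(size=D)
y = np.sign(C @ w / np.sqrt(D))

# Contrast with unstructured i.i.d. Gaussian inputs of the same shape.
X_iid = rng.normal(size=(n, N))

# Before the nonlinearity, the structured inputs lie exactly on a
# D-dimensional linear subspace of R^N; the i.i.d. inputs do not.
print(np.linalg.matrix_rank(C @ F))   # 10
print(np.linalg.matrix_rank(X_iid))   # 200 (limited only by the sample count)
```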
[ "Neural Networks", "Generative models", "Synthetic data sets", "Generalisation", "Stochastic Gradient descent" ]
Reject
https://openreview.net/pdf?id=BJlisySYPS
https://openreview.net/forum?id=BJlisySYPS
ICLR.cc/2020/Conference
2020
{ "note_id": [ "t3TV3FqukD", "HklA9S5hsH", "r1g-YBknoB", "ByleSIzooB", "Syx18KbjiS", "BkxrJY-sjH", "ryx4oO-iir", "HkeSXiWb9B", "BJePTrbx9H", "Skxrfl2ptB" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798736099, 1573852566163, 1573807480778, 1573754424503, 1573751110704, 1573751005047, 1573750939717, 1572047645222, 1571980735375, 1571827725386 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1928/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper1928/Authors" ], [ "ICLR.cc/2020/Conference/Paper1928/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper1928/Authors" ], [ "ICLR.cc/2020/Conference/Paper1928/Authors" ], [ "ICLR.cc/2020/Conference/Paper1928/Authors" ], [ "ICLR.cc/2020/Conference/Paper1928/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper1928/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper1928/AnonReviewer3" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"The paper examines the idea that real world data is highly structured / lies on a low-dimensional manifold. The authors show differences in neural network dynamics when trained on structured (MNIST) vs. unstructured datasets (random), and show that \\\"structure\\\" can be captured by their new \\\"hidden manifold\\\" generative model that explicitly considers some low-dimensional manifold.\\n\\nThe reviewers perceived a lack of actionable insights following the paper, since in general these ideas are known, and for MNIST to be a limited dataset, despite finding the paper generally clear and correct.\\n\\nFollowing the discussion, I must recommend rejection at this time, but highly encourage the authors to take the insights developed in the paper a bit further and submit to another venue. E.g. 
trying to improve our algorithms by considering the inductive bias of the structure of the hidden manifold, or developing a systematic and quantifiable notion of structure for many different datasets that correlates with difficulty of training, would both be great contributions.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Response to response\", \"comment\": \"I suppose my point was more of a meta-scientific one: many projects aim to identify something interesting by comparing between [synthetic toy dataset] and MNIST, and many such projects find something interesting. The moment those results are tried on [something more complicated than MNIST], they either fail, or the story becomes so muddy as to challenge the validity of the originally \"interesting thing\".\", \"if_i_can_compress_my_criticism_to_a_sentence\": \"It's difficult to evaluate the generality of a method when it's restricted to a single comparison between two datasets. I do not mean to discourage the authors--I think this line of work is quite promising!--I simply think it is premature, and would be *greatly* improved by analysis of more datasets.\"}", "{\"title\": \"Thank you for the clarification !\", \"comment\": \"We commented on the issue of realisability that you raised in that paragraph in our next paragraph!\"}", "{\"title\": \"Clarification\", \"comment\": \"> The referee states that he/she is not convinced by the significance of the latent task without giving any reason for this. Can the referee please express his/her reason for this so that we get a chance to address this criticism?\\n\\nJust for clarification, my reasoning was described in the next paragraph of the review comment.\"}", "{\"title\": \"Our response\", \"comment\": \"We thank the reviewer for his/her comments on our paper.\\n\\n1. We are not sure what the referee means by a \"blanket citation\". Can the referee be more concrete? We cite related work in the whole introduction, amounting to [33] papers.
In the paragraph titled \"related work\" \nwe merely summarised those that did not otherwise naturally fit into the introduction. We renamed the Introduction and that paragraph to avoid giving the wrong impression.\n\nThe only specific comment about our \"abuse of citations\" is the reference to work [1]. While this work is interesting, it has, respectfully, no significant connection to our work. We do not study generalization bounds, nor over-parametrization, in our paper. \n\n2. We did not mean to claim that the identification of this difference is novel to our paper; indeed, as the referee notes, we say the opposite. But we see how our presentation might have been misleading, and we adjusted the wording to make it clear that the identified differences are our method, while the main contribution is the model that reproduces them.\n\n3. In most of natural science, having a simple synthetic model reproducing the observed behaviour (or more of the observed behaviour than previous models) is seen as important progress, even when it comes with no theorems and even when it does not explain ALL possible observed phenomena.\", \"experiments\": \"We thank the referee for the comment concerning the suggested test with MNIST inputs. \nSince the reviews lead overall to the rejection of our paper, we will test this thoroughly later. But we strongly expect to find that the two independently trained students do agree on MNIST inputs, contrary to what the referee suggests (if we understood his/her point correctly). Our point is not that when you train on one distribution you will disagree on another (as, in our understanding, suggested by the referee), but that when you train on inputs from a low-dimensional manifold, you disagree outside of the manifold. The vanilla teacher-student model inputs are not on a low-dimensional manifold, and hence two independent students do agree, even on MNIST inputs.
The fact that the vanila teacher-student model has (or can have) spurious local minima does not imply that two randomly initialized students will yield diverse outputs. We show that on the vanilla teacher student two independently trained students will with high probability lead to agreement comparable to the generalization accuracy. I.e. we will not see the behaviour of good generalization and large disagreement because of existence of several spurious local minima. With exponentially rare initialization we could, but this is not what we test in our paper, where the system sizes are large enough that the results we observe basically concentrate from one realization to another. These concentration properties are well studied and proven in the large size limit for the vanilla teacher-student models.\\n\\nThe comment about theoretical understanding of the model is very fair and we agree. This is something we are working on and the 10 days we have for the revision are not enough to consolidate and report our theoretical findings.\"}", "{\"title\": \"Our response\", \"comment\": \"We thank the reviewer for his/her comments on our paper.\\n\\nWe did not mean to claim that the observation that data lie on lower-dimensional manifold is novel, we indeed cite work [7, 8, 9, 10] where this appeared. The same for the absence of plateau is not novel. We see how our \\\"main contribution\\\" paragraph might have lead to this misunderstanding and we changed it accordingly in the revised manuscript. Our main contribution is indeed the simple synthetic model reproducing these observations.\\n\\nThe referee states that he/she is not convinced by the significance of the latent task without giving any reason for this. Can the referee please express his/her reason for this so that we get a chance to address this criticism?\\n\\nThe referee says that according to him/her the main difference is in the realizability or not of the task. 
This is certainly a point worth studying more thoroughly. We have not seen the MNIST properties in the basic ways unrealizability is usually introduced in the teacher-student model. But showing that what we report cannot be explained via realizability is again not the point of the paper and would be a topic for another paper. \n\nWe do thank the reviewer for the list of 5 minor issues, which we corrected.\"}", "{\"title\": \"Our response\", \"comment\": \"We thank the reviewer for appreciating our work and summarising well what we aimed to do.\", \"concerning_our_reliance_on_mnist\": \"Our point is that there is already a noticeable difference between the behaviour of the canonical teacher-student model and learning with MNIST. In more complex datasets such as CIFAR there will only be more differences. Our main point is to construct a model that reproduces the behaviour observed in MNIST better than the canonical teacher-student model.\n\nWe are not sure that going further and identifying key differences between learning on MNIST and, say, CIFAR is an interesting next step. While the need for convolutional layers will almost surely play against the analytical tractability of an eventual corresponding model, the role of depth is for instance a direction that we highlighted as important. But this goes beyond the scope of the present paper.\"}
They approach this problem by looking at combinations of [iid gaussian, structure] inputs and [teacher, latent] tasks (for particular choices of \"teacher\", \"latent\", and \"structure\"). Finally, they identify that \"structure\" in the input space, and a notion of \"latent\"-ness in the task, seem crucial for a synthetic dataset to recapitulate the learning dynamics of a real-world dataset.\n\nThe experiments, exposition, and motivation are all exceedingly clear. My only reservations are about the scope of the experimentation and the strength of the conclusions of the paper for generally structured data.\n\nThus, I suggest a Weak Reject of this paper (though I would likely increase my rating to Weak Accept given my comments below).\n\nThe primary weakness of this paper is the over-reliance on the MNIST dataset, which is very nearly linearly separable. Thus, I strongly worry that any notions of latentness that work for MNIST might not transfer at all to more complicated data regimes---i.e., while I believe the authors have identified and patched an interesting gap between the learning dynamics of iid data and MNIST, I'm not sure if there still isn't a gap between something like MNIST and, say, CIFAR-10. I would raise my score to an accept if the authors carried out their analysis on (at least) CIFAR-10 as well, and even higher if the authors greatly expanded their experiments.\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper studies influences of data structures on neural network learning.
The data structures discussed in this paper are structured inputs (concentrating on a low-dimensional manifold) versus unstructured ones, as well as the teacher task (labels are obtained as a function of high-dimensional inputs) versus the latent task (labels are obtained as a function of the lower-dimensional latent representation of inputs). The introduced model, the hidden manifold model, which is a latent task with structured inputs, is claimed to reproduce two features found in learning of the MNIST data set, whereas the teacher task with unstructured inputs does not.\\n\\nThe observation that typical real-world datasets are concentrated on a lower-dimensional manifold is not novel, and it is also well expected that networks trained with such a dataset would exhibit different behaviors for inputs outside such a lower-dimensional manifold. The other observation that in real-world learning tasks one rarely encounters plateaux is not novel either. The possible novelty of this paper would thus be in the proposal of the hidden manifold model, but I am not convinced with the significance of the latent task. Because of these, I would judge possible contributions of this paper rather weak, so that I would not be able to recommend acceptance of this paper.\\n\\nI think that the main difference between the authors\\u2019 \\u201cteacher task\\u201d and \\u201clatent task\\u201d lies in realizability of the underlying function: The teacher task is certainly realizable once the number of hidden units exceeds that of the teacher, whereas we are not sure about the realizability of the latent tasks. There might even be different levels of unrealizability which can affect learning. Anyway, the teacher versus latent distinction of learning tasks, as introduced in this paper, should be best regarded, at least in its current status, as a working hypothesis which would need more investigation. 
I would agree that this paper puts a step forward, but does not arrive at any decisive conclusion yet.\\n\\nPage 3, line 26: The assumption that g should act componentwise does not seem needed because in equation (1) it acts on a scalar.\\nPage 4, line 16: there exist(s) a student network\\nPage 6, line 20: gradient descent methods such (as) natural gradient descent\\nPage 7, line 7: I do not understand what is meant by \\u201cby dividing all entries by the covariance of the entire matrix\\u201d. An entry should be a real number, whereas the covariance should be a matrix.\\nPage 7, line 12: cf. (left -> right) of Fig. 1\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper studies how different settings of data structure affect learning of neural networks and how to mimic the behavior of neural networks seen on real datasets (e.g. MNIST) when learning on a synthetic one.\\nI would recommend rejecting the paper due to several issues pointed out below.\\n1. The paper abuses blanket citation making it difficult to identify and verify contribution and conduct a comparison with existing literature. Related work section amounts only to one paragraph in size. It looks questionable, that nobody ever treated the problem of the generalization ability of neural networks from a point of the data manifold properties. After a quick search, for example, [1] provides an in-depth analysis and generalization bounds for two-layered neural networks and provides a data complexity measure that can discriminate between random labels (which are equivalent to the outputs of randomly initialized fixed teacher networks in this work) and true labels on structured datasets like MNIST and CIFAR. The paper fails to cite this work as well. \\n2. 
The paper claims to experimentally identify key differences in the training dynamics of neural networks in teacher-student setup and on an MNIST task (binary classification into even and odd numbers). One of the differences is the presence of plateaus in the learning curves in the vanilla teacher-student setup, however, as the paper states itself - this is a well established and studied characteristic of the setup, not something unexpected and new.\\n3. Overall, I have yet to see actionable development in this paper as it consists of observations that have been noticed and studied previously and presents no attempt at explanation or rigorous analysis.\", \"as_for_the_experiments\": \"The setting of the experiment in section 3.1 leaves space for improvement. For example, it would be interesting to see whether two neural networks learned in the vanilla teacher-student setup on the iid random inputs agree on the MNIST inputs (i.e. inverting the experiment in 3.1) as a sanity check for other factors interplay since MNIST inputs would be an example of the out of distribution inputs for the networks learned on iid random examples used in the experiment. As another pointer, [2] shows that even when training input is from a standard normal distribution, the problem can have spurious local minima, implying that even on unstructured training datasets, neural networks from different initializations yield diverse outputs for out of distribution inputs, not agreeing among each other.\\n\\nI would also like to see concrete examples when and how the hidden manifold model may benefit theoretical understanding or practical knowledge on, for example, how to cook a dataset or check if the dataset admits/affects learning.\\n\\nReferences\\n[1] Arora, Sanjeev, et al. Fine-grained analysis of optimization and generalization for overparameterized two-layer neural networks.\\n[2] Safran, Itay, and Ohad Shamir. Spurious local minima are common in two-layer relu neural networks.\"}" ] }
rJgsskrFwH
Scaling Autoregressive Video Models
[ "Dirk Weissenborn", "Oscar Täckström", "Jakob Uszkoreit" ]
Due to the statistical complexity of video, the high degree of inherent stochasticity, and the sheer amount of data, generating natural video remains a challenging task. State-of-the-art video generation models attempt to address these issues by combining sometimes complex, often video-specific neural network architectures, latent variable models, adversarial training and a range of other methods. Despite their often high complexity, these approaches still fall short of generating high quality video continuations outside of narrow domains and often struggle with fidelity. In contrast, we show that conceptually simple, autoregressive video generation models based on a three-dimensional self-attention mechanism achieve highly competitive results across multiple metrics on popular benchmark datasets for which they produce continuations of high fidelity and realism. Furthermore, we find that our models are capable of producing diverse and surprisingly realistic continuations on a subset of videos from Kinetics, a large scale action recognition dataset comprised of YouTube videos exhibiting phenomena such as camera movement, complex object interactions and diverse human movement. To our knowledge, this is the first promising application of video-generation models to videos of this complexity.
[ "autoregressive models", "video prediction", "generative models", "video generation" ]
Accept (Spotlight)
https://openreview.net/pdf?id=rJgsskrFwH
https://openreview.net/forum?id=rJgsskrFwH
ICLR.cc/2020/Conference
2020
{ "note_id": [ "fMw0PHMXls", "Hyg-BQCqsB", "HyeO0bRqjB", "rkl5dCEDsB", "SyxPJQzwsr", "HyeTaL-PjS", "BygwQS-DiS", "B1g90yoRtH", "r1gMc17TtH", "ryeoIONqKB" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798736069, 1573737272551, 1573736912124, 1573502578105, 1573491422955, 1573488325342, 1573487902562, 1571889105962, 1571790730111, 1571600466667 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1926/Authors" ], [ "ICLR.cc/2020/Conference/Paper1926/Authors" ], [ "ICLR.cc/2020/Conference/Paper1926/Authors" ], [ "ICLR.cc/2020/Conference/Paper1926/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper1926/Authors" ], [ "ICLR.cc/2020/Conference/Paper1926/Authors" ], [ "ICLR.cc/2020/Conference/Paper1926/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper1926/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper1926/AnonReviewer1" ] ], "structured_content_str": [ "{\"decision\": \"Accept (Spotlight)\", \"comment\": \"This paper presents an approach for scalable autoregressive video generation based on a three-dimensional self-attention mechanism. As rightly pointed out by R3, the proposed approach \\u2019is individually close to ideas proposed elsewhere before in other forms ... but this paper does the important engineering work of selecting and combining these ideas in this specific video synthesis problem setting.\\u2019\\nThe proposed method is relevant and well-motivated, and the experimental results are strong. All reviewers agree that experiments on the Kinetics dataset are particularly appealing. In the initial evaluation, the reviewers have raised several concerns such as performance metrics, ablation study, training time comparison, empirical evaluation of the baseline methods on Kinetics, that were addressed by the authors in the rebuttal. 
\\nIn conclusion, all three reviewers were convinced by the author\\u2019s rebuttal, and AC recommends acceptance of this paper \\u2013 congratulations to the authors!\", \"title\": \"Paper Decision\"}", "{\"title\": \"Corrections.\", \"comment\": \"We updated our related work section slightly so that it becomes clearer that many of the mentioned works (after discussing VAE based approaches) are actually completely different directions and not additions upon VAEs.\\n\\nWe added a reference for Figure 2 in section 4.2, paragraph \\\"Qualitative Observations\\\".\"}", "{\"title\": \"RE: Official Blind Review #2 Part 2\", \"comment\": \"Additional clarification concerning the definition of U_k, N_v and P in section 3.3.\\n\\nWe define the dimensionality of U_k and P in Eq. 8. These are trainable parameters.\\n\\nN_v is actually defined in the text as \\\"N_v=16\\\". It is 16 because we predict 6 channels (\\\"N_c=6\\\") of 4bit per channel. 4bit translates to 2^4=16 potential values.\"}", "{\"title\": \"Re: Official Blind Review #3\", \"comment\": \"Thank you for your detailed comments and recommendations for improvement in clarity! We will try to clarify the terms in the final version.\\n\\n== Model size, training- and generation time ==\\nWe would like to point out that training time is not a problem specific to autoregressive models - on the contrary since there are no latent variables to infer and no recurrence to unroll, fully attention-based autoregressive models are among the most efficient to train as gradient computation is completely parallel across the 3D volume during training.\\n\\nWe agree that generation time is currently a considerable practical limitation of these methods. 
However, as mentioned in the paper, we believe that parallel generation methods (e.g., Stern et al., 2018) and low-latency hardware could bring this down substantially.\\n\\nRegarding model size, note that VideoFlow (Kumar et al., 2019), for instance, has about the same number of parameters as our base models, yet our perplexity is much lower, our generated videos have much better fidelity and maintain long-range temporal dependencies (e.g., objects hidden by the robot arm for multiple frames) better.\", \"update\": \"The authors of VideoFlow corrected their initial response to us regarding model size. Though still slightly higher, it turns out that their models' number of parameters is in the same ballpark as our base models. Hence, we corrected our statements above.\\n\\n== Comparison on Kinetics ==\\nOur aim with this work is to push the limits of autoregressive models and demonstrate their effectiveness as baselines for competitive video prediction, as illustrated by the experiments on BAIR, while also providing momentum towards exploring much more challenging tasks. We would definitely be interested in seeing how other methods would perform on the Kinetics dataset and hope that the community will take on this challenge.\\n\\n== Block-local attention ==\\nBlock-local attention is necessary to limit memory requirements as attention is quadratic in the number of pixels, which becomes prohibitive for 3D volumes. Block-local attention brings this down to linear complexity, similarly to the concurrently proposed flattened sparse attention of Child et al. (2019), while maintaining the explicit 3D structure of videos and not requiring any custom kernels.\\n\\nWe did experiment with varying block sizes in the time- and space dimensions across different layers and found the model robust to this choice. 
It seems that what matters is that there is a sufficient connectivity between pixels across the video volume, rather than the exact choice of per-layer connectivity.\\n\\n== Ablation and relative-position prior ==\\nTo clarify, the relative position attention-bias term does not enforce a local preference - it is a learned parameter which simply gives the model capacity to take relative position information into account. Note that we do ablate the number of attention heads, number of layers and the hidden size in Table 3 of the appendix, where we find the hidden size to be the most effective way to improve perplexity.\\n\\n== Why focus on video prediction ==\\nThis is an interesting question. We have focused on video generation conditioning on an initial frame to stay comparable to existing work. Completely free video generation is much more difficult to evaluate, beyond visual inspection and perplexity. By conditioning on an initial frame, measures such as FVD are much more informative. We also believe that predicting future frames is a more practically interesting task. However, we believe that autoregressive models could be competitive for unconditional generation as well, following the results on unconditional image generation by Menick & Kalchbrenner (2019).\\n\\n== Lower perplexity on Kinetics ==\\nThis is indeed an intriguing finding. There are many potential reasons. However, we think this might be due to the lower frame-rate in the BAIR robotic pushing dataset (10 frames per second) compared to Kinetics (25), resulting in faster movement between frames.\"}", "{\"title\": \"The response is satisfactory\", \"comment\": \"Thanks for the response!\\n\\n- Thanks for going through the effort to open-source the code!\\n\\n- I indeed somehow missed the videos, thanks for pointing out the link! \\n\\n== Aliasing effects ==\\nI agree that the effects are visible to a much smaller extent on the BAIR dataset, which indeed very likely indicates underfitting. 
The explanation seems plausible. I want to note that I think the effects are still visible on BAIR, such as in Fig 8, rows 6,7 from the top, the shadow border becomes a straight line over time.\\n\\n== Combination with VAEs or normalizing flows ==\\nThe response convinces me that the independence assumptions might not be the most pressing problem. The problem of learning spatially or temporally distant dependencies seems to be a much harder one, and current latent variable approaches also commonly suffer from this problem.\"}", "{\"title\": \"RE: Official Blind Review #1\", \"comment\": \"We thank the reviewer for detailed review and comments. We will try to answer open questions as best as possible in the following.\\n\\n== Open Source Code == \\nWe aim to open source code as soon as possible, hopefully in time for the publication.\\n\\n== Videos ==\\nA link to videos is actually provided in the beginning of section 4.\\n\\n== Corrections ==\\nWe are sorry for the incorrect associations in our related work section and will update the paper as soon as possible. We will also make sure to mention Figure 2a in the main text.\\n\\n== Aliasing effects ==\\nIndeed these effects are visible. However, the per-frame systematic independences are probably not the (sole) cause, as there should actually be very few. If it was, a slightly adapted model, using a masked CNN of slightly larger spatial receptive field should be able to prevent such unfortunate behaviour. However, we believe that this is due to the fact that indirect connections between pixels over multiple layers might sometimes be too weak to establish dependence. Interestingly, though, we did not observe any visual deterioration on BAIR robotic pushing, indicating that these effects might actually be due to under-parameterization. 
In fact, we believe that we might need to scale models even further to achieve strong performance on datasets of the complexity of Kinetics.\\n\\n== Combination with VAEs or normalizing flows ==\\nCombining AR models with more complicated approaches such as VAEs or normalizing flows to alleviate the systematic independence problem would indeed be an interesting research direction. However, note that it is possible to design AR models (e.g., Image Transformer for images) that do not exhibit such independence assumptions. Unfortunately, we did not find a viable solution that scaled well to TPUs by the time of writing this paper.\"}", "{\"title\": \"RE: Official Blind Review #2\", \"comment\": \"We thank the reviewer for their thorough review and try to address the questions and concerns in the following.\\n\\n== Comparison to prior work on Kinetics ==\\nWe would definitely be interested in seeing how these methods would perform on this dataset. However, setting up these methods for large-scale training and hyper-parameter tuning to allow for a fair comparison would require a substantial engineering effort and time to experiment. Given the effort required to run all of the experiments in the paper, we determined this to be out of scope for the current work. Our aim with this work is to push the limits of autoregressive models and demonstrate their effectiveness as baselines for competitive video prediction, as illustrated by the experiments on BAIR, while also providing momentum towards exploring much more challenging tasks.\\n\\n== PSNR and SSIM ==\\nWe justify not including PSNR and SSIM in section 4.1 (Extrinsic Evaluation). To summarize, prior work has found that those metrics do not correlate well with perceptual quality (see for instance Lee et al., 2018, Unterthiner et al., 2019). In particular, VAEs perform very well on SSIM and PSNR on BAIR, despite the actual videos being quite blurry. 
In any case, we found that in terms of these metrics our models actually performed better on BAIR than SAVP, but still worse than VAE-based models. We discussed including those results but refrained from it in the interest of brevity and because they can be misleading, as demonstrated clearly by Lee et al., 2018.\\n\\n== Only single frame priming ==\\nThere have been discrepancies between evaluation protocols in prior work (e.g., between SAVP and VideoFlow). We settled for priming on a single frame because we found that our model was able to handle this (harder) setting quite well. Conditioning on 2 frames or more improves our scores slightly, but due to the inherent stochasticity of the robot arm, the difference is negligible.\\n\\n== How can the conditioning frame be fed into the block-local self-attention module? ==\\nWe actually do not really condition our model on initial frames using another prefix encoder. Instead we only have a single decoder for which we fix the given prefix pixels to the ground truth. This is referred to as \\\"priming\\\" in the paper. This means that the hidden representations of the initial priming frames are computed using the ground truth instead of generated pixels.\\n\\n== Channel Splitting ==\\nThe results of Menick & Kalchbrenner (2019) for image generation suggested that encoding and decoding the first four bits for each channel prior to the four fine bits is important. The argument is that the fine bits are quite easy to predict conditioned on the coarse bits. However, in our experiments we found only slight improvements from this splitting. On the other hand, splitting channels does improve the memory footprint in the slice encoder (thanks to a smaller one-hot encoding), so we kept this setup for all experiments.\\n\\n== How to aggregate different video blocks ==\\nVideos are divided into blocks before each layer and subsequently reunited after each layer. 
Crucially, the shapes of the blocks differ at every layer to allow for efficiently connecting pixels (indirectly) over large distances. For instance, consider a pixel at position [n,m] in a 2D image which we would like to connect to a pixel at position [1,1]. We can do this directly if the block is large enough (at least of size [n,m]). However, this becomes intractable for large values of n,m due to the quadratic nature of attention. An alternative is to run one layer with self-attention in blocks of [1,m] --- which would connect pixel [1,m] to pixel [1,1] (among others of course) --- followed by self-attention in blocks of size [n,1] --- connecting pixel [n,m] with [1,m] which itself is already connected to [1,1]. This way, information can flow from pixel [1,1] to pixel [n,m] through pixel [1,m].\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This work proposes an autoregressive video generation model, which is based on a newly proposed three-dimensional self-attention mechanism. It generalizes the Transformer architecture of Vaswani et al. (2017) to spatiotemporal data (video). The original Transformer applies self-attention among different words in a sentence. Considering the larger scale of video, this work proposes to divide it into small blocks and apply self-attention (part of the block-local self-attention module) on each block. At the same time, it addresses the problem of information exchange between blocks by spatiotemporal sub-scaling (described in section 3.2). 
The proposed method achieves competitive results across multiple metrics on popular benchmark datasets (BAIR Robot Pushing and KINETICS), for which they produce continuations of high fidelity.\", \"some_questions\": [\"The proposed model is claimed to achieve competitive results across multiple metrics on popular benchmark datasets. However, it only compares with state-of-the-art models on the BAIR Robot Pushing dataset (for the other dataset, the author only compares with variations of the proposed model). Further, the author only reports the results of Bits/dim and FVD, instead of PSNR and SSIM, which are reported in the original papers. Any justification for this? Though FVD has its own advantages, showing PSNR and/or SSIM at the same time would help us get a better sense of the performance.\", \"In table 1 (left) in section 4.2, the author mentions that results from all the state-of-the-art models are not strictly comparable, since prior works use longer priming sequences of two or three frames, whereas the proposed models only observe a single prime frame. I am confused about why the proposed model can only see a single frame.\", \"The proposed block-local self-attention module works by dividing video into small blocks, each of which seems to be a matrix of 3 or 4 dimensions (t,h,w,c). However, in the experiment, the input of the model for the BAIR Robot Pushing dataset is the first frame. How can this frame be fed into the block-local self-attention module?\", \"In section 3.3, it splits the 3x8-bit RGB channel into 6x4-bit channels. It would be better if the author could show an example and clarify the advantages.\", \"In section 3.3, U_k, N_v and P seem not to be defined in the context.\", \"Videos are divided into small blocks and fed into the block-local self-attention module separately. 
Then, I\\u2019m confused about how to aggregate these different blocks together to predict future frames.\", \"I would like to raise my score if the author can address my questions.\"]}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper presents an approach for scalable autoregressive models for video synthesis. Key to the approach is a form of 3D (2 space and one time) self-attention that operates in a softly local manner (through a bias on the attention weights that makes them tend to prefer nearby connections), and also limits its field of view to a specific 3D sub-\\\"block\\\" of video at each layer for scalability. They also propose a clever ordering for autoregressive synthesis of the video, subsampling spatially and temporally to generate multiple slices that are synthesized autoregressively one after the other. Each of these ideas is individually close to ideas proposed elsewhere before in other forms, as the authors themselves acknowledge [Vaswani et al 2017, Parmar et al 2018, Parikh et al 2016, Menick et al 2019], but this paper does the important engineering work of selecting and combining these ideas in this specific video synthesis problem setting.\\n\\nResults on standard datasets for video generation match up to and/or surpass prior methods, in line with prior work on autoregressive image generation that has been shown to do very well on similar metrics (perplexity and FID). 
What is perhaps more interesting is that this paper presents initial promising results for open-world Youtube video settings (Kinetics dataset) that have not been evaluated systematically in any prior work in this area, to my knowledge.\\n\\nThe downsides of this paper are largely common to this method class (autoregressive generative models): training time (one of their models is \\\"trained in parallel on 128 TPU v3 instances for 1M steps\\\"), inference time (four short 64x64 video clips of 30 frames take 8 mins to generate on a Tesla V100), and model sizes (373M parameters for the Kinetics model). However, this does not take away from the contributions made here, that make it possible at all to train an autoregressive model of this size. \\n\\nOn the experiments, some questions, comments, and suggestions that the authors might consider addressing:\\n- How well do methods like SVG, SAVP, SV2P do on Kinetics, for comparison? It would be still more interesting if those models were scaled to have similar sizes to the large model in this work. While these methods have never been evaluated before on such unconstrained data, it is not clearly established that they do not work at all.\\n- To what extent does the blocking help, and when does it breaks down? e.g. how many layers/how large do blocks have to be for the idea of using different block sizes to suffice for smooth video synthesis? What happens when the blocking idea is not used at all?\\n- Other choices that aren't ablated in experiments: the choice of a local preference using the bias term in attention, the Transformer-style multi-attention heads. I do understand that these models are expensive to train and evaluate, but perhaps a smaller dataset might still suffice to demonstrate the value of these choices.\\n- Why is the proposed approach evaluated only on video prediction? 
Could it not be used for video generation without conditioning or with class conditioning?\\n- It is surprising to me that the perplexity of Kinetics models is lower than on BAIR. Is there a reasonable explanation?\\n\\nWriting and presentation are good for the most part, despite the main paper being dense with details and multiple fairly involved ideas. I particularly enjoyed parts of related work, the illustration of slicing in Fig 1, and the illustrative examples in Fig 3. \\n\\nI would suggest however, that the paper might benefit from placing Sec 3.2, which describes the framework, before Sec 3.1. Fig 1 also belongs closer to Sec 3.2 anyway.\\n\\nThere are also terms/phrases I don't understand despite being reasonably familiar with the field like \\\"positional embbeddings\\\" (Sec 3.2). I also don't understand the need for \\\"one-hot encoding of the discretized pixel intensities\\\" (in that same paragraph). As a more minor comment, a footnote 1 before Eq 1 declares that capital letters denote matrices right before using capital letters to denote constants (T, H, W etc.).\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"Summary \\nThis paper presents a pixel-autoregressive model for video generation, in the spirit of VPN (Kalchbrenner\\u201916). The proposed method uses video transformers and is made computationally efficient by extending block-local attention (Parmar\\u201918, Chen\\u201918) and sub-scaling (Menick\\u201919) to 3D volumes. The block-local attention is separable, meaning that in theory it is possible to connect any two pixels through a sequence of block-local layers. However, for efficient parallelization implemented via a masking mechanism it is necessary to ignore certain connections, introducing independence assumptions. 
The model is shown to substantially exceed state-of-the-art in terms of likelihood as well as quantitative and qualitative visual quality on several datasets, including the very challenging Kinetics-600. Interestingly, it is shown that the model with spatiotemporal subscaling is more robust to higher generation temperatures, which could imply robustness to accumulating errors. \\n\\nDecision\\nThe paper proposes a well-motivated method backed by solid state-of-the-art results. I recommend accept.\\n\\nPros\\n- The proposed method is relevant and well-motivated.\\n- The experimental results are strong.\\n\\nCons\\n- The paper novelty is somewhat limited as it is mostly a combination of previously existing techniques.\\n- The paper does not provide code which makes the results not easily reproducible. I think a minimal example of the code should be provided that is trainable at least on a simple dataset.\\n\\nQuestions\\n- No videos are provided. Please provide an (anonymous immutable) link to video results.\\n- Strong aliasing artifacts can be seen in the supplement on the Kinetics data, such as vegetables becoming increasingly \\u201cblocky\\u201d as well as general cube-like aliasing artifacts in Fig. 9. This indicates that the introduced independence assumptions are likely hurting the video quality. The paper discusses this in the appendix C, stating that there seems to be no remedy for the independence assumptions that does not increase the computational cost. However, this is exactly the problem that latent variable models such as variational inference or normalizing flows are designed to address. Would a certain combination of latent variable models with the proposed autoregressive approach alleviate these issues?\\n\\nMinor comments\\n- Contrary to the summary in the related work section, Kumar\\u201919 does not use variational inference and operates purely on the normalizing flows technique. 
Similarly, Mathieu\\u201916 and Vondrick\\u201916 do not use variational inference either, instead relying on adversarial techniques. The paper correctly states that Lee\\u201918, Castrejon\\u201919 use variational inference.\\n- Figure 2 is never referred to in the text.\"}" ] }
S1eqj1SKvr
TOWARDS FEATURE SPACE ADVERSARIAL ATTACK
[ "Qiuling Xu", "Guanhong Tao", "Siyuan Cheng", "Lin Tan", "Xiangyu Zhang" ]
We propose a new type of adversarial attack on Deep Neural Networks (DNNs) for image classification. Unlike most existing attacks, which directly perturb input pixels, our attack perturbs abstract features, more specifically features that denote styles, including interpretable styles such as vivid colors and sharp outlines, as well as uninterpretable ones. It induces model misclassification by injecting style changes imperceptible to humans, through an optimization procedure. We show that state-of-the-art pixel space adversarial attack detection and defense techniques are ineffective in guarding against feature space attacks.
[ "towards", "new type", "adversarial attack", "neural networks", "dnns", "image classification", "different", "attacks", "input pixels" ]
Reject
https://openreview.net/pdf?id=S1eqj1SKvr
https://openreview.net/forum?id=S1eqj1SKvr
ICLR.cc/2020/Conference
2020
{ "note_id": [ "O0LE7qmXz0", "BygAhn-5sr", "HkxaE3ZcoH", "SJgzniW5sr", "ryxidrATYB", "SklK3X3ptB", "BJlmoHxaKH" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798736041, 1573686454463, 1573686324756, 1573686185626, 1571837299213, 1571828657053, 1571779995153 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1925/Authors" ], [ "ICLR.cc/2020/Conference/Paper1925/Authors" ], [ "ICLR.cc/2020/Conference/Paper1925/Authors" ], [ "ICLR.cc/2020/Conference/Paper1925/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper1925/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper1925/AnonReviewer1" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"This paper provides an improved feature space adversarial attack.\\n\\nHowever, the contribution is unclear in its significance, in part because an important prior reference (Song et al.) was omitted. \\n\\nUnfortunately, the paper is borderline, and not above the bar for acceptance in the current pool.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Response to Review #1\", \"comment\": \"Thanks for your appreciation and constructive comments. We list your concerns and answer them as follows.\\n\\nReview #1\", \"r1q1\": \"human study\\n\\nWe added a user study on AMT to compare the perceptual quality of the feature space attack with PGD.\\nWe randomly selected 200 pairs of adversarial samples, each consisting of an adversarial example from PGD and another from our feature space attack. Every pair is repeated 3 times, making a total of 600 pairs. The adversarial samples are for a targeted attack on ImageNet, where PGD has a 58% success rate and the Feature Space Attack has an 88% success rate. \\nEach worker is asked \\u201cWhich image appears more natural, reasonable and realistic to you? 
Choose left or right to indicate your choice.\\u201d The order within each pair is shuffled.\\n40% of users chose samples from the Feature Space Attack and 60% chose samples from PGD. This shows that, although the images from PGD are slightly more natural than ours, the two are comparable. \\nAlso, we provide a set of adversarial samples for inspection along with the code at https://github.com/JerishDansolBalala/FeatureSpaceAtk .\", \"r1q2\": \"unbroken detection mechanism\\nWe were not aware that \\u201cThe Odds are Odd\\u201d had been broken during the submission period and will add this information to the paper. The attack [4] specifically designed for the detection approach requires a much stronger threat model, where the attacker already knows the existence of the defense. In our case, however, we are able to evade the detection method without knowing its existence or mechanism.\", \"r1q3\": \"add dKNN as feature space detection\\nWe use the same setting as in the original paper and the code provided by the authors to conduct the experiments. We test on CIFAR-10 and two models (CNN+MLP in the default setting and ResNet18). For CNN+MLP, the detection rate against the PGD attack is 3.9% and against our feature space attack 1.9%. For ResNet-18, the detection rate against the PGD attack is 11% and against ours 5.4%. Our proposed feature space attack is stronger than pixel-level attacks, and can effectively evade feature-based detection methods.\\n\\n[1] H. Hosseini, R. Poovendran. Semantic Adversarial Examples. https://arxiv.org/abs/1804.00499\\n[2] C. Laidlaw, S. Feizi. Functional Adversarial Attacks. https://arxiv.org/pdf/1906.00001\\n[3] Y. Song, R. Shu, N. Kushman, S. Ermon. Constructing unrestricted adversarial examples with generative models. https://arxiv.org/abs/1805.07894\\n[4] H. Hosseini, S. Kannan, R. Poovendran, Are Odds Really Odd? Bypassing Statistical Detection of Adversarial Examples. https://arxiv.org/pdf/1907.12138.pdf\\n[5] L. A. Gatys, A. S. Ecker, and M. Bethge. 
Image style transfer using convolutional neural networks. In CVPR, 2016.\\n[6] D. Ulyanov, A. Vedaldi, V. Lempitsky. Instance Normalization: The Missing Ingredient for Fast Stylization. https://arxiv.org/abs/1607.08022\\n[7] X. Huang and S. Belongie. Arbitrary style transfer in real-time with adaptive instance normalization. In ICCV, 2017\"}", "{\"title\": \"Response to Review #3\", \"comment\": \"Thanks for your constructive comments and appreciation. We list our responses as follows.\\n\\nReview #3\", \"r3q1\": \"intuition of Eq. (5) and (6)\\nThe choice of mean and variance as style is largely an empirical observation. Previous work observed that the normalized output of shallow convolutional layers keeps the shape of images. Paper [5] found that statistics (e.g., correlations of feature maps) of an image represent style. Paper [6] found that instance normalization (IN) increases the quality of style transfer. Paper [7] investigated IN and proposed AdaIN, which transfers only the mean and variance at a given layer. These experimental results from previous work show that statistics from model layers carry style information. We will add this discussion to the paper.\", \"r3q2\": \"Compare to other pixel space attacks and code release\\nWe additionally compared to the pixel attacks FGSM, CW and DeepFool on CIFAR10. They have the same bound as PGD. The accuracies are respectively 61.06% for FGSM, 61.38% for DeepFool, and 81.24% for CW, compared with 8.64% for our feature space attack and 54.02% for PGD. Our comparison aims to show that the different nature of pixel attacks and feature attacks renders existing pixel defenses ineffective. We have additionally experimented with a recent feature space defense; please see R1Q3 and R2Q2.\\nWe release the code at https://github.com/JerishDansolBalala/FeatureSpaceAtk .\", \"r3q3\": \"Missing citation:\\nPaper [3] is a closely related work. It adds additional losses to a GAN for generating unrestricted adversarial samples. 
A vanilla GAN model only generates over a distribution of limited support, while an autoencoder can attack a specific sample. We will add this discussion to the paper.\\n\\n\\n[1] H. Hosseini, R. Poovendran. Semantic Adversarial Examples. https://arxiv.org/abs/1804.00499\\n[2] C. Laidlaw, S. Feizi. Functional Adversarial Attacks. https://arxiv.org/pdf/1906.00001\\n[3] Y. Song, R. Shu, N. Kushman, S. Ermon. Constructing unrestricted adversarial examples with generative models. https://arxiv.org/abs/1805.07894\\n[4] H. Hosseini, S. Kannan, R. Poovendran, Are Odds Really Odd? Bypassing Statistical Detection of Adversarial Examples. https://arxiv.org/pdf/1907.12138.pdf\\n[5] L. A. Gatys, A. S. Ecker, and M. Bethge. Image style transfer using convolutional neural networks. In CVPR, 2016.\\n[6] D. Ulyanov, A. Vedaldi, V. Lempitsky. Instance Normalization: The Missing Ingredient for Fast Stylization. https://arxiv.org/abs/1607.08022\\n[7] X. Huang and S. Belongie. Arbitrary style transfer in real-time with adaptive instance normalization. In ICCV, 2017\"}", "{\"title\": \"Response to Review #2\", \"comment\": \"We thank you for your constructive comments. We list your concerns and answer them one by one.\", \"r2q1\": \"novelty\\nOur attack is a novel way to conduct a feature space attack, combining style transfer with manipulation of the model's internal embedding, as pointed out by Review #1. Common style transfer tasks require additional information such as painting styles. In this paper, we instead use samples from the same class that share rich style features, which is not explored by existing work. 
We leverage these implicitly learned features to launch our feature-space attack, which distinguishes ours from various existing attack methods.\\nUnlike in [3], where a vanilla GAN-based attack method generates over a distribution of limited support and has no control over the generated samples, our encoder-decoder based structure enables attacking each individual sample with controlled content, and there is no limit on the number of samples, which is also mentioned in Review #3.\\nWe also conducted an experiment that explores defensive methods in different spaces. We observed that pixel-level defenses are not effective against the feature-space attack, and, as we will show in R2Q2 and R1Q3, existing defenses that can be used in the feature space cannot defend against our attack.\", \"r2q2\": \"add Pixel-DP as possible feature space defense\\nWe use the code provided by the authors and the same setting to conduct experiments on the Pixel-DP defense. When using an l2 norm bound of 0.1, the model accuracy (with Pixel-DP defense) is 80% under the PGD l2 attack and 0% under our feature space attacks. When using an l2 norm bound of 1, the model accuracy (with Pixel-DP defense) is 31% under the PGD l2 attack and 0% under ours. When further increasing the l2 norm bound to 10, we found the accuracy on normal images degrades to below 15%. Pixel-DP is hence ineffective against our feature space attack. The results are reasonable, as Pixel-DP can only certify an l_2 norm bound up to 1, whereas our feature space attack generates adversarial samples with l_2 norm usually larger than 10.\\nWe have also conducted adversarial training experiments over the pixel/feature spaces, which are shown in Table 4 in the appendix. When the model is trained with one type of adversarial attack, it can only achieve non-trivial accuracy on the same attack and is ineffective against other types of attacks. 
It is non-trivial to design an effective adversarial training that is robust to all types of adversarial attacks, and it is out of the scope of this paper.\", \"r2q3\": \"color space attacks\\nThe paper [1] proposed to modify the HSV color space to generate adversarial samples, which transforms all pixels uniformly by a non-parametric function. The experiments were only conducted on the CIFAR-10 dataset and the Madry model. In contrast, our feature space attack changes the colors of specific objects or the background alone, and the transformation is learned from similar images, which is more imperceptible. The paper [2] proposed to change the lighting condition and color, similarly to [1], to generate adversarial samples. We observe in experiments that the feature space attack also learns to modify lighting conditions, color and texture; please refer to Fig. 3. Compared to these, our attack is more general, and the features attacked are more subtle. We will include this discussion in the paper.\", \"r2q4\": \"large and visible perturbation\\nOur decoder is trained to minimize the difference between the reconstructed image and the original image. We only modify the mean and variance of activations that are considered the style of features [5], so the content of images is preserved. When launching an attack, we bound the perturbation of the mean and variance, which controls the perturbation introduced in the pixel space. Fig. 3, Fig. 5 and Fig. 6 show the pixel/feature space distances of all the generated adversarial samples. It can be observed that our feature space attack has a small feature space distance, even smaller than that of the (pixel-space l_inf bounded) PGD attack in most cases. We also conducted an additional user study, suggesting the feature space attack has perceptual quality comparable to PGD; please refer to R1Q1.\\n\\n[1] H. Hosseini, R. Poovendran. Semantic Adversarial Examples. https://arxiv.org/abs/1804.00499\\n[2] C. Laidlaw, S. Feizi. Functional Adversarial Attacks. 
https://arxiv.org/pdf/1906.00001\\n[3] Y. Song, R. Shu, N. Kushman, S. Ermon. Constructing unrestricted adversarial examples with generative models. https://arxiv.org/abs/1805.07894\\n[4] H. Hosseini, S. Kannan, R. Poovendran, Are Odds Really Odd? Bypassing Statistical Detection of Adversarial Examples. https://arxiv.org/pdf/1907.12138.pdf\\n[5] L. A. Gatys, A. S. Ecker, and M. Bethge. Image style transfer using convolutional neural networks. In CVPR, 2016.\\n[6] D. Ulyanov, A. Vedaldi, V. Lempitsky. Instance Normalization: The Missing Ingredient for Fast Stylization. https://arxiv.org/abs/1607.08022\\n[7] X. Huang and S. Belongie. Arbitrary style transfer in real-time with adaptive instance normalization.\"}", "{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #2\", \"review\": [\"The authors have introduced a new type of adversarial attack that perturbs abstract features of the image. They have shown that pixel space adversarial attack detection and defense techniques are ineffective in guarding against feature space attacks.\", \"I have some concerns about the novelty of the attack and the appropriateness of the defenses that have been tested.\", \"Since the attack is done in the feature space, the defense should also be done in the feature space. For example, adversarial training or smoothing can be done in the feature space. See: https://arxiv.org/abs/1802.03471\", \"There are attacks that perturb colors or other interpretable features of the image that have not been mentioned in the paper. For example, see https://arxiv.org/abs/1804.00499 and https://arxiv.org/pdf/1906.00001\", \"If the decoder has a high Lipschitz constant, a small perturbation in the feature space 'can' lead to a large and visible perturbation in the pixel space. 
It was not clear to me how this is being controlled in the current method.\"]}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper presents an adversarial attack method which conducts perturbations in the feature space instead of the raw image space. Specifically, the proposed method first learns an encoder that encodes features into the latent space, where style features are learned. At the same time, a decoder is learned to reconstruct the images with the encoded features. To conduct attacks, perturbations are added to the encoded features, and attack images are generated with the decoder given the perturbed features. The experiment results look promising, showing that the proposed method achieves better attack performance with realistic adversarial images.\\n\\nThe general idea of perturbing the feature (latent) space is not a novel one, as it has been studied in [1]. However, the proposed one uses an autoencoder framework instead of the GAN used in [1]. Therefore, the proposed approach is able to construct adversarial examples for specific images. In addition, the training of the encoder is adapted from a style transfer method, which seems to learn good features that capture style. \\n\\nThe intuition behind the constructions of Eq. (5) and (6) is a bit unclear. The details may be in Huang & Belongie, 2017, but it would be better to provide a more intuitive explanation and discussion of why these constructions capture style variation.\\n\\nThe results shown in the paper look promising. But it would be more comprehensive to compare with other pixel attacks in addition to PGD. 
Moreover, it is unclear whether the comparison between the proposed approach and pixel attacks is fair, even under the same amount of perturbation. It would be good if the code were released.\\n\\nMinor:\\nLast sentence in the first paragraph of page 3: a missing reference.\\n\\n[1] Song, Yang, Rui Shu, Nate Kushman, and Stefano Ermon. \\\"Constructing unrestricted adversarial examples with generative models.\\\" In Advances in Neural Information Processing Systems, pp. 8312-8323. 2018.\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper presents an interesting new adversarial attack technique that attempts to perturb abstract features learned by the target neural net. It is well written and easy to follow. Its main contribution is the joining of ideas developed in the style transfer literature with those from the adversarial literature.\\n\\nThe authors establish that they are able to create adversarial attacks that look similar to the original image but are misclassified. These images are not bounded by small epsilons, but are said to be indistinguishable by people. They illustrate a sample of these attacks, but no human study is employed to back up this claim. A simple human evaluation to prove that the attacks are indistinguishable from unperturbed images would strengthen the work (this can be done easily and at low cost by employing Mechanical Turk or an equivalent system, for example). \\n\\nThey make use of a detection mechanism (The Odds are Odd, Roth et al.) to verify that their adversarial attacks are hard to detect, but this particular detection mechanism has already been broken (https://arxiv.org/pdf/1907.12138.pdf). If there is an as-yet-unbroken detection mechanism that could be tested, that would improve the work. 
Alternatively, the authors should acknowledge that there are simpler ways of evading this detection method. \\n\\nThe attack that they propose targets the feature space, but no feature space detection methods are tested. The work would be improved by testing on feature-based detection methods such as dKNN (https://arxiv.org/pdf/1902.01889.pdf).\\n\\nOverall the work is interesting and novel, and creatively joins together two otherwise distinct areas of machine learning research to make a modest but novel contribution to the field.\"}" ] }
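The exchanges in this record (R2Q4 and R3Q1 in particular) hinge on perturbing only the channel-wise mean and variance of intermediate activations, the statistics that the cited AdaIN line of work treats as style, while keeping the normalized content fixed. A minimal NumPy sketch of that idea, with a hypothetical helper name and a single scalar bound standing in for the paper's per-layer bounds:

```python
import numpy as np

def perturb_style_stats(feats, delta_mu, delta_sigma, bound):
    """Shift the per-channel mean/std of a (C, H, W) feature map by a
    bounded amount, leaving the normalized "content" untouched.
    Hypothetical sketch of the style-statistics perturbation discussed
    in R2Q4/R3Q1, not the paper's actual implementation."""
    mu = feats.mean(axis=(1, 2), keepdims=True)
    sigma = feats.std(axis=(1, 2), keepdims=True) + 1e-8
    # Bounding the style perturbation is what limits the change the
    # decoder can introduce in pixel space (the point argued in R2Q4).
    delta_mu = np.clip(delta_mu, -bound, bound)
    delta_sigma = np.clip(delta_sigma, -bound, bound)
    content = (feats - mu) / sigma  # style statistics removed
    return content * (sigma + delta_sigma) + (mu + delta_mu)
```

Setting the variance perturbation to zero reduces this to a pure per-channel shift, which is consistent with the color and lighting changes the authors report observing in Fig. 3.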
SyeYiyHFDH
Convergence Analysis of a Momentum Algorithm with Adaptive Step Size for Nonconvex Optimization
[ "Anas Barakat", "Pascal Bianchi" ]
Although Adam is a very popular algorithm for optimizing the weights of neural networks, it has been recently shown that it can diverge even in simple convex optimization examples. Therefore, several variants of Adam have been proposed to circumvent this convergence issue. In this work, we study the algorithm for smooth nonconvex optimization under a boundedness assumption on the adaptive learning rate. The bound on the adaptive step size depends on the Lipschitz constant of the gradient of the objective function and provides safe theoretical adaptive step sizes. Under this boundedness assumption, we show a novel first order convergence rate result in both deterministic and stochastic contexts. Furthermore, we establish convergence rates of the function value sequence using the Kurdyka-Lojasiewicz property.
[ "nonconvex optimization", "adaptive methods" ]
Reject
https://openreview.net/pdf?id=SyeYiyHFDH
https://openreview.net/forum?id=SyeYiyHFDH
ICLR.cc/2020/Conference
2020
{ "note_id": [ "NflYy4z4ZX", "H1exRRr0FH", "SJxODyRTtB", "BJlNQLdatS" ], "note_type": [ "decision", "official_review", "official_review", "official_review" ], "note_created": [ 1576798736010, 1571868359527, 1571835744004, 1571812892037 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1923/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper1923/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper1923/AnonReviewer1" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"The reviewers have reached consensus that while the paper is interesting, it could use more time. We urge the authors to continue their investigations.\", \"title\": \"Paper Decision\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This work analyzes the performance of the ADAM algorithm under a bounded step size assumption. It proves a convergence rate for the deterministic setting and a convergence result for the stochastic setting.\\n\\n\\nHowever, I have a few concerns, as follows. The first two comments are the major reasons for my rating.\\n1. The convergence in Theorem 4.2 is not standard. It is about \\\"the minimum of the gradient norms\\\", which does not imply algorithmic convergence. This should be stated more clearly in the introduction.\\n2. The authors should include simulations to show whether the theoretical result matches the empirical performance, for both the deterministic and stochastic settings.\\n\\n3. Small comment: In Section 2, g_i should be a d-dimensional vector since it is the gradient of f. But then what is g_i^2?\\n4. 
For comparison, can you set b=1 and compare with the existing theoretical results for RMSPROP?\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"In this work, the authors consider a variation of ADAM with a boundedness assumption on the step size and focus on the unconstrained, smooth, non-convex minimization setting.\\n\\nThe authors provide non-asymptotic convergence rates for the deterministic and stochastic settings, with respect to the gradient norm. Moreover, they also prove convergence in function value for a class of functions that satisfy the Kurdyka-Lojasiewicz (KL) property. To the best of my knowledge, this is the first work that utilizes the KL property and the respective analysis for ADAM-like methods. I have to note that I disagree with the claim of being \\u201cadaptive\\u201d.\\n\\nI summarize my comments step by step below:\\n\\nI find the presentation and organization of the paper clear and structured. Related work is sufficient: the authors cite relevant papers with respect to convergence in the deterministic and stochastic settings and present a detailed comparison with their framework. Similarly, the momentum idea is motivated through Polyak\\u2019s Heavy Ball. The known rates for KL functions are provided in conjunction with momentum-based methods.\\n\\nZaheer et al. (2018) assume \\\\beta_1 = 0, meaning their proof works for RMSProp. However, for this paper, given that Theorems 4.2 and 4.3 are provided for Algorithm (2), the algorithm in question is not exactly the same as ADAM either, in my opinion. The bias correction steps are missing. The authors should clarify this point in the paper.\\n\\nThe definition of adaptivity is a little vague for me. 
There exist different notions of adaptivity (adaptation to global/local smoothness without knowing L, adaptation to non-smooth and smooth problems simultaneously, etc.). The authors define adaptivity as \\u201ccomputing step size using gradient history\\u201d. However, their upper bound for the step size requires knowledge of L. Also, they assume a uniform lower bound for the step size. These make their framework rather non-adaptive, in my opinion.\\n\\nFollow-up comment: How would one apply this method to neural networks, or any class of problems where the Lipschitz constant is virtually unknown?\\n\\nThe convergence rate in the deterministic oracle setting, provided in Theorem 4.2, is plausible. The proof is nice and easy to read. Compared to De et al., the authors show faster rates with respect to the number of iterations. But the rate depends on the upper/lower bounds of the step size.\\n\\nConvergence rates in the stochastic setting suggest the quantity does not go to zero, but converges to some noise-dominated region (equivalently, to some neighborhood) of stationary points. Zaheer et al. have a similar rate characterization for RMSProp, but this paper considers first order moment accumulation on top of Zaheer et al. I am not sure about the impact of this result compared to existing ones for ADAM variants. Similar to the deterministic case, the analysis is nice and clean. The rate has no dependence on dimension, but neither do the analyses of De et al. (2018) and Zaheer et al. (2018). It is not new, but it is a desirable property of the analysis.\\n\\nI have doubts about \\u201cnot having a bounded gradient assumption\\u201d. There exists a uniform lower bound for a_n, which implies that the sum of squares of each gradient coordinate is bounded, which implies that the infinity norm of the gradients is bounded. I believe there is an implicit assumption of bounded gradients. It also makes me question whether the dimension dependence is somehow hidden under this step size lower bound. 
I would expect the authors to address the concerns about bounded gradients and dimension dependence.\\n\\nThe convergence characterization for KL functions with a (sort of) adaptive step size is also new, to the best of my knowledge. The convergence analysis is based on PALM (Bolte et al. (2014)) and Bolte et al. (2018), and these previous results are adapted to ADAM. My concern about this direction is whether neural networks belong to this class of functions. If not, how useful is it to use ADAM for this class of functions?\\n\\nOverall, the authors approach ADAM-like methods from a generalized scheme based on Heavy Ball, and provide an analysis for smooth, non-convex functions with deterministic/stochastic gradients. Having dimension-independent rates is a positive trait of the analysis, but I would like to see a clarification regarding my previous point. In the stochastic setting, the dimension-free rate for ADAM (which has a very similar characterization to RMSProp in Zaheer et al.) seems to be an important result of this paper. I think the KL function analysis is a new result for \\u201cadaptive\\u201d, evolving step sizes, as well. However, I do not agree with the claim that the authors do not make a bounded gradient assumption. In a way, the uniform lower bound on the step size implies it. Knowledge of the Lipschitz constant (through the step size upper bound) is a restriction for \\u201chighly non-convex\\u201d neural network optimization problems. Considering that ADAM is a classical optimizer for neural networks, I also doubt the KL property holds for complex networks. Also, I am not convinced that the algorithm is truly adaptive (upper bounding the step size with a function of the Lipschitz constant). \\n\\nI am inclined to give a weak reject to this paper because I am not convinced that the results presented in the paper are compatible with the application. 
I would also like to see the justification/clarification of the authors for my previous concerns before making the final decision.\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper provides convergence analyses for momentum methods using an adaptive step size for non-convex problems under a boundedness assumption on the learning rates. Concretely, a sublinear convergence rate under a general setting and improved convergence rates under the KL-condition are provided.\\n\\nInteresting points of the paper are that (i) a boundedness assumption on the domain or gradient is not required and that (ii) it shows faster convergence rates under the KL-condition for non-descent methods. However, a major concern is the uniform boundedness assumption on the learning rates. The lower bound is a problem because this condition can only be verified a posteriori, after running the method. This kind of condition is not preferred in general. Indeed, most studies provide convergence analyses without such assumptions. In addition, there are several missing references (listed below) that attempt to analyze the convergence of adaptive methods. To make the position of the paper clear, it would be better to provide a theoretical comparison with these studies. In particular, [LO2019] and [XWW2019] are related to this study. [LO2019] has provided convergence analyses without boundedness assumptions on the domain or gradient, and [XWW2019] has provided a linear or better convergence rate of an adaptive method by utilizing the KL-condition. Note that the method treated in [XWW2019] is also a non-descent method because there is no theoretical limitation on the initial step size.\\n\\n[WWB2018] X.Wu, R.Ward, and L.Bottou, WNGrad: Learn the Learning Rate in Gradient Descent. arXiv, 2018. 
\\n[WWB2019] R.Ward, X.Wu, and L.Bottou. AdaGrad Stepsizes: Sharp Convergence Over Nonconvex Landscapes. ICML, 2019. \\n[XWW2019] Y.Xie, X.Wu, and R.Ward. Linear Convergence of Adaptive Stochastic Gradient Descent. arXiv, 2019. \\n[Levy2017] K.Y.Levy, Online to Offline Conversions, Universality and Adaptive Minibatch Sizes. NIPS, 2017.\\n[LYC2018] Y.K.Levy, A.Yurtsever, and V.Cevher, Online Adaptive Methods, Universality and Acceleration, NeurIPS, 2018.\\n[LO2019] X.Li, and F.Orabona. On the Convergence of Stochastic Gradient Descent with Adaptive Stepsizes. AISTATS, 2019.\"}" ] }
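A recurring point in the reviews of this record is the uniform bound on the adaptive step size: the upper bound is tied to the Lipschitz constant L, and a uniform lower bound is assumed as well. A minimal sketch of one such bounded Adam-like update (hypothetical helper, with bias correction omitted, matching Reviewer #2's remark that Algorithm (2) lacks it):

```python
import numpy as np

def bounded_adam_step(w, grad, m, v, lr, a_min, a_max,
                      beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam-like update with the adaptive step size clipped into
    [a_min, a_max]. Hypothetical sketch of the boundedness assumption
    debated in the reviews (a_max would depend on the Lipschitz
    constant L in the paper's analysis); bias correction is omitted."""
    m = beta1 * m + (1 - beta1) * grad          # first moment (momentum)
    v = beta2 * v + (1 - beta2) * grad ** 2     # second moment
    a = np.clip(lr / (np.sqrt(v) + eps), a_min, a_max)  # bounded step
    return w - a * m, m, v
```

On a toy quadratic the iterates still contract under the clipped step; the reviewers' objection is that picking a_max in practice requires knowing L, which is unrealistic for neural networks.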
SylKikSYDH
Compressive Transformers for Long-Range Sequence Modelling
[ "Jack W. Rae", "Anna Potapenko", "Siddhant M. Jayakumar", "Chloe Hillier", "Timothy P. Lillicrap" ]
We present the Compressive Transformer, an attentive sequence model which compresses past memories for long-range sequence learning. We find the Compressive Transformer obtains state-of-the-art language modelling results in the WikiText-103 and Enwik8 benchmarks, achieving 17.1 ppl and 0.97bpc respectively. We also find it can model high-frequency speech effectively and can be used as a memory mechanism for RL, demonstrated on an object matching task. To promote the domain of long-range sequence learning, we propose a new open-vocabulary language modelling benchmark derived from books, PG-19.
[ "memory", "language modeling", "transformer", "compression" ]
Accept (Poster)
https://openreview.net/pdf?id=SylKikSYDH
https://openreview.net/forum?id=SylKikSYDH
ICLR.cc/2020/Conference
2020
{ "note_id": [ "HuSNXEo6H", "vPvdG8Lvb2", "t2u50Z4XfZ", "X1S0P5GNA", "ryl_UkcYoH", "BkgQZoLdoB", "B1e2NFwSjH", "ByltkFPHsB", "ByebjUDHjB", "rJgAnjgk5S", "B1eU8key9B", "Hylu-DRpYH", "HyxIZ7ZaYr", "rkxqwxZatB", "rkloDrohKB", "HkxPYrrhYS", "SylGq1FVdH", "BkeY9OZfdH", "Skxx2RqbOr" ], "note_type": [ "comment", "official_comment", "comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "comment", "official_review", "official_comment", "official_comment", "official_review", "official_review", "comment", "comment", "comment" ], "note_created": [ 1580164347314, 1580139418941, 1580021662717, 1576798735982, 1573654352283, 1573575418698, 1573382452461, 1573382368809, 1573381784951, 1571912630033, 1571909453876, 1571837695714, 1571783421977, 1571782754234, 1571759459054, 1571734911102, 1570176905948, 1570015377508, 1569988263970 ], "note_signatures": [ [ "~James_A_Bowery1" ], [ "ICLR.cc/2020/Conference/Paper1922/Authors" ], [ "~James_A_Bowery1" ], [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1922/Authors" ], [ "ICLR.cc/2020/Conference/Paper1922/Authors" ], [ "ICLR.cc/2020/Conference/Paper1922/Authors" ], [ "ICLR.cc/2020/Conference/Paper1922/Authors" ], [ "ICLR.cc/2020/Conference/Paper1922/Authors" ], [ "ICLR.cc/2020/Conference/Paper1922/Authors" ], [ "~Aran_Komatsuzaki1" ], [ "ICLR.cc/2020/Conference/Paper1922/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper1922/Authors" ], [ "ICLR.cc/2020/Conference/Paper1922/Authors" ], [ "ICLR.cc/2020/Conference/Paper1922/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper1922/AnonReviewer3" ], [ "~Sainbayar_Sukhbaatar1" ], [ "~Artus_KG1" ], [ "~Aran_Komatsuzaki1" ] ], "structured_content_str": [ "{\"title\": \"Principled As Well As Practical SOTA Benchmarking\", \"comment\": \"I agree that the time is long-since past for a prize based on an expanded corpus, such as the one you are (soon?) 
going to publish from Project Gutenberg. This should have been done by someone with deep-pockets, i.e. Google with the advent of the one billion word benchmark, because it came years after Hutter's enwik8 prize.\\n\\nBut I would strongly suggest that any move forward toward such a benchmark specify the minimum _practical_ common algorithmic resources (Universal Turing Machine instruction set) upon which to run a decompression program that produces the benchmark corpus. \\n\\nBy \\\"_practical_\\\" I admit that nowadays it may only be _practical_ to assume an \\\"instruction set\\\" consisting of the entire Tensorflow API -- particularly for a DeepMind-financed benchmark prize.\\n\\nAs for seeing what language model generalizes best, there are two quite distinct levels to this question:\\n\\n1) Empirical testing of the MDL principle in SOTA claims.\\n2) Application of the MDL principle in SOTA claims.\\n\\nPhilosophically, the MDL principle is already assumed in virtually all science and engineering due to \\\"the unreasonable effectiveness of mathematics in the natural sciences.\\\" So, on that basis, #1 is similarly assumed by those, such as Hutter, who finance #2, as would a DeepMind prize based on size of self-extracting archive.\\n\\n#1 is for those who don't place their \\\"faith\\\" in such philosophical arguments, and is where tests, such as yours based on division of test and training sets, can do more than just measure \\\"what generalizes best\\\": They can, through model compression/model ablation/knowledge distillation, see if MDL* holds as a meta-empirical truth.\\n\\nSee \\\"Extreme Language Model Compression with Optimal Subwords and Shared Projections\\\"\", \"https\": \"//arxiv.org/abs/1909.11687\\n\\n*By \\\"MDL\\\" I am here assuming UTM algorithmic capacity in the description's language.\"}", "{\"title\": \"RE\", \"comment\": \"I think I agree with a lot of your comment. 
Just to be clear, although we use enwik8 as a dataset for language modelling, we have no stake in the Hutter Prize. This model, along with pretty much all neural network language models trained on this dataset, is too large to be competitive with the algorithms devised by Rhatushnyak. If the prize had been devised using 10GB of wikipedia then it would be a different story. There are lots of tricks to cut the final parameter count (e.g. make some of the linears low-rank, prune the weights, distill the large model to a smaller model etc.) if one wants to benchmark models at a fixed parameter budget. Our opinion is that it's a worthwhile pursuit to see what language model generalizes best irrespective of parameter size. Simply scaling the transformerxl to a larger no. of parameters via larger width or a larger number of layers did not improve generalization.\"}", "{\"title\": \"Commensurability of Table 4 Items\", \"comment\": \"The judging* criterion for the Hutter Prize is the size of a self-extracting archive of the enwik8 corpus, so as to standardize on the algorithmic resources available to the archive. This is essential for commensurability under the principle of minimum description length (MDL) approximation of Kolmogorov Complexity. Dividing the corpus into training and testing sets is neither necessary nor desirable under this metric.\\n\\nControlling for the same \\\"model setup\\\" is a big step in the right direction -- as it increases the commensurability with TransformerXL -- particularly as compared to the other items in Table 4. 
While model ablation can produce even more commensurable measures, it would be helpful for SOTA comparisons to be more rigorous in defining the algorithmic resources assumed in their measurements.\\n\\nA consequence of improved rigor would be to expose just how important \\\"small\\\" improvements, such as .99 to .97, can be, as indeed they are.\\n\\n*I'm on the Hutter Prize judging committee.\"}", "{\"decision\": \"Accept (Poster)\", \"comment\": \"The paper proposes a \\\"compressive transformer\\\", an extension of the transformer, that keeps a compressed long-term memory in addition to the fixed-size memory. Both memories can be queried using attention weights. Unlike TransformerXL, which discards the oldest memories, the authors propose to \\\"compress\\\" those memories. The main contribution of this work is that it introduces a model that can handle extremely long sequences. The authors also introduce a new language modeling dataset based on text from Project Gutenberg that has much longer sequences of words than existing datasets. They provide comprehensive experiments comparing against different compression strategies and compare against previous methods, showing that this method is able to result in lower word-level perplexity. In addition, the authors also present evaluations on speech and image sequences for RL.\\n\\nInitially the paper received weak positive responses from the reviewers. The reviewers pointed out some clarity issues with details of the method and figures and some questions about design decisions. After rebuttal, all of the reviewers expressed that they were very satisfied with the authors' responses and increased their scores (for a final of 2 accepts and 1 weak accept).\\n\\nThe authors have provided a thorough and well-written paper, with comprehensive and convincing experiments. 
In addition, the ability to model long-range sequences and dependencies is an important problem and the AC agrees that this paper makes a solid contribution in tackling that problem. Thus, acceptance is recommended.\", \"title\": \"Paper Decision\"}", "{\"title\": \"^\", \"comment\": \"Fixed some typos and further clarified algorithm box in paper update. Please feel free to scan over the revised text and express any other points of concern!\"}", "{\"title\": \"Updated paper\", \"comment\": \"Thanks for the comprehensive reviews, they have certainly improved the quality of the paper.\", \"list_of_changes\": [\"[credit to reviewer 1]\", \"Updated figure 1 and caption with more details.\", \"Re-written model section: added formal notation, added algorithm box for full model, and for attention-reconstruction loss.\", \"Added subsection on temporal receptive field.\", \"[credit to reviewer 2]\", \"Attention bins are more granular, include uncertainty over attention per bucket. Remember, the self-attention is causally masked (mentioned in the text) thus the increase in attention to earlier sequence. Crucially, there is an increase in attention from the oldest memories, to the newest compressed memories (which are older).\", \"Added memory size ablations (Table 8 & 9).\", \"[credit to reviewer 3]\", \"Added PG-19 results table with Compressive Transformer and TransformerXL (improved both models from original result, using deeper networks).\", \"We appreciate the reviewers have a limited time to read paper revisions, however we feel almost all points have been substantially addressed and thus we would strongly welcome feedback.\"]}", "{\"title\": \"Re resources, training difficulty, other text applications, PG-19 results\", \"comment\": \"Thank you for your thorough review!\\n\\nRe. 
\\u201cIt would be interesting to find out how much resources were spent (in terms of preliminary experiments) to getting these models to start working decently.\\u201d \\n\\nThe majority of experiments were spent reproducing the sota (at the time) TransformerXL; that is, getting the model and training setup working well. We plan to open-source the TransformerXL baseline alongside the Compressive Transformer in TensorFlow (The TXL is now open-sourced in a few locations also). We considered 7 model/loss compressive transformer variants, displayed in Table 4, and ran 16 experiments in total on enwik8. These experiments swept over compression rates (typically 1-4) and then we experimented with different model setups. We then ran 6 compressive transformer experiments on WikiText-103. \\n\\nRe \\u201cIt also seems like these models are not trivial to train (or get them to work)\\u201d\\nWe trained these models with the same parameters as the transformerxl and we basically found (as shown in Table 4) that pretty much all compression approaches worked ok. Even mean-pooling activations performed reasonably (exceeded baseline performance and matched the current sota). However the learnable conv1d performed the best. The optimization schedule of decreasing optimization updates (S5.6.3) allowed us to achieve better results but this wasn\\u2019t necessary to train the models. So we would challenge the conclusion that this model is difficult to train. \\n\\nRe. is there an intended way of use for long-text that is not necessarily framed as a LM problem? \\u2026 Such as NarrativeQA\\n\\nWe think any sequential prediction problem with long-range dependencies is a good fit for this model. Ideally a streaming task where you need to maintain an online representation of the past that is quickly updated. So perhaps reading comprehension tasks where you read a book but periodically answer questions about it, a little like Children\\u2019s Book Test but with longer contexts. 
For summarization, such as NarrativeQA, only one set of predictions needs to be made at the end of the book and it appears that the best solutions are (currently) maintaining the book statically in a simple embedded space and repeatedly attending to it, possibly copying sections of text. It would be interesting to see the results from simple autoregressive models for summarization nonetheless.\\n\\nRe. Why are the results on PG-19 not reported in a Table format? \\n\\nVery good point. We have remedied this, it is now in a table. We also have new results with larger models that serve as a better initial baselines\\n\\n36 layer TransformerXL (3,000 mem) \\t\\t\\t\\t\\t 36.25\\n36 layer Compressive Transformer (1,500mem + 1,500 CM)\\t\\t33.6\"}", "{\"title\": \"Re model ablations, enwik8, attention weights & speech modelling\", \"comment\": \"Thank you for your kind review!\", \"regarding_memory_size\": \"here\\u2019s an ablation with performance versus compressed memory size for both enwik8 and wikitext-103! Both models improve significantly as a function of compressed memory size from small values. There is an optimal value, if we make the compressed memory much larger than the training regime then performance eventually deteriorates as the model\\u2019s attention drifts out-of-distribution (e.g. 4096+ for Enwik8). We have added this table to the paper also.\\n\\nEnwik8\\nCompressed Memory Size\\t 512\\t 1024\\t 2048\\t3072\\t4096\\nBPC\\t\\t\\t\\t 1.01\\t0.99\\t 0.98\\t0.97\\t 1.00\\t\\t\\t\\n(Model has a chunk size of 768 and memory size of 768)\\n\\nWikiText-103\\nCompressed Memory Size\\t 256\\t512\\t 1024\\t1536\\t2048\\nPerplexity\\t\\t\\t 18.2\\t17.9\\t 17.6\\t17.1\\t 17.7\\n(Model has a chunk size of 256 and memory of size 512)\\n\\nNote that CM=0 is literally the TransformerXL which we have included results for in the paper (incl. our implementation). 
For the published TransformerXL\\u2019s 18.3 perplexity, it was using an attention window of 1600 but we improve on this result with an attention window of only 768 (512 + 256).\", \"re_enwik8\": \"We agree the improvement on Enwik8 may seem quite small but this is partially due to the metric. BPC has a very small range. If we look at the word-level perplexity of these models, the 0.99bpc transformerxl has a word-level perplexity of 170 whereas the 0.97bpc sota compressive transformer has a word-level perplexity of 153. So a gain of 17 perplexity. This calculation comes from ppl_word = 2^(7.48 * bpc) as 7.48 is the average word-length in enwik8\\u2019s test set. Enwik8 actually has a longer range of dependency over wikitext-103 because of the more granular sequence data; they both represent wikipedia pages but processing the article at the character-level stresses the model\\u2019s range of attention.\", \"re_speech\": \"It would be preferable to perform a full human quality survey. The observation we wanted to convey was that one can get a transformer-like model to model high-frequency speech unconditionally and the compressive model helped in obtaining learning dynamics that are comparable with wavenet (in comparison to the TransformerXL which performs worse).\\n\\nHowever we do not wish to claim that this implies we have a better text-to-speech model; this would require substantially more work, conditioning on linguistic features, and expert human raters. Instead of focusing on text-to-speech, we look at raw speech modelling which has many downstream applications beyond text-to-speech (e.g. speaker identification) and stresses long-range dependency. We have made this more clear in the text (update soon-to-be-posted), and will consider removing the results entirely if other reviewers feel the experiment is misleading.\\n\\nRe. why six attention bins? 
We just chose a multiple of 3 (so the buckets have boundaries at the compressed_memory, memory, sequence boundaries) that is not too large such that there\\u2019s not too much noise. However we have re-run this analysis with 18 buckets and are including the updated figure in our (soon-to-be-posted) updated paper. This is a better visualization of the data and captures the trend more carefully (we also remove the trend curve and switch to violin plots to better display the variability of each bucket). However the conclusion remains the same - that there is an increase in attention weight over the compressed memories versus the older regular memories.\"}
As pseudo-code here, the compression mechanism is really just passing memories that would otherwise be forgotten through a conv1d compression network:\\n\\ncompression_rate <- 3\\nold_memory <- memory[:-seq_size] # the memories to be forgotten\\ncompression_fn <- conv_1d(kernel_size=compression_rate, stride=compression_rate)\\nnew_cm <- compression_fn(old_memory ) # new compressed memories\\n\\nThen for attention, before in the TransformerXL one would compute\\nattention(seq, [memory, seq])\\nwhereas here we compute\\nattention(seq, [compressed_memory, memory, seq])\\n\\nBefore in the TransformerXL one would update memory by concatenating the sequence and truncating the oldest memories (to keep the memory fixed-size):\\nmemory <- concat_and_truncate(memory, sequence)\\n\\nwhere 'concat_and_truncate' refers to:\\ndef concat_and_truncate(old_state, new_state):\\n new_state_size <- new_state.shape[1] # time dimension\\n return concat([old_state, new_state])[new_state_size:]\", \"now_we_update_both_the_memory_and_compressed_memory\": \"memory <- concat_and_truncate(memory, sequence)\\ncompressed_memory <- concat_and_truncate(compressed_memory, new_cm)\\n\\nIn Figure 1 we kept the sequence and memory the same colour, as these hidden activations represent information for a single time-step in the transformer. We use an arrow to indicate that we map a set of memories to a smaller set of compressed memories. We chose a different colour for the compressed memories (and made the ticks more frequent) to indicate that these represent information over multiple time-steps. We are updating the figure and caption with more details such that this is clearer.\\n\\nIf there is anything else that is unclear, feel free to give us feedback!\"}", "{\"title\": \"re. Predicting the past\", \"comment\": \"This is a good point, one room for improvement is further analysis of whether the model's temporal range is indeed increased. 
The greater relative improvement in prediction of rare words (vs frequent words) hints that the performance improvement is due to longer-range reasoning, but it would be nice to make this more explicit. We could fine-tune a trained model on the task of predicting the past at varying intervals to see how it compares to the TXL. If we get time to perform this analysis before the discussion period is over, we will include these results.\"}", "{\"title\": \"Some thoughts\", \"comment\": \"Thank you very much for your response. It may be hard to compensate for the computation overhead due to self-attention at this moment, but I'm very excited for further development in this topic. I was also trying to make the context of Transformer unlimited as in this study (nevertheless not succeeded).\\n\\nOne thing I think is worth considering is, by generalize the idea of [1], to let the model to occasionally predict a randomly sampled segment of (either near or distant) past sequence (\\\"distant\\\" seq. comes from outside of the current TBPTT segment). Successful memory architecture should be able to recall the past easily, so this may become an alternative to measure how far your architecture can track back. Also, this may improve the retention of memory. \\n\\n[1] Learning Longer-term Dependencies in RNNs with Auxiliary Losses https://arxiv.org/abs/1803.00144\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper investigates a so-called \\\"compressive transformer\\\" approach. The idea is to compress distant past memories into a coarse-grained representation while keeping a fine-grained representation for close past memories. 
A variety of compression techniques and training strategies have been investigated in the paper and verified using tasks from multiple domains including language modeling, speech synthesis and reinforcement learning. Particularly, the authors propose a new benchmark PG-19 for long-term sequence modeling.\\n\\nOverall, I found the work interesting and the experiments are thorough and strong. It is always great to see a new benchmark released to the community. That being said, I have concerns regarding the paper. The authors put a huge amount of effort into the experiments but only describe the proposed technique in a very rough and abstract way, lacking necessary technical details to formulate the technique. What is the mathematical formulation of the problem? How exactly the compression is carried out on various network architectures is not clear after reading the paper. Also, I guess many readers including me do not have a perfect understanding of Fig. 1 although it shows something intuitively. (What is the difference between different colors? What is the difference between sequence, memory, and compressed memory? What do the arrows mean? There is no explanation whatsoever either in the figure or in the caption). This is the major concern I have regarding the paper. Despite the strong experimental presentation, lacking the technical details has significantly hurt the quality of the paper. \\n\\nP.S. Thanks for the rebuttal. I have lifted my score.\"}
For Enwik8 we used the same 24 layer model, 1024 embedding and hidden size, 8 heads, 3072 mlp hidden size.\\n\\nIn terms of the number of parameters optimizing the loss, this is exactly the same as the TransformerXL: 277M for Enwik8 and 257M for WikiText-103.\\n\\nFor the compression network, which was only optimized with respect to the auxiliary compression loss, this consumed 0 params for max/mean pooling, and most-used. For 1D conv it consumed 1M x compression_rate x #layers params, and for the dilated convolution it consumed more. We will update the paper with much more explicit model details since there is clearly room for improvement.\", \"title\": \"re. model sizes\"}
Conversely we could perform self-attention over the memories and then compress; we think this would be powerful but too expensive as it would effectively double the compute of the whole model.\\n\\nIf there's anything obvious we are missing, feel free to comment!\", \"title\": \"attention for compression\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"## Updated review\\n\\nI have read the rebuttal. First I'd like to thank the authors for the detailed rebuttal. \\nThe latest version of the paper addressed all my concerns, hence I change my rating to Accept.\\n\\n## Original review\\n\\nThis paper presents a new variation of the Transformer model, named Compressive Transformer. The key novelty of this model is to preserve long-range memories in a compressed form, instead of discarding them as previous models have done. This improves the long-range dependency modelling capabilities of the approach. The model is evaluated on two common language modelling benchmarks and yields state of the art results in both of them. The paper also introduces a new benchmark for long-range dependency modelling composed of thousands of books. The paper finally presents an analysis of the compressed memory and provides some insights, including the fact that the attention model uses the compressed memory. The model is also evaluated on two other tasks: speech generation and reinforcement learning on videos.\\n\\nI think this paper should be accepted, mainly because:\\n- The proposed model is novel as far as I can tell. 
\n- The presented approach is significant, as modelling long-range dependencies is an important milestone in sequence modelling.\n- The new benchmark is a good addition.\n- The comparison with the relevant literature is thorough and well done.\n- The experiments are convincing and demonstrate the viability of the approach, although some aspects can be improved (see below).\", \"detailed_comments\": [\"About the character-level language modelling on Enwik8, the improvement is very small, and it seems that the task doesn't benefit from having long-range memory. Could it be because character-level modelling is less dependent on the long-range past? Can the authors comment on that? It would also have been interesting to evaluate the gain of the memory, for instance by varying the size of the compressed memory from 0 to 1152.\", \"The WikiText-103 evaluation is interesting, especially Table 6, which shows the advantages of the model. However when comparing with the literature, it's not clear if the performance gain is due to the compressed memory or to the network capacity. A study with different lengths of the compressed memory (starting at 0) would bring some insights about that.\", \"In Section 5.6.2: can the authors justify why the attention weights were split in only 6 bins? Creating a trended curve on only 6 points could be problematic, and I don't see why more bins couldn't be used.\", \"The speech analysis section (5.7) is not very insightful. It shows that the proposed model is on par with WaveNet on unconstrained speech generation, which is not very useful and feels a bit half-finished. I think that the authors should either commit to this study by constraining the model with linguistic features like in (Oord et al. 
2018) and evaluate it in a TTS framework with subjective evaluation or discard this section entirely.\"]}", "{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposes a way to compress past hidden states for modeling long sequences. Attention is used to query the compressed representation. The authors introduce several methods for compression such as convolution, pooling etc. The outcome is a versatile model that enables long-range sequence modeling, achieving strong results on not only language model tasks but also RL and speech. For testing and evaluating the modeling of really long context sequence modeling, the authors introduce PG-19, a new benchmark based on Project Gutenberg narratives.\\n\\nThe idea is a simple and straightforward one. The choices of compression functions are intuitive and natural. The probably more interesting part of this paper is the training schemes designed to train the memory compression network. \\n\\nResults are very strong and there is a pretty diverse set of experiments. That said, it seems like a huge amount of resources were spent on this work alone. It also seems like these models are not trivial to train (or get them to work). It would be interesting to find out how much resources were spent (in terms of preliminary experiments) to getting these models to start working decently. There are also no reports of parameter counts, which might make the experiments unfair. \\n\\nAchieving SOTA is one thing, which could be attributed to large resource pools and maybe larger parameter sizes of models.\\n\\nOverall, I am voting for a weak accept. 
While this paper is more incremental and novelty may be slightly lacking, I think the breadth of experiments and competitive results warrants an acceptance.\", \"several_issues_and_questions_for_the_authors\": \"1) Why are the results on PG-19 not reported in a Table format? Why are there no results of the base Transformer on PG-19? I think this is really necessary and should be reported.\\n2) The authors mention that this memory compression architecture enables long sequence modeling. However, is there an intended way of use for long-text that is not necessarily framed as a LM problem? For instance, results on NarrativeQA benchmark would be nice.\", \"update\": \"I have read the author response and other reviewer's comments. I am happy with the efforts made by the authors and I am raising my score to 8 (accept).\"}", "{\"comment\": \"What are the model sizes (the total number of parameters, hidden sizes, the number of heads, etc.) used in the language modeling experiments? I can't find it in the paper.\", \"title\": \"model sizes\"}", "{\"comment\": \"Seems possible but would be more costly (square vs linear). Would indeed be interesting to see if attention on data compressed with attention is better than attention on data compressed by convolutions is more effective.\\n\\n100 TPUv3 cores is a nice bunch of compute.\", \"title\": \"Re:\"}", "{\"comment\": \"I believe you could've instead used (self)-attention to produce compressed memory. Or is this not viable for some reason?\", \"title\": \"(Self-)attention for compression\"}" ] }
ryedjkSFwr
Global Momentum Compression for Sparse Communication in Distributed SGD
[ "Shen-Yi Zhao", "Yin-Peng Xie", "Hao Gao", "Wu-Jun Li" ]
With the rapid growth of data, distributed stochastic gradient descent~(DSGD) has been widely used for solving large-scale machine learning problems. Due to the latency and limited bandwidth of the network, communication has become the bottleneck of DSGD when we need to train large-scale models, like deep neural networks. Communication compression with sparsified gradient, abbreviated as \emph{sparse communication}, has been widely used for reducing communication cost in DSGD. Recently, there has appeared one method, called deep gradient compression~(DGC), to combine memory gradient and momentum SGD for sparse communication. DGC has achieved promising performance in practice. However, the theory about the convergence of DGC is lacking. In this paper, we propose a novel method, called \emph{\underline{g}}lobal \emph{\underline{m}}omentum \emph{\underline{c}}ompression~(GMC), for sparse communication in DSGD. GMC also combines memory gradient and momentum SGD. But different from DGC which adopts local momentum, GMC adopts global momentum. We theoretically prove the convergence rate of GMC for both convex and non-convex problems. To the best of our knowledge, this is the first work that proves the convergence of distributed momentum SGD~(DMSGD) with sparse communication and memory gradient. Empirical results show that, compared with the DMSGD counterpart without sparse communication, GMC can reduce the communication cost by approximately 100 fold without loss of generalization accuracy. GMC can also achieve comparable~(sometimes better) performance compared with DGC, with an extra theoretical guarantee.
[ "Distributed momentum SGD", "Communication compression" ]
Reject
https://openreview.net/pdf?id=ryedjkSFwr
https://openreview.net/forum?id=ryedjkSFwr
ICLR.cc/2020/Conference
2020
{ "note_id": [ "Dj11MrLeXV", "HJgWOKP0YH", "B1gmm0BAKB", "HJg4VKwbKB" ], "note_type": [ "decision", "official_review", "official_review", "official_review" ], "note_created": [ 1576798735951, 1571875177080, 1571868186813, 1571023147861 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1921/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper1921/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper1921/AnonReviewer3" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"The authors propose a method called global momentum compression for the sparse communication setting, and provide some theoretical results on the convergence rate. The convergence result is interesting, but the underlying assumptions used in the analysis appear very strong. Moreover, the proposed algorithm has limited novelty as it is only a minor modification. Another main concern is that the proposed algorithm shows little performance improvement in the experiments. Finally, more related algorithms should be included in the experimental comparison.\", \"title\": \"Paper Decision\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The authors propose a method called global momentum compression for the sparse communication setting. The contributions can be summarized into 3 parts: switching the DGC setting from local momentum to global momentum, theoretical proof of the convergence, and empirical results showing performance.\", \"however_there_have_several_issues\": \"1. No significant contribution. Although they theoretically prove a new version of DGC, it's just a minor modification and no significant performance improvement as shown from their empirical results.\\n\\t2. 
In the experiments section, as shown in the results, their method seems more stable during training but achieves only a minor improvement in test accuracy. Second, they only compare with DGC and its counterpart baseline. It would be better to include more related algorithms for comparison (e.g., quantization methods such as QSGD or signSGD).\n\t3. Compared with DGC, there is no improvement in saving communication, as shown in their results. Since GMC only changes the DGC setting from local momentum to global momentum, no modification is involved in the compression part of DGC. \n\nOverall, I appreciate the authors' theoretical contribution to DGC and the well-written paper. However, it would be great to show a larger improvement and include more related methods for comparison.\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper proposes a scheme for incorporating compressed (sparsified) gradients with momentum in distributed SGD. The approach differs from others in the literature, comes with theoretical guarantees, and improves performance. The results are correct, and the experiments illustrate that the proposed approach can make a difference (albeit a modest one) in the quality of the resulting model.\n\nThe main point I find dissatisfying about the theoretical results of the paper is the use of Assumption 2. The memory vector is a parameter of the algorithm. I realize that one can enforce this with a projection, as argued in the paragraph rationalizing this assumption. However, that specific case isn't analyzed and it isn't clear how incorporating that projection would affect the accuracy, since it would essentially be countering the effect of error feedback.\n\nI also find Assumption 3 to be strange. 
In the convex setting, one can typically show that this follows from Assumption 1 alone under the additional assumption of a suitably small step size. In the non-convex setting it isn't clear what this means, since w^* is not well defined (if there are multiple global minimizers).\\n\\nAssumption 1 is also strong. Typically one assumes that the stochastic gradients are unbiased, and either that the expected gradient is Lipschitz continuous (in the smooth case), or the expected gradient is bounded (in the non-smooth case). Assuming that the stochastic gradients are uniformly bounded essentially implies that the noise vanishes when the gradient gets large. \\n\\nCan you provide examples of functions/problems satisfying these assumptions? Even an example as simple as the case where $F$ is a finite sum of quadratic functions, and one randomly samples one of the terms in the finite sum to compute the gradient (i.e., using SGD to solve a large linear least squares problem) doesn't appear to satisfy Assumption 1.\\n\\nOverall the results are potentially interesting. I would have given a higher rating if the assumptions didn't appear to be so strong, and if the experimental results demonstrated a more substantial difference with DGC.\"}", "{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"Gradient sparsification is an important technique to reduce the communication overhead in distributed training. In this paper, the authors proposed a training method called global momentum compression (GMC) for distributed momentum SGD with sparse gradient. 
Following existing gradient sparsification techniques such as DGC, GMC is also built upon the memory gradient approach; the major distinction between GMC and existing techniques is that GMC keeps track of the global gradient to maintain the memory gradient, while existing techniques keep track of worker-local gradients for the memory gradient. The primary contributions of the paper are as follows:\n\n1. The authors propose GMC, a training method for distributed momentum SGD with sparse gradient communication. It uses the global gradient (but still achieves sparse communication) to maintain the gradient memory, while existing approaches such as DGC use worker-local gradients to do so.\n\n2. The authors prove the convergence rate of GMC for 1. strongly convex and smooth functions, 2. convex functions, and 3. non-convex Lipschitz smooth functions. This is the first work proving the convergence rate of distributed momentum SGD using sparse communication techniques based on the memory gradient.\n\n3. Empirically, the authors show that GMC can attain the same model accuracy as conventional distributed momentum SGD with a ~100x reduction in communication overhead. It can also match the performance of DGC at the same communication compression rate.\n\nI think in general the ideas and efforts of the authors in proving the convergence rate of distributed momentum SGD with *gradient sparsification* are interesting and important. However, I have some questions and concerns about validating the claims in the paper. I currently give weak reject but I am happy to raise the score if the authors can clarify or improve in their rebuttal / future drafts. The primary questions and concerns (critical to the rating) are:\n\n1. One important claimed advantage of GMC over existing methods is that it uses the global gradient for the memory gradient, while existing methods such as DGC use worker-local gradients to do so. 
But I did not find convincing support for this advantage in the paper: Empirically, in the experimental results, I don't think GMC demonstrates better performance than DGC in a statistically meaningful way; instead, they basically demonstrate matching performance. Theoretically, I am not sure if only the global gradient enables the proof of the convergence rate while the worker-local gradient cannot. My preliminary feeling is that by bounding the gradient variance, it should also be possible to prove a rate for DGC using the worker-local gradient; this is because the difference between the global gradient and the local gradient might be bounded via the gradient variance.\n\n2. In the experiments, the authors focus on momentum SGD for image classification tasks. To better support the versatility and efficacy of GMC, it would be interesting to include some experiments in other domains (e.g. using state-of-the-art transformer-style models for NLP tasks). In these models, momentum-like components are also used in the optimizer (e.g. Adam in fairseq for machine translation); it would be interesting to see if the efficacy of GMC also empirically transfers to these settings.\n\nMinor questions (influencing the rating in a secondary way) \n\n1. Regarding the assumptions in the paper, I think assumption 2 needs some validation / support to show that it is a proper one. My preliminary feeling is that assumption 2 is intuitive, as the sparsification procedures only zero out small values, so the error introduced in the gradient is small and bounded. But it would be more convincing to empirically show the magnitude of u compared to the magnitude of the gradient g in Eq. 8. \n\n2. I notice that the experiments use conventional momentum SGD for a few epochs as warm-up; is there any specific reasoning for using this warm-up approach instead of the sparsity-level warm-up used in DGC?\n\n3. In the experiments, GMC does not use the factor masking trick while DGC does. 
If the goal is to demonstrate the benefits of the global gradient for the gradient memory, I think it would be more proper to also include results for DGC without factor masking. In this way, this question can be answered directly, as an ablation study, by eliminating the possible contribution of using/not using factor masking. \n\n\nNits to improve the paper (not related to the rating):\n\n1. The last contribution bullet forgets to mention that it is about comparing to DGC.\n\n2. In algorithm 1, it would be clearer to mention how the mask m is generated (e.g. based on magnitude).\n\n3. In the second paragraph of section 3.2, the vector inner product between the coefficients and w is not properly written.\n\n4. Above theorem 1, in the text, the conditions for the discussion of the two cases are confusing.\n\n5. In the definition of CR in the first paragraph of section 5, why does the summation start from 5? The text describes a warm-up of 5 *epochs*, while the equation implies a warm-up of 4 *steps*.\"}" ] }
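The reviews above repeatedly refer to the error-feedback ("memory gradient") mechanism shared by DGC and GMC. As an illustrative sketch only (the function name and toy vectors are ours, not the authors' code), one communication round of top-k sparsification with a residual memory can be written as:

```python
import numpy as np

def sparsify_with_memory(grad, memory, k):
    """One round of top-k gradient sparsification with error feedback.

    `memory` holds the coordinates that were not transmitted in earlier
    rounds; they are added back before selecting the k largest entries,
    so small updates are eventually sent rather than lost.
    """
    acc = grad + memory                     # memory gradient: g_t + u_t
    idx = np.argsort(np.abs(acc))[-k:]      # indices of the k largest magnitudes
    mask = np.zeros_like(acc, dtype=bool)
    mask[idx] = True
    sent = np.where(mask, acc, 0.0)         # sparse message sent to the server
    new_memory = np.where(mask, 0.0, acc)   # residual kept locally for next round
    return sent, new_memory
```

DGC-style methods maintain such a memory over each worker's local (momentum) gradient, while GMC, as described in Review #3, maintains it with respect to the global gradient; the compression step itself is identical, which is consistent with Review #2's observation that GMC brings no additional communication savings over DGC.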
BkevoJSYPB
Differentiation of Blackbox Combinatorial Solvers
[ "Marin Vlastelica Pogančić", "Anselm Paulus", "Vit Musil", "Georg Martius", "Michal Rolinek" ]
Achieving fusion of deep learning with combinatorial algorithms promises transformative changes to artificial intelligence. One possible approach is to introduce combinatorial building blocks into neural networks. Such end-to-end architectures have the potential to tackle combinatorial problems on raw input data such as ensuring global consistency in multi-object tracking or route planning on maps in robotics. In this work, we present a method that implements an efficient backward pass through blackbox implementations of combinatorial solvers with linear objective functions. We provide both theoretical and experimental backing. In particular, we incorporate the Gurobi MIP solver, Blossom V algorithm, and Dijkstra's algorithm into architectures that extract suitable features from raw inputs for the traveling salesman problem, the min-cost perfect matching problem and the shortest path problem.
[ "combinatorial algorithms", "deep learning", "representation learning", "optimization" ]
Accept (Spotlight)
https://openreview.net/pdf?id=BkevoJSYPB
https://openreview.net/forum?id=BkevoJSYPB
ICLR.cc/2020/Conference
2020
{ "note_id": [ "kLCrV6CyN", "H1l7e7n2jB", "B1gviFQvoB", "H1l8znxvjS", "r1ebFcgwoB", "HkgC2txwoH", "B1lpzueDoS", "SyepKWTAKB", "Hke2mmXCYB", "ByxYxfZaKr" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798735921, 1573860075368, 1573497246875, 1573485582402, 1573485176571, 1573484982419, 1573484564722, 1571897732583, 1571857187824, 1571783152658 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1919/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper1919/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper1919/Authors" ], [ "ICLR.cc/2020/Conference/Paper1919/Authors" ], [ "ICLR.cc/2020/Conference/Paper1919/Authors" ], [ "ICLR.cc/2020/Conference/Paper1919/Authors" ], [ "ICLR.cc/2020/Conference/Paper1919/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper1919/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper1919/AnonReviewer3" ] ], "structured_content_str": [ "{\"decision\": \"Accept (Spotlight)\", \"comment\": \"This paper proposes a method for efficiently training neural networks combined with blackbox implementations of exact combinatorial solvers.\\n\\nReviewers and AC agree that it is a well written paper with a novel idea supported by good experimental results. Experimental results are of small scale and can be further improved, but the authors acknowledged this aspect well.\\n\\nHence, I recommend acceptance.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Reviewer Response\", \"comment\": \"Thanks for the additional details and clarifications! It is surprising the constant \\\\lambda baseline is quite strong. 
I have gone through the other reviews, the discussions, and this thread, and agree with R3 that this is a clear accept, and have also updated my score to an 8.\"}", "{\"title\": \"Addressing the authors' replies\", \"comment\": [\"After reading the authors' replies, I have changed my score to 8 as I believe this paper is a clear accept.\", \"I still think the paper is a bit weak experimentally (for example, the authors could have applied the method presented in Bello et al with a small convnet to extract (x,y,z) from the country flags) but the authors have presented their work in a fair and honest light and addressed potential concerns.\", \"Meaning of theoretical guarantees: I am also not aware of techniques to evaluate such gradients. I believe the presentation could be a bit improved: the flow of the paper (during my first pass) made me expect some sort of theoretical guarantee.\", \"My comments on the type of supervision and related work are suggested as potential improvements of the paper.\", \"I recommend adding results with non-exact solvers, as they show that the feature extraction process also works with non-exact solvers.\"]}", "{\"title\": \"Response to Reviewer #3\", \"comment\": \"Thank you for your comments.\\n\\nRegarding approximate solvers and baselines, please refer to the joint part of our response.\\n\\n== Meaning of Theoretical Guarantees ==\\n\\nTheorem 1 indeed does not give any usual type of guarantee. To our knowledge, there are however no established techniques for evaluating gradients suggested for piecewise constant functions (any kind of comparison to the true zero gradient misses the point).\\nIn this uncharted territory, our intention was to give a theoretical description and guarantees about the *process* sketched in Figure 3.\\n\\nThe technical insights embedded in the proofs might also allow for proving different types of guarantees. 
What would be a convincing statement about piecewise constant function interpolation that Reviewer 3 would like to see?\\n\\nWe can certainly make some improvements on the presentation side. For example, connect Property A2 to Figure 3 (green regions shrink with lambda) or offer a more intuitive interpretation of Property A3 (it suggests that gradients of f\\\\_lambda are reasonable everywhere -- as elementary interpolators have certainly reasonable gradients).\\n\\n== Type of Supervision == \\n\\nWe agree that supervision based on the value of the combinatorial objective is natural for example in reinforcement learning scenarios and we will look into it in the future. The full supervision we use is however not artificial. The motivation comes from computer vision tasks where the ground truth assignment is typically known (e.g. segmentation, stereo matching, pose estimation).\\n\\n== Additional Related Work ==\\n\\nDriven by maintaining focus on the main message, we primarily included literature at the intersection of deep learning and combinatorial optimization in the related work section. The literature that is relevant from the optimization point of view is cited throughout the method section (differentiation through argmin is also discussed). If the reviewer is missing references to concrete works, we will include them. We agree that if the list of optimization references grows by more than a couple of papers, it would be reasonable to introduce a new subsection of Section 2, and we will do so in that case.\"}", "{\"title\": \"Response to Reviewer #1\", \"comment\": \"Thank you for your comments and positive appraisal of our paper.\\n\\n== Discussion on Approximate Solvers ==\\n\\nThe remark about approximate solvers is particularly aimed at computer vision applications (graph matching, multicut etc.). The combinatorial instances in such applications are large and exact solvers become impractical (e.g. 
Gurobi solver spends significant computation time proving optimality of an already known solution). We will clarify this in the final version of this work.\\n\\nSee also our common response for baselines and an additional experiment with an approximate solver.\"}", "{\"title\": \"Response to Reviewer #2\", \"comment\": \"Thank you for your assessment. Please also refer to our common answer above.\\n\\n== Linear Programs ==\\n\\nThe distinction to make about linear programs is the output format. If the linear program seeks to find an integral solution (ILPs) our method is applicable out of the box. We agree this opens interesting directions for future work. However, if continuous solutions are required, the function at hand is no longer piecewise constant and other methods (such as [Amos \\\\& Kolter]) may be preferable in terms of gradient estimation/computation. Also note that our method, in its current form, is designed to optimize an objective and not decide feasibility, as is the case for Sudoku. It takes a deeper thought whether we can generalize the method in that direction.\\n\\nAlso, thank you for pointing out [Elmachtoub, A. N., \\\\& Grigas, P. Smart]; we were not aware of this work and will include a reference for the final version.\\n\\n== Lambda Adaptation ==\\n\\nWe deliberately did not include any kind of lambda scheduling into the paper as we wanted to keep the method in its purest possible form. However, the proposed (and other) ideas are very inviting and we are currently looking into them in ongoing work. Surprisingly, it seems that only marginal improvements are possible and the constant lambda baseline is quite strong.\"}", "{\"title\": \"Common Answer\", \"comment\": \"We thank the reviewers for their comments.\\n\\n== Choice of Baselines ==\\n\\nOur main aim was to enable solving various types of combinatorial problems from raw inputs without making any concessions on the combinatorial side. 
From that perspective, there is no clear baseline to compare with (other than maybe zero-order methods). We believe this puts us in a similar position to the works [Wang and Kolter] and [Amos and Kolter], who provided similar building blocks for convex optimization and satisfiability. We actually built on their experimental design -- compare against a ResNet on clean synthetic tasks -- however, with larger dimensionalities in both the raw images and the solver inputs. The main purpose of the ResNet baseline is to make sure the datasets do not contain easily exploitable features.\n\nGiven that there is a volume of work at the intersection of deep learning and combinatorial optimization -- as we also list in Section 2 -- it seems hard to believe that there is no appropriate baseline. Let us briefly explain why, for example, the works [Bello et al, Deudon et al., Kool et al.] are not comparable to ours. We found similar mismatches with the other cited literature.\n\nThe works [Bello et al, Deudon et al., Kool et al.] aim to compete with the dedicated solvers purely on the combinatorial side (see their experimental sections). They are not aspiring to be neural network building blocks. In fact, they are not even fully differentiable, as the underlying reinforcement learning algorithm executes a sequence of discrete actions (i.e. the same piecewise constant structure emerges) to find the TSP tour. We do not see any natural way of differentiating these pipelines other than casting them as blackbox solvers and using our method.\n\nHaving said all of this, we do not claim (and never have) that our experimental section is a decisive proof of performance; this remains to be seen on more complex real-world applications. At this point, we claim a proof of concept with a broad potential for follow-up applications.\n\n== Additional Experiment == \n\nWe would like to propose to include the results of running our method with an approximate solver. 
For this, we use the Google OR-Tools solver in the TSP experiment.\nWe draw two conclusions from the numbers presented below.\n\n1) The choice of the solver matters. Even if OR-Tools is fed with the ground truth representations (i.e. true locations) it does not achieve perfect results on the test set (see the right column). We expect that, also in practical applications, running a suboptimal solver (e.g. a differentiable relaxation) substantially reduces the maximum attainable performance.\n\n2) The suboptimality of the solver didn't harm the feature extraction -- the point of our method. Indeed, the learned locations yield performance that is close to the upper limit of what the solver allows (compare the middle and the right column).\n\n Accuracy of perfect paths\n Embedding OR-Tools Solver OR-Tools on GT representation\n k Train% Test% Test %\n 5 99.8 99.3 100.0\n 10 84.3 84.4 88.6\n 20 49.2 48.6 54.4\n 40 14.6 15.1 15.2\n\nWould you recommend including this in the paper?\"}", "{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper shows how end-to-end learning can be done through\\ncombinatorial solvers by using the derivative of a\\ncontinuous surrogate function in the backward pass.\\nOne elegant part of the method is that no modification\\nor relaxation is done to the combinatorial solver in\\nthe forward pass and that the backward pass just requires\\nanother call to the blackbox solver.\\n\\nThe idea of constructing continuous surrogate functions\\nand using them for differentiating through solvers with\\npiecewise-constant output spaces is thought-provoking and\\nI can see it inspiring many new directions of work.\\nFor example, looking at Figure 2 for intuition, one could\\nimagine other 
ways of making the solution space continuous.\nThe solution space of linear programs over continuous spaces,\nas considered in [Elmachtoub & Grigas], the Sudoku example in\n[Amos & Kolter], and related papers, is also piecewise constant and\nit seems like a similar method could be used to bring more\ninformative derivative information to linear programs ---\nhave you considered this as a future direction?\n\nOne of my concerns with this work is that the ResNet baseline in the\nexperimental results seems like too much of a straw man for the tasks.\nI do not see why it should have the capacity to generalize well.\nThis paper shows the ResNet baseline achieves near-zero\ntest accuracy but doesn't compare to other relevant baselines\", \"that_are_mentioned_in_the_related_work_section\": \"for example [Bello et al, Deudon et al., Kool et al.] for the TSP.\", \"and_one_smaller_comment\": \"If one wanted to squeeze the performance even\\nmore, would starting the training process with a large \\\\lambda\\nand annealing it to zero help?\\n\\n----\\n\\nElmachtoub, A. N., & Grigas, P. Smart \\\"predict, then optimize\\\". arXiv 2017.\"}", "{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper proposes a straightforward method for training black box solvers of a restricted kind (namely those with inputs in R^n and linear cost functions). The proposed algorithm is tested on path finding, the travelling salesman problem, and a min-cost-perfect-matching problem, with promising results.\\n\\nI would recommend accepting this paper. It is a well written paper with a novel idea supported by good experimental results.\\n\\nThe caveat is that I did not have the time to thoroughly review all the mathematical details. 
From a high level, they looked correct, and the math is sufficiently illustrated with figures and examples that it is easy for a reader to follow in detail given enough time.\n\nThe main shortcomings I see are that there are no experimental results comparing this method against any existing results; the authors do compare against their own ResNet18 implementation, but this is not ideal.\", \"i_found_the_discussion_a_bit_cryptic\": \"Why are approximate solvers needed for real-world problems? Are there no real-world problems where exact solvers are still applicable?\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"=== Summary ===\\nThe authors propose a method for efficiently backpropagating through unmodified blackbox implementations of exact combinatorial solvers with linear objective functions. \\nThe gradient of such exact combinatorial solvers exists almost everywhere but is zero. The authors remark that the loss has the same gradient with respect to the solver's input as its linearization around the solver's input. They therefore propose to interpolate the loss's linearization with a continuous (piecewise affine) function and use the gradient of this interpolation to backpropagate through the solver. 
This gradient is obtained efficiently by simply calling the solver on a single perturbed input (the perturbation depends on the incoming gradient, i.e. the gradient of the loss with respect to the solver's output).\nThe authors further study the properties of this piecewise affine interpolation and characterize its interpolation behavior as a function of a hyperparameter which controls the trade-off between \"how informative the gradient is\" and \"how faithful the interpolation is to the original solver\".\", \"the_authors_validate_their_method_with_experiments_on_synthetic_tasks_that_have_both_a_visual_processing_aspect_and_a_combinatorial_aspect\": \"- Shortest Path on Warcraft II terrain maps\\n - TSP between country capitals where the inputs to the convnet are country flags\\n - Min-cost perfect matching from MNIST digits.\\nSpecifically, they feed the output of a convnet to the relevant solver (depending on the task) and learn end-to-end by backpropagating through the solver with their proposed method. They show that their method successfully solves the tasks where baseline ConvNet architectures fail.\\n\\n=== Recommendation ===\\n\\nThis paper addresses an important problem and presents a novel approach.\\n\\nMethods for combining combinatorial optimization algorithms and machine learning usually rely on modifying or relaxing the combinatorial problem itself, which prevents using solvers as-is. \\nIn contrast, the presented method makes it possible to efficiently backpropagate through unmodified implementations of blackbox exact solvers with a linear objective. AFAIK this is the first method that allows this.\\n\\nA weakness of the paper is that the experiments only validate a proof of concept (as noted by the authors). 
They are small-scale and only compare against conventional ConvNet baselines (as opposed to other approaches that backpropagate through relaxed combinatorial problems).\nAdditionally, the characterization of the interpolation (whose gradient is used) doesn't directly explain why the gradient of the interpolation is a reasonable choice.\n\nOverall, I recommend acceptance.\n\n=== Questions / Comments ===\n- The authors show properties related to the interpolation behavior of the proposed interpolation function. What is the actual point/benefit of satisfying these properties? Are there arguments for why this is important besides the experimental results? Is the point that, since lambda controls how \"faithful vs informative\" the gradient is, there must be a range of values for lambda for which the method works? \n- It would be interesting to have experiments with non-exact solvers.\n- It would be interesting to optimize directly for the combinatorial objective in the experiments (using a policy gradient, for example) rather than performing supervised learning on the solutions.\n- Consider adding a related work subsection on argmin optimization and meta-learning.\"}" ] }
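The backward pass that the reviews above describe (a single extra call to the unmodified solver on a perturbed input) can be sketched concretely. The following is an illustrative NumPy reading of the method using a toy one-hot argmin "solver"; the function names and the toy setup are ours, deliberately minimal, and not the authors' reference implementation.

```python
import numpy as np

def solver(w):
    # Toy blackbox combinatorial solver: returns the one-hot indicator of
    # the minimum-cost item, i.e. argmin_y <w, y> over the unit vectors.
    y = np.zeros_like(w)
    y[np.argmin(w)] = 1.0
    return y

def blackbox_backward(w, y, grad_y, lam):
    # Backward pass through the blackbox solver: perturb the solver's input
    # by the incoming gradient grad_y (the gradient of the loss w.r.t. the
    # solver's output), call the solver once more, and return a
    # finite-difference surrogate gradient. `lam` trades how informative
    # the gradient is against how faithful the interpolation stays to the
    # original piecewise-constant map.
    y_perturbed = solver(w + lam * grad_y)
    return -(y - y_perturbed) / lam
```

With costs w = [1, 2, 3] the toy solver picks item 0; if the loss prefers item 1 (incoming gradient [0, -1, 0]) and lam = 2, the surrogate gradient is [-0.5, 0.5, 0], so a descent step lowers the cost of item 1 until the solver switches to it. The hyperparameter lam realizes exactly the "faithful vs informative" trade-off that Review #3 asks about.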
rkxDoJBYPB
Reinforced Genetic Algorithm Learning for Optimizing Computation Graphs
[ "Aditya Paliwal", "Felix Gimeno", "Vinod Nair", "Yujia Li", "Miles Lubin", "Pushmeet Kohli", "Oriol Vinyals" ]
We present a deep reinforcement learning approach to minimizing the execution cost of neural network computation graphs in an optimizing compiler. Unlike earlier learning-based works that require training the optimizer on the same graph to be optimized, we propose a learning approach that trains an optimizer offline and then generalizes to previously unseen graphs without further training. This allows our approach to produce high-quality execution decisions on real-world TensorFlow graphs in seconds instead of hours. We consider two optimization tasks for computation graphs: minimizing running time and peak memory usage. In comparison to an extensive set of baselines, our approach achieves significant improvements over classical and other learning-based methods on these two tasks.
[ "reinforcement learning", "learning to optimize", "combinatorial optimization", "computation graphs", "model parallelism", "learning for systems" ]
Accept (Poster)
https://openreview.net/pdf?id=rkxDoJBYPB
https://openreview.net/forum?id=rkxDoJBYPB
ICLR.cc/2020/Conference
2020
{ "note_id": [ "FdwJq_Jl-P", "HJxVQ5MnjH", "S1xv-cfhsH", "r1ezDrz2iS", "rkezBNn5oH", "SJlqiP0S5H", "SJlyOpyNcH", "ryxXud915S" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798735891, 1573820955874, 1573820927396, 1573819737880, 1573729338184, 1572362145557, 1572236647356, 1571952747385 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1918/Authors" ], [ "ICLR.cc/2020/Conference/Paper1918/Authors" ], [ "ICLR.cc/2020/Conference/Paper1918/Authors" ], [ "ICLR.cc/2020/Conference/Paper1918/Authors" ], [ "ICLR.cc/2020/Conference/Paper1918/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper1918/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper1918/AnonReviewer1" ] ], "structured_content_str": [ "{\"decision\": \"Accept (Poster)\", \"comment\": \"The submission presents an approach that leverages machine learning to optimize the placement and scheduling of computation graphs (such as TensorFlow graphs) by a compiler. The work is interesting and well-executed. All reviewers recommend accepting the paper.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Continuation\", \"comment\": \"> - Could the authors clarify why the two methods mentioned in \\u201cLearning to directly predict a solution\\u201d has quadratic complexity w.r.t. # of nodes and whereas REGEL is linear?\\n\\nLet n be the number of nodes in the input graph for which placement and scheduling decisions need to be predicted. Predicting the decisions with an autoregressive model will need O(n) steps, where each step involves performing inference on the graph neural network. Since a single inference pass on the GNN has at least O(n) cost, the total prediction cost scales as O(n^2). 
We also experimented with a non-autoregressive approach for predicting the decisions that has O(n) total cost, but the results were significantly worse. REGAL performs a single inference pass on the GNN, so it has O(n) cost.\n\n> - Confusion on Figure 4(b): Could some more critical statistics about the graphs in the training/test dataset be reported? e.g. what\u2019s the average depth of the training graphs? When there are 32 MP layers a node\u2019s feature will be passed across its 32-hop neighborhood, which seems surprising as it is common to observe GNN starts degenerating with increased depth \u2026\n\nWe added Figure 6 in the appendix to show the distribution of the diameters of graphs in our dataset.\n\nWe do observe a plateau of GNN performance with increased depth (as reported in Figure 4(b)), but no significant drop with large depth. In principle, even when the number of layers is larger than the graph diameter, the GNN can still use the additional layers to do more computation, which can be helpful for making predictions. [Selsam et al. (2019)] (https://arxiv.org/abs/1802.03685 ) shows an extreme example of this where 1000 message passing layers were used in GNNs to make predictions on graphs with far fewer nodes, and improved performance was reported with an increased number of message passing layers even up to 1000. Training GNNs with large depth may be more challenging than training shallower GNNs, but various techniques can be applied to make this easier, e.g. adding GRU / LSTM-style gating or residual connections. 
Overall we did not experience the degeneration at 32 message passing layers that the reviewer suspected.\"}", "{\"title\": \"Response to \\\"Official Blind Review #3\\\"\", \"comment\": \"> The paper is well-written and I enjoyed reading the paper.\\n\\nThanks for your comments!\\n\\n> Some more descriptions about the BRKGA algorithm ...\\n\\nSee changes to Section 3.2.\\n\\n> - I am very confused by one of the claims that \\u201c the first work on learning a policy for jointly optimizing placement and scheduling\\u201d. \\u2026\\n\\nGood point. Our claim was not clearly stated, and we have changed the discussion in the introduction. The works we cite that learn a policy for device placement relied on TensorFlow\\u2019s dynamic scheduler to make the scheduling decisions. In that setting, we do not claim that one should jointly optimize placement and scheduling, and it\\u2019s not obvious how to do so.\\n\\nOn the other hand, we have approached the problem from the perspective of static scheduling, which applies in a number of recently developed compilers for deep learning computation graphs. In the static setting, jointly deciding the assignment from the operations to devices and the schedule of operations within a device is a classical problem and hence a natural one to solve; see Kwok and Ahmad (1999) and Sinnen (2007), both cited in our paper. Deciding on placement and scheduling separately makes the task harder. The poor performance of the GP+DFS baseline is an example of this; the graph partitioner ignores scheduling when making placement decisions. A similar motivation for joint optimization can also be found for the problem of CPU instruction scheduling and register allocation, see e.g., Motwani et al. (1995) https://pdfs.semanticscholar.org/1b7d/20b856fd420f93525e70a876853f08560e38.pdf.\\n\\nWe have indeed performed ablation tests on the value of learning in the placement and scheduling spaces; see appendix section A.12. 
These results suggest that the majority of gains from REGAL are in fact thanks to learning better sampling distributions in the scheduling part of the action space.\\n\\n> - The model is trained with standard REINFORCE -- how many training time and resources are needed to train a REGEL model for a task? How\\u2019re the training dynamics looking like (variance, convergence, etc?)? \\n\\nSee figure 7 in section A.3 for the training set reward curves for runtime and peak memory minimization tasks. The graph neural network policy is trained using 10 cores of a CPU machine (no GPUs were used), with multi-threading used by BRKGA for evaluating chromosomes and by TensorFlow. A training run takes approximately 2-3 days to be completed.\\n\\n> - In terms of the generalization ability of REGEL, the paper has clearly shown that REGEL is able to generalize to differently shaped graphs, with acceptable cost, but I am wondering for the same dataflow graph, how REGEL generalizes to different input data configurations (size, modality, etc.)? \\u2026\\n\\nWe have not tested REGAL in this setting, opting to focus on the harder task of generalizing across graphs with different topologies. It would indeed be interesting to see if the performance gains are greater when the dataset is restricted to variations on a single topology.\\n\\nTo partially address this question, we performed an additional analysis on our results. We\\u2019ve added figure 12 in the appendix to show a breakdown of the reward by unique graph topology on the TF runtime test set (note that as per section A.1.2 we created 99 additional copies of each graph and randomly modified the tensor sizes). We see that for many graphs, the effect of REGAL (most often, the improvement from REGAL) is consistent within the family. 
This suggests that REGAL is identifying patterns that are specific to an architecture.\n\n> - It seems the method and assumptions about graphs or training data are pretty coupled with TensorFlow and graph-mode execution, how could the method be generalized to other ML frameworks (e.g. frameworks with eager execution)\n\nThe method and its assumptions are indeed tied to static scheduling; however, it\u2019s not accurate to say that they are coupled to TensorFlow; they apply to any optimizing static compiler for neural network computation graphs. Such compilers include Glow, MLIR, TVM, and XLA. XLA, for example, can be used from TensorFlow, PyTorch, Jax, and Flux/Julia.\n\nFor a pure eager-mode setting, our methods do not apply and would need to be substantially redesigned. The work of Mao et al. (2019) may be interesting in this regard, in that they apply learning to an on-line scheduling problem where both a schedule and mapping onto hardware must be decided as new jobs arrive to a data processing cluster.\"}", "{\"title\": \"Response to \\\"Official Blind Review #2\\\"\", \"comment\": \"Thank you for the review and the interesting questions!\n\n> 1. The detailed explanations of o_a(G) and o_s(G) should be included.\n\no_a(G) and o_s(G) are defined as the objective value of the best solution for graph G found by BRKGA using, respectively, 1) the mutant sampling distributions predicted by the GNN, and 2) uniform distributions (i.e., as is done in standard BRKGA).\n\nMultiple reviewers have requested additional details on how BRKGA works, so we have added a self-contained description of the meta-heuristic algorithm to Section 3.2.\n\n> 2. How were the attribute vectors x_v and x_e defined in your experiments?\n\nThe specific node features x_v and edge features x_e are described in section A.2 of the appendix. We have expanded on this description.\n\n> 3. 
The baseline (GP+DFS) may not be strong enough, since it is designed to reduce the communication cost. With the information of the input size and time complexity of ops, a better greedy algorithm can be designed. Moreover, the performance of Local Search and BRKGA 5K are similar, and REGAL is just slightly better than BRKGA 5K. Hence, the improvement over the best efficient greedy algorithm seems small.\\n\\nWe have acknowledged in Section 5.3 that GP+DFS is a weak baseline. We compare with it because a similar GP approach is used by XLA for model parallelism and by Mirhoseini et al. (ICML \\u201817) as a baseline.\\n\\nIf we understand your suggestion about a greedy algorithm, this would be one that sequentially decides which task to run next and on which device. One issue with this approach is that it could get stuck with no feasible moves to make due to the memory constraints. It would nevertheless be possible to try this (see, e.g., https://arxiv.org/abs/1711.01912 which we recently discovered), although we expect that Tuned BRKGA would provide higher-quality solutions.\\n\\nAlso, the hyperparameters for local search were tuned using grid search the same way as Tuned BRKGA, so it should be compared to Tuned BRKGA rather than BRKGA 5K. 
The gap in percent improvement between local search and Tuned BRKGA is larger than the gap between local search and BRKGA for the TF Runtime and Synthetic Runtime test sets (Table 1).\\n\\nIn our opinion, the improvements are not small, they have to be judged with respect to how difficult it is to obtain these improvements - see room for improvement (Table 1; \\u201cGap from best known\\u201d), effort required to gain the same improvement for BRKGA (Fig 3) and absolute improvements (Fig 2).\\n\\n> Overall, the studied topic is interesting, and this paper is also intriguing.\\n\\nThanks!\"}", "{\"title\": \"Response to \\\"Official Blind Review #1\\\"\", \"comment\": \"Thank you for the review and the interesting questions!\\n\\n> Then the authors use a heuristic BRKGA to learn a policy ..., that actually works on unseen graphs. \\n> ...it is not immediately clear to the reader the effect of BRKGA on the mapping of the graph to the resource network and why it works so well, \\u2026\\n\\nTo clarify, BRKGA is a genetic algorithm we use to solve the joint placement and scheduling problem. BRKGA guides its search based on the solutions seen so far, like a classical optimization algorithm. We introduce learning by training a Graph Neural Network (GNN) that defines a mapping from computation graphs to mutant sampling distributions for BRKGA. The combination of GNNs and BRKGA is REGAL.\\n\\nWe expanded the description of BRKGA in Section 3.2 to help clarify its role. It is challenging to explain why it works so well. In addition to the references cited in the paper, we recommend the tutorial given by Resende at CLAIO/SBPO 2012 (http://mauricio.resende.info/talks/2012-09-CLAIO2012-brkga-tutorial-both-days.pdf ). 
BRKGA is a relative of the cross-entropy method (https://doi.org/10.1007/s10479-005-5724-z ) that has been successfully applied in combinatorial optimization and machine learning.\\n\\nNote also that to apply BRKGA to a specific problem, one must design a mapping from [0, 1]^n to the space of solutions. An exploration of design choices here could yield insights but is outside the scope of this work.\\n\\n> - Can you explain why the beta distribution choices ... have a negative impact on the makespan in certain cases? ... \\n\\nWe don\\u2019t have detailed insights for why the learned policy performs worse than BRKGA for certain cases. However, it is easy to formulate an example where this can occur\\u2014consider a mutant sampling distribution that has unit probability mass at a single poor solution. In such a case, REGAL (i.e., BRKGA with this bad sampling distribution) will never sample good solutions, but plain BRKGA may find better solutions by using the uniform random distribution.\\n\\n> - To what extent are the simulations realistic? ...\\n\\nWe have validated our performance model in an end-to-end production setting that is more restricted than the setting in the paper. When the number of devices (i.e., d) equals 1, the performance model reliably identifies schedules with low peak memory usage. The runtime part of the simulation, only non-trivial when d > 1, has not yet been validated with experiments on hardware. We expect that it will be necessary to model the asynchronous aspect of transfers in order to accurately predict runtimes on real hardware. \\n\\nRather than claiming that the performance model is realistic, we have claimed that it provides a challenging (i.e., NP-hard) setting in which to study how to learn an optimizer. Maintaining the simpler performance model also allows us to compare with baselines like constraint programming (CP), which help us validate the methodology. 
While CP would be hard to extend to more complex performance models, REGAL can be applied just as well.\\n\\n> - Have you tried Scotch? \\u2026\\n\\nWe have not tried Scotch; however, our GP+DFS baseline is analogous to Scotch, to the best of our knowledge. We set up the graph partitioning (GP) problem as follows: Each node in the graph is a TensorFlow operation, and edges represent direct data dependencies, with weights proportional to the sizes of the tensors. We aim to find a partitioning of the nodes into d (= 2) disjoint subsets such that the weight of edges that cross the subsets is minimized. We believe that this matches the graph partitioning setup, e.g., reported in Mirhoseini et al. (ICML \\u201817).\\n\\nIn place of Scotch, we use an implementation of the classical Kernighan\\u2013Lin algorithm modified to support weighted edges. We chose this proprietary implementation over Scotch because it\\u2019s already in use in an optimizing compiler for device-placement decisions.\\n\\n> - Can you obtain insights with respect how you could cluster the TensorFlow computation graphs? \\n\\nWe have not yet tried to obtain insights about how the policy\\u2019s behavior can be used to cluster computation graphs. One possibility is to learn a fixed dimensional graph-level embedding as part of the policy network, and then to cluster the embedding vectors for the training set graphs. Another possibility, as mentioned in Section 6, is to use a Mixture of Experts architecture for the policy, and once trained, analyze which graphs are selected by which experts to understand how the policy clusters graphs. Both of these are interesting directions for future work.\\n\\n> - Can you improve the discussion on BRKGA? \\u2026 \\n\\nSee changes to Section 3.2.\\n\\n> - ... can you comment on any insights concerning the structure of the partition and schedule? 
\n\nWe have added a new section (A.13), where we provide some insights into the structure of the joint placement and scheduling policy at the node-level. While we see some patterns in Figure 10, the overall learned policy remains non-trivial.\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"In this paper, the authors proposed a framework for generating a task schedule for a compiler to reduce the execution cost of neural networks. A computation graph is first fed into a GNN to produce a beta distribution, which is then fed into the BRKGA algorithm to yield the encoded solutions. The motivation is interesting, and the proposed method is technically reasonable. The details are also included in the appendix. To improve the quality, the following concerns may be considered:\n\n1. The detailed explanations of o_a(G) and o_s(G) should be included.\n\n2. How were the attribute vectors x_v and x_e defined in your experiments?\n\n3. The baseline (GP+DFS) may not be strong enough, since it is designed to reduce the communication cost. With the information of the input size and time complexity of ops, a better greedy algorithm can be designed. Moreover, the performance of Local Search and BRKGA 5K are similar, and REGAL is just slightly better than BRKGA 5K. 
Hence, the improvement over the best efficient greedy algorithm seems small.\\n\\nOverall, the studied topic is interesting, and this paper is also intriguing.\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"Summary\\n\\nThis paper proposes an ML-based method to optimize TensorFlow Graph execution. Specifically, it combines graph neural networks (GNNs) and BRKGA (a genetic algorithm) to search over the joint space of TF node-device placement and scheduling. The core claims on the advantages of this method are that (1) it co-searches placement and scheduling space, (2) the trained model can generalize to different graphs and inference cost is very small. The experimental results show that REGEL can outperform a few baseline methods on this problem.\\n\\nWriting\\n- The paper is well-written and I enjoyed reading the paper.\\n- Some more descriptions about the BRKGA algorithm could be added in.\\n\\n\\nMethod and Results\", \"some_confusion_if_the_authors_could_answer\": [\"I am very confused by one of the claims that \\u201c the first work on learning a policy for jointly optimizing placement and scheduling\\u201d. I don\\u2019t see much evidence in the result section about showing the co-searching the joint space yield advantages? I am fairly familiar with the line of work on only optimizing device placement, but it would be good to see some ablation studies showing search over the joint space is advantageous.\", \"The model is trained with standard REINFORCE -- how many training time and resources are needed to train a REGEL model for a task? 
How\\u2019re the training dynamics looking like (variance, convergence, etc?)?\", \"In terms of the generalization ability of REGEL, the paper has clearly shown that REGEL is able to generalize to differently shaped graphs, with acceptable cost, but I am wondering for the same dataflow graph, how REGEL generalizes to different input data configurations (size, modality, etc.)? E.g. if the batch size of the input data is changed, the execution time of each kernel and their memory usage (in general, the system treatment) would change; Can a trained REGEL model on a data config A generalize to B? How would this affect the performance of REGEL?\", \"It seems the method and assumptions about graphs or training data are pretty coupled with TensorFlow and graph-mode execution, how could the method be generalized to other ML frameworks (e.g. frameworks with eager execution)\", \"Could the authors clarify why the two methods mentioned in \\u201cLearning to directly predict a solution\\u201d has quadratic complexity w.r.t. # of nodes and whereas REGEL is linear?\", \"Confusion on Figure 4(b): Could some more critical statistics about the graphs in the training/test dataset be reported? e.g. what\\u2019s the average depth of the training graphs? When there are 32 MP layers a node\\u2019s feature will be passed across its 32-hop neighborhood, which seems surprising as it is common to observe GNN starts degenerating with increased depth (because all node features become similar during message passing)\"]}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": [\"In this work the authors propose a deep RL approach to minimize the makespan and the peak memory usage of a computation graph as produced by MXNet/PyTorch/TensorFlow. 
This is an increasingly important problem as distributed deep learning is necessary in many cases. The authors aim to minimize the execution time and/or the peak memory usage. For this purpose they generate a training dataset out of a real-world dataset of various TensorFlow computation graphs using simulation software. The proposed RL approach consists of two steps. First a GNN is used to derive representations for computation graphs. Then the authors use a heuristic BRKGA to learn a policy for the placement of computation graphs, that actually works on unseen graphs. Overall this paper is well-written, deals with an important practical problem. While it is not immediately clear to the reader the effect of BRKGA on the mapping of the graph to the resource network and why it works so well, the results are convincing (but still there is space for improvement). That is why I rate it as a \\\"weak accept\\\".\", \"Can you explain why the beta distribution choices at each node may have a negative impact on the makespan in certain cases? Have you looked into them?\", \"To what extent are the simulations realistic? Can you please comment more on this aspect?\", \"Have you tried Scotch? https://www.labri.fr/perso/pelegrin/scotch/. Since the software aims to achieve a different objective, it serves as a baseline.\", \"Can you obtain insights with respect how you could cluster the TensorFlow computation graphs?\", \"Can you improve the discussion on BRKGA? Since it is a vital component of the proposed framework, it would be informative to read few more self-contained details on how it works in section 2.\", \"Once you obtain a mapping, can you comment on any insights concerning the structure of the partition and schedule?\"]}" ] }
B1lDoJSYDH
Lagrangian Fluid Simulation with Continuous Convolutions
[ "Benjamin Ummenhofer", "Lukas Prantl", "Nils Thuerey", "Vladlen Koltun" ]
We present an approach to Lagrangian fluid simulation with a new type of convolutional network. Our networks process sets of moving particles, which describe fluids in space and time. Unlike previous approaches, we do not build an explicit graph structure to connect the particles but use spatial convolutions as the main differentiable operation that relates particles to their neighbors. To this end we present a simple, novel, and effective extension of N-D convolutions to the continuous domain. We show that our network architecture can simulate different materials, generalizes to arbitrary collision geometries, and can be used for inverse problems. In addition, we demonstrate that our continuous convolutions outperform prior formulations in terms of accuracy and speed.
[ "particle-based physics", "fluid mechanics", "continuous convolutions", "material estimation" ]
Accept (Poster)
https://openreview.net/pdf?id=B1lDoJSYDH
https://openreview.net/forum?id=B1lDoJSYDH
ICLR.cc/2020/Conference
2020
{ "note_id": [ "7uEbcwt6Ga", "r1gOjm0YiS", "HygzfjhFjB", "B1xTMG3toH", "HJxau-3tir", "Bkl6MZhKsr", "HylajenFiH", "H1li4T0GcS", "SygIGXDRtB", "rkx3iqf0KH" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798735860, 1573671840156, 1573665546134, 1573663253479, 1573663093084, 1573662997210, 1573662884535, 1572166963338, 1571873549899, 1571855012416 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1917/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper1917/Authors" ], [ "ICLR.cc/2020/Conference/Paper1917/Authors" ], [ "ICLR.cc/2020/Conference/Paper1917/Authors" ], [ "ICLR.cc/2020/Conference/Paper1917/Authors" ], [ "ICLR.cc/2020/Conference/Paper1917/Authors" ], [ "ICLR.cc/2020/Conference/Paper1917/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper1917/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper1917/AnonReviewer4" ] ], "structured_content_str": [ "{\"decision\": \"Accept (Poster)\", \"comment\": \"The paper proposes an approach for N-D continuous convolution on unordered particle set and applies it to Lagrangian fluid simulation. All reviewers found the paper to be a novel and useful contribution towards the problem of N-D continuous convolution on unordered particles. I recommend acceptance.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Thank you for address my concerns.\", \"comment\": \"I thank the authors for addressing my concerns, and the revisions strengthen the paper. I would like to raise my rating from 6 (Weak Accept) to 8 (Accept).\"}", "{\"title\": \"Changes in the revision\", \"comment\": [\"Changes in the revision\", \"We thank the reviewers for their help to improve the paper. 
In the following we list the changes to the draft:\", \"We extended the evaluation by adding KPConv convolutions and SplineCNN convolutions.\", \"We add a new metric to measure the distance between the GT and the prediction over the whole sequence.\", \"The ablation study now contains a version of our network without FC layers.\", \"We added the waterfall scene to Figure 5 as an example for the particle representation of the environment.\", \"We added a quantitative generalization experiment (Figure 6).\", \"We added 2 more test scenes to the viscosity estimation experiment to test generalization.\", \"We moved the runtime evaluation for the nearest neighbor search to the appendix and give more information.\", \"We discuss the choice of the window function in the appendix and report numbers for a triangular window.\", \"We added a figure showing the fluid shapes used for data generation in the appendix\", \"We give the definition of the function Lambda in the appendix\"], \"minor_changes\": [\"In equation 8 the window function is now normalized with respect to the radius.\", \"We improved the description of Figure 1\", \"Figure 2 now uses the same symbol for the viscosity as the main text.\", \"We added labels to Figure 3\", \"We updated and fixed missing information for 2 references.\"]}", "{\"title\": \"Response to Reviewer 1\", \"comment\": \"We thank R1 for the assessment and the comments for improving the paper.\", \"q\": \"In (7), you seem to be using convolutions between functions that have not been pre-mirrored, and it would be better to then express (5) and (6) on the same form.\", \"a\": \"In (7) we compute $x_i - x$ to get a relative position, which corresponds to $\\\\tau$. 
We removed \\u201cpre-mirrored\\u201d from the text as we explicitly refer to convolutions in ConvNets.\\n\\n\\n[1] Thomas et al., \\u201cKPConv: Flexible and Deformable Convolution for Point Clouds,\\u201d ICCV, 2019.\\n[2] Liu et al., \\u201cPoint-Voxel CNN for Efficient 3D Deep Learning,\\u201d NeurIPS, 2019.\\n[3] Lei et al., \\u201cOctree guided CNN with spherical kernels for 3D point clouds,\\u201d CVPR, 2019.\\n[4] Xu et al., \\u201cSpiderCNN: Deep Learning on Point Sets with Parameterized Convolutional Filters,\\u201d ECCV, 2018.\\n[5] Wang et al., \\u201cDeep Parametric Continuous Convolutional Neural Networks,\\u201d CVPR, 2018.\\n[6] Su et al., \\u201cSPLATNet: Sparse Lattice Networks for Point Cloud Processing,\\u201d CVPR, 2018.\\n[7] Schenck and Fox, \\u201cSPNets: Differentiable Fluid Dynamics for Deep Neural Networks,\\u201d CoRL, 2018.\\n[8] Li et al., \\u201cPointCNN: Convolution On X-Transformed Points,\\u201d NeurIPS, 2018. \\n[9] Hermosilla et al., \\u201cMonte Carlo Convolution for Learning on Non-uniformly Sampled Point Clouds,\\u201d ACM Trans. Graph., vol. 37, no. 6, 2018.\\n[10] Fey et al., \\u201cSplineCNN: fast geometric deep learning with continuous b-spline kernels,\\u201d CVPR, 2018.\\n[11] Atzmon et al., \\u201cPoint Convolutional Neural Networks by Extension Operators,\\u201d ACM Trans. Graph., vol. 37, no. 4, 2018.\", \"minor_remarks\": \"\"}", "{\"title\": \"Response to Reviewer 4\", \"comment\": \"We thank R4 for the comments and suggestions.\", \"q\": \"\\u201cAdditionally, the paper states that the convolutions of SPNets were \\\"specifically designed to implement the position-based fluids algorithm\\\" but that it was used in the paper with a much larger number of channels. If it was designed only to work for that one algorithm, how were the number of channels increased? That is unclear. 
Also, the average error for SPNets is not shown in Table 1 and it is not stated why.\\u201d\", \"a\": \"We use the convolutions from SPNets with our network architecture to compare the performance to our continuous convolution implementation. We made this more clear in the updated paper. While the SPNets convolutions were designed with the PBF algorithm in mind, the implementation is quite general and allows to change the number of channels. However, we measure very long runtimes using the convolutions in our more general training scenario. We do not state the average error because we estimate a training time of at least 29 days. We state this in the updated paper and added more comparisons with other state-of-the-art convolutions instead.\"}", "{\"title\": \"Response to Reviewer 3 (2/2)\", \"comment\": \"Q: \\u201cIn the experiment section, the authors claimed that SPNets take \\\"more than 29 days\\\" to train. Correct me if I am wrong, but from my understanding, SPNets directly write Position-Based Fluids (PBF) in a differentiable way, where they can extract gradients. Except for the tunable parameters like viscosity, cohesion, etc., I'm not sure if there are any learnable parameters in their model. Could the authors elaborate on what they mean by \\\"the training time\\\" of SPNets?\\u201d\", \"a\": \"We added 2 more sequences with viscosity parameters outside of the training range.\", \"q\": \"\\u201cThe data was generated using viscosity varying between 0.01 and 0.3. How well can the model do extrapolate generalization? It would be great to show some error plots indicating its extrapolate performance.\\u201d\"}", "{\"title\": \"Response to Reviewer 3 (1/2)\", \"comment\": \"We thank R3 for the comments and questions.\\n\\n[Major comments]\", \"q\": \"\\u201cIn figure 3, the model's rollout is a bit slower than the ground truth. 
The authors explained the phenomenon using the \\\"differences in the integration of positions and the much larger timestep.\\\" I do not quite get the point. Could you elaborate more on this? Also, it might be better to include labels for the two columns in figure 3 to make it more clear.\u201d\", \"a\": \"Since DFSPH uses a much smaller time step, it updates the particle velocities and positions more often, resulting in slightly faster falling particles. Additionally, the time integration scheme is different. We use the midpoint method for computing the position, which is not used by DFSPH. Instead DFSPH corrects the density before updating the positions.\nWe added labels to figure 3.\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper proposes a novel technique to perform fluid simulations. Specifically, they promote the idea of using spatial convolutions to model how particles interact with nearby particles. Compared to graph-based models, this approach has several advantages and yields a model that can be conveniently trained end-to-end. The authors also develop a specific type of continuous convolution that yields better and faster inference than the benchmark algorithms.\n\nThe main contribution in this paper is the idea of using spatial convolutions to model particle interactions. Even though the obtained results contain significant errors compared to ground truth, the paper indicates a promising strategy others may leverage to develop even more accurate deep learning based simulators. Considering that they have also plausibly argued that their specific algorithm is already state of the art, I view this as a significant contribution. 
Having said this, the contribution is really to use a well-known technique (spatial convolutions) on a new problem (fluid simulations). My understanding is that ICLR primarily wants to promote general learning techniques and I am not convinced that this paper contains any significant contributions in this field. \\n\\nThe authors also develop a specific network architecture that they compare with other deep learning architectures for continuous convolutions. Unfortunately, the design contains a number of questionable choices and I suspect that the main reason that existing architectures for deep learning using continuous convolutions perform worse is that their hyper-parameters have been fine-tuned for a different task. As an example of a questionable choice, why do you \\u201cexclude the particle at which we evaluate the convolution\\u201d in your convolutions?\", \"minor_remarks\": [\"You include a constant 1 in the input feature vectors. Assuming that the neurons in your network have weights and biases, this constant is completely redundant. Of course, this also means that you can include it without ruining your performance, but why would you?\", \"I found the explanation of Lambda in Figure 1 too short to be understandable.\", \"In (7), you seem to be using convolutions between functions that have not been pre-mirrored, and it would be better to then express (5) and (6) on the same form.\"]}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"[Summary]\\n\\nThis paper proposes to learn fluid dynamics by combining the position-based fluids (PBF) framework and continuous convolution. 
They use dynamic particles to represent the fluids, and static particles to describe the scene boundaries, and employ continuous convolution to learn the interactions between the particles of different kinds. They have demonstrated the effectiveness of the proposed method by comparing it with several state-of-the-art learning-based and physics-based fluid simulators. Their method outperforms the baselines in terms of both accuracy and efficiency. They have also shown that the model can extrapolate to terrains that are more complex than those used in training, and is useful in estimating physical properties like the viscosity of the fluids.\n\n\n[Major comments]\n\nFor now, I slightly lean towards acceptance, as I like the idea of combining PBF and continuous convolution for fluid simulation, and the method seems to have a much better performance than the baselines. The experiments have also convincingly demonstrated the method's generalization ability to terrains of various geometry and fluids of different viscosity. However, I would still like the authors to address my following questions.\n\nMy primary concern about the proposed method is the scope of its applicability. One of the benefits of using learning-based physics engines is that they directly learn from observations while making very few assumptions towards the underlying dynamics, which gives them the potential to handle complex real-world scenarios. The model in this paper, however, heavily relies on the PBF framework that may limit its ability to simulate objects like rigid bodies and other deformable materials. I would be curious to know the authors' views on how to extend their model to environments with not just fluids, but also other objects of various material properties.\n\n\n[More detailed questions]\n\nWill the method run faster than DFSPH, given that the timestep is much larger than the timestep used by DFSPH, 0.02 ms vs. 0.001 ms? 
Will the learning-based physics engine have the potential to outperform the physics-based physics engine in terms of efficiency?\\n\\nFor estimating the viscosity of the fluids, how well does the gradient descent on the learned model perform compared with black-box optimization, e.g., Bayesian Optimization using the ground truth simulator?\\n\\nIn the SPNet paper, they have also tried to solve the inverse problem of estimating the viscosity of the fluids. It would be great to include a comparison to see if the proposed method can outperform SPNet in terms of efficiency and accuracy.\\n\\nEquation 8 smooths out the effect between particles of different distances. How sensitive is the final performance of the model to the specific smoothing formulation? Is it possible to learn a reweighting function instead of hardcoding?\\n\\nIn figure 3, the model's rollout is a bit slower than the ground truth. The authors explained the phenomenon using the \\\"differences in the integration of positions and the much larger timestep.\\\" I do not quite get the point. Could you elaborate more on this? Also, it might be better to include labels for the two columns in figure 3 to make it more clear.\\n\\nIn the experiment section, the authors claimed that SPNets take \\\"more than 29 days\\\" to train. Correct me if I am wrong, but from my understanding, SPNets directly write Position-Based Fluids (PBF) in a differentiable way, where they can extract gradients. Except for the tunable parameters like viscosity, cohesion, etc., I'm not sure if there are any learnable parameters in their model. Could the authors elaborate on what they mean by \\\"the training time\\\" of SPNets?\\n\\nFrom the videos, DPI-Nets does not seem to have a good enough performance in the selected environments. I can see why their model does not perform as well, since they did not use as much of a structure in the model.
But from the videos of DPI-Nets, it seems that they perform reasonably well in scenes like dam breaks or shaking a box of fluids. Would you please provide more details on why they are not as good in the scenes in this paper?\\n\\nThe data was generated using viscosity varying between 0.01 and 0.3. How well can the model perform extrapolative generalization? It would be great to show some error plots indicating its extrapolation performance.\\n\\nWhy are there no average error numbers for SPNets?\"}", "{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #4\", \"review\": \"This paper applies 3D convolutions to the problem of Lagrangian fluid simulation. The primary difficulty in this is that, unlike for Eulerian fluid simulation, which represents the fluid as a grid and adapts nicely to 3D convolutions, in Lagrangian simulations the fluid is represented as an unordered set of particles. It is not straightforward to apply 3D convolutions on such a data structure; however, this paper proposes a method to apply the same regular-grid kernels used in grid-based convolutions to the particle structure. To do this, several points around the kernel are evaluated by first using trilinear interpolation between the particles to get feature values at those points and then convolving those values with the kernel weights. This results in a new particle in the next layer up with those features. In the paper, this method is used to train the weights of the network to reproduce fluid dynamics generated by a simulator.
The results show that the proposed method was able to model fluid dynamics over 2 timesteps more accurately than other methods and can do so quickly.\\n\\nWhile I have some reservations about this paper (detailed below), on the whole I think it is a quality contribution and should be accepted. This paper contributes a novel method for performing 3D convolutions on unordered particle sets, and it shows that the learned fluid dynamics generalize to novel situations. One major hurdle to applying modern convolutional learning techniques to Lagrangian methods is the mismatch between the layout of the data (unordered particles) and the layout of the kernels (regular grid). This paper presents a novel way of bridging that divide, and it shows that the proposed method actually works by applying it to the problem of fluid dynamics and successfully learning it. However, one major concern I had was that it seems all of the training data was generated in box-like environments, which could easily lead to overfitting. This was alleviated by the results showing that although the network was trained only in boxes, it generalized to environments with channels and waterfalls (as seen in the video). This is a powerful result and shows that this method really did learn fluid dynamics and not just a shortcut that only works in boxes.\\n\\nI do think this paper can be improved in a few aspects however. The biggest issue is that the quantitative analysis of the core functionality (reproducing fluid physics) is lacking. The paper only reports results for error after at most 2 timesteps, which is not nearly long enough to determine if the output is accurate. Furthermore, the results are only reported for the box scenes, not the generalization scenes mentioned above. Qualitatively, from the videos, it is clear that the output does at least somewhat model fluid dynamics, but it would be much better to have hard numbers to back that up. 
I suspect the authors discovered that Lagrangian systems are sufficiently chaotic that after only a few timesteps the particle positions have diverged significantly. This is not a bug but a feature of such systems. In Lagrangian fluids, the particles are but an approximation of the fluid, and unlike Eulerian systems*, multiple different sets of particles can approximate the same fluid. This makes particle position only useful as a measure of error if the trained model can perfectly reproduce the fluid dynamics. But of course it can't (trained networks aren't ever perfect in practice), and so small errors quickly compound into large particle position disparities. So even though the trained network models the fluid well overall, the particles end up in completely different locations. Instead a better error metric would be something like measuring the difference between the surface of the fluids, or the velocities or densities at various locations. These are agnostic to the particular particle positions, but still measure how well two different sets of particles represent the same fluid. Using a metric like this, it would be nice to see error graphs over time for both the box and generalization scenes.\\n\\nA couple other smaller points. The chaotic divergence behavior of DPI-Nets seems inconsistent with that paper. Is this possibly a bug in the way it was implemented here? Additionally, the paper states that the convolutions of SPNets were \\\"specifically designed to implement the position-based fluids algorithm\\\" but that it was used in the paper with a much larger number of channels. If it was designed only to work for that one algorithm, how were the number of channels increased? That is unclear. Also, the average error for SPNets is not shown in Table 1 and it is not stated why.\\n\\n*Assuming same grid shape, size, and position.\"}" ] }
rkl8sJBYvH
Harnessing the Power of Infinitely Wide Deep Nets on Small-data Tasks
[ "Sanjeev Arora", "Simon S. Du", "Zhiyuan Li", "Ruslan Salakhutdinov", "Ruosong Wang", "Dingli Yu" ]
Recent research shows that the following two models are equivalent: (a) infinitely wide neural networks (NNs) trained under l2 loss by gradient descent with infinitesimally small learning rate (b) kernel regression with respect to so-called Neural Tangent Kernels (NTKs) (Jacot et al., 2018). An efficient algorithm to compute the NTK, as well as its convolutional counterparts, appears in Arora et al. (2019a), which allowed studying performance of infinitely wide nets on datasets like CIFAR-10. However, super-quadratic running time of kernel methods makes them best suited for small-data tasks. We report results suggesting neural tangent kernels perform strongly on low-data tasks. 1. On a standard testbed of classification/regression tasks from the UCI database, NTK SVM beats the previous gold standard, Random Forests (RF), and also the corresponding finite nets. 2. On CIFAR-10 with 10 – 640 training samples, Convolutional NTK consistently beats ResNet-34 by 1% - 3%. 3. On VOC07 testbed for few-shot image classification tasks on ImageNet with transfer learning (Goyal et al., 2019), replacing the linear SVM currently used with a Convolutional NTK SVM consistently improves performance. 4. Comparing the performance of NTK with the finite-width net it was derived from, NTK behavior starts at lower net widths than suggested by theoretical analysis(Arora et al., 2019a). NTK’s efficacy may trace to lower variance of output.
[ "small data", "neural tangent kernel", "UCI database", "few-shot learning", "kernel SVMs", "deep learning theory", "kernel design" ]
Accept (Spotlight)
https://openreview.net/pdf?id=rkl8sJBYvH
https://openreview.net/forum?id=rkl8sJBYvH
ICLR.cc/2020/Conference
2020
{ "note_id": [ "4uudRwsglG", "HJeJkFCisr", "B1xFj_Rijr", "rJgJBORiiH", "SJg-qDAjjS", "SJx5zKqaKS", "Sklj47KpFS", "r1eG5udtKS" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798735828, 1573804246773, 1573804192887, 1573804087293, 1573803912743, 1571821842104, 1571816243293, 1571551369807 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1915/Authors" ], [ "ICLR.cc/2020/Conference/Paper1915/Authors" ], [ "ICLR.cc/2020/Conference/Paper1915/Authors" ], [ "ICLR.cc/2020/Conference/Paper1915/Authors" ], [ "ICLR.cc/2020/Conference/Paper1915/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper1915/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper1915/AnonReviewer1" ] ], "structured_content_str": [ "{\"decision\": \"Accept (Spotlight)\", \"comment\": \"This paper carries out extensive experiments on Neural Tangent Kernel (NTK) --kernel methods based on infinitely wide neural nets on small-data tasks. I recommend acceptance.\", \"title\": \"Paper Decision\"}", "{\"title\": \"General Response and Revision Summary\", \"comment\": \"We thank all reviewers for the positive reviews.\\nWe have revised our paper to fix typos and add clarifications according to reviewers\\u2019 suggestions.\"}", "{\"title\": \"Response\", \"comment\": \"Thank you for your positive review. We have revised our paper according to your suggestion. Regarding your comment \\u201cNTK tunes one more parameter (L\\u2019) than NNs\\u2026\\u201d: Since training all layers in NN is the standard practice, we did not fix the first $L\\u2019$ layers. Also note that for experiments on UCI, more hyper-parameters do not necessarily give better performance because we used 4-fold cross-validation.\"}", "{\"title\": \"Response\", \"comment\": \"Thank you for your positive review. 
Please find our response to your comments.\\n1.\\tNTK initialization means a neural network with parameterization defined in Equation 2 with all weights being initialized to be i.i.d. $\\mathcal{N}(0, 1)$. We have added a sentence after Equation 2 to clarify this.\\n2.\\t\\u201cIn Figures 1-2, it can be observed \\u2026..\\u201d There is no clear trend on which dataset NTK can be better than other classifiers. We believe that investigating on which dataset NTK gives better performance requires more domain knowledge. Some analyses on pairwise comparisons: NTK vs. RF and NTK vs. Gaussian kernel, are provided in Section 4.2.\\n3.\\t\\u201cIn Tables 2-5, it can be observed that \\u2026\\u2026\\u201d Note that for Tables 2-5, CNTKs are used on top of raw images, so to achieve better performance, one needs to use multi-layer CNTKs to extract higher-level features. On the other hand, CNTKs on VOC07 are used on top of extracted features from ResNet-50, which are already high-level features. Therefore, shallow CNTKs suffice for this case.\\n4.\\tWe have stated the experiment details in the third paragraph in Section B.\"}", "{\"title\": \"Response\", \"comment\": \"Thank you for your positive review. We have revised our paper according to your comments. Please find our response to your questions below.\\n-\\t\\u201cpoint 4 in the abstract\\u201d:\\nFor point 4, we mainly refer to Figure 2(b). We found \\\"There is no dataset on which one classifier is significantly better than the other\\\".\\n-\\t\\u201cBias in NTKs and NNs\\u201d:\\nWe did not add bias in NTKs and NNs.\\n-\\t\\u201cResNet-34 is not properly tuned\\u201d\\nWe agree, but note that there is no good way to tweak large nets on small datasets. Also note that in the small data regime ($n=10$ to $n=320$), CNTK with 5, 8, 11 and 14 layers all beat ResNet.\\n-\\t\\u201cis there a consistent trend one could find regards to $L\\u2019$\\u201d?\\nWe did not find a consistent trend. 
We did not try very deep NTKs ($L \\\\le 5$ and $L\\u2019 \\\\le 4$ in our experiments).\"}", "{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper conducts very interesting and meaningful study of kernels induced by infinitely wide neural networks on small data tasks. They show that on a variety of tasks performance of these kernels are superior to both finite neural networks and Random Forest methods.\\n\\nWhile neural tangent kernel (NTK) [1] is motivated for studying training dynamics of neural networks, it is also important to ask to find utility of these new powerful kernels that captures functional priors of neural networks. This paper conducted important study on small dataset regime and on a wide range of tasks (90 UCI datasets, small subset of CIFAR-10, few shot image classification task on VOC07. \\n\\nAuthors introduce a family of generalized NTK kernels interpolating between NNGP kernels [2] to original NTK[1] by fixing first L\\u2019 layers and allowing to train remaining layers. Treating L\\u2019 as a hyperparameter, the authors try both NNGP/NTK and kernels in between as well. \\n\\nAnother contribution I observe is applying kernel SVM where one utilizes NTK and shows that it can work well. This paper shows that kernels induced by infinitely wide networks could become useful for real world applications where data size is not so large. \\n\\nThere are few small concerns regarding experiments which are discussed in detailed comments. Overall I think the message of the paper is clear and well supported therefore I recommend accepting the paper. \\n \\nDetailed comments\\n\\t\\n1) From reading the paper it was not easy to grasp where point 4 of the abstract was based on. 
\\n2) In the first footnote, small nit is that, in practice one should not invert matrix but just do a linear solve for better numerical stability and efficiency (still O(N^3) but with better constant)\\n3) In section 3, there seems to be no bias. Are NTK and NNs considered in this work contain no bias? Or is bias ignored for ease of presentation? \\n4) Nit p4 first paragraph in section 4 : multiplayer -> multilayer\\n5) Regards to NTK initialization performing better than standard He initialization: It was observed in [3] that for multilayer perceptron both parameterization is on-par but for CNN or WideResNet case standard parameterization performed significantly better.\\n6) Note that similar to analysis in section 5, for CIFAR-10 with fully connected model [1] shows that for all dataset size(100-45k) NNGP performs better than trained neural networks.\\n7) One may worry that ResNet-34 is not properly tuned as most hyperparameters were fixed for large dataset. \\n8) Regards to hyperparameters for NTK, is there a consistent trend one could find regards to L\\u2019? What percentage of tasks that NTK performed well actually have a high L\\u2019?\\n9) To help the readers, I would suggest adding a little more description on statistics used for comparison as well as what VOC07 task entails. \\n\\n[1] Jacot et al., Neural Tangent Kernel: Convergence and Generalization in Neural Networks, NeurIPS 2018\\n[2] Lee et al., Deep Neural Networks as Gaussian Processes, ICLR 2018\\n[3] Park et al., The Effect of Network Width on Stochastic Gradient Descent and Generalization: an Empirical Study, ICML 2019\", \"edit_after_author_response\": \"I have read the response from authors. 
I appreciate all the efforts to improve the paper.\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper evaluates the empirical power of neural tangent kernel (NTK) on small-data tasks. The authors demonstrate the superior performance of NTK for classification/regression tasks on UCI database, small CIFAR-10 dataset and VOC07 testbed.\\n\\nOverall, this paper is well written and organized. The experimental results are also quite interesting. Besides, some questions and comments are as follows:\\n\\nOne of the baseline algorithms in Table 1 is NN with NTK initialization. However, this paper does not give the formal definition of NTK initialization.\\n\\nIn Figures 1-2, it can be observed that NTK cannot universally outperform baselines on all dataset. For some dataset, NTK can be worse than baselines but for some other dataset, NTK can be significantly better than baselines. Therefore, I would like the authors to briefly discuss which kind of data can be more efficiently learned through NTK or other training algorithms.\\n\\nIn Tables 2-5, it can be observed that for CIFAR10 dataset, increasing the number of layers leads to higher test accuracy. But for VOC07, one can observe the opposite thing. Is there any explanation for this phenomenon?\\n\\nThe authors should provide a clear description of the experimental setting. For example, do you use batch normalization/weight decay in ResNets? For training NN, which optimization algorithms do you use? Do you use learning rate decay? 
\\n\\n======================\\nAfter reading authors' response:\\n\\nThanks for your response, I would like to keep my score.\"}", "{\"rating\": \"8: Accept\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #1\", \"review\": \"[Summary]\\nThis paper performs an extensive empirical evaluation of Neural Tangent Kernel (NTK) classifiers---kernel methods that theoretically characterize infinitely wide neural nets---on small-data tasks. Experiments show that NTK classifiers (1) strongly resemble the performance of neural nets on small-data tasks, (2) can beat prior benchmark methods such as Random Forests (RF) on classification tasks in the UCI dataset, and (3) can also outperform standard linear SVM on a few-shot learning task.\\n\\n[Pros]\\nThe question considered in this paper is well motivated, and a very natural extension of Lee et al. (2019) and Arora et al. (2019a). These papers show that NTK performs well on (relatively) large benchmark tasks such as CIFAR-10 but is still a bit inferior to fully trained neural nets. On the other hand, for small-data tasks, the relationship is reversed --- neural nets are slightly inferior to more traditional methods such as random forests (e.g. from Fernandez-Delgado et al. 2014) and Gaussian kernel SVMs. As the NTK gives a limiting characterization for wide neural nets, it is a sensible question to test the performance of NTK on these small datasets, and see if they can improve over neural nets and compare more favorably against the traditional methods.\\n\\nThe experimental results, from my perspective, are reasonably convincing evidence that the resemblance between NTK and NN on small-data tasks is stronger than on larger tasks such as CIFAR-10, which agrees with the NTK theory. 
In addition to the UCI datasets, the paper also tries out NTK in a few-shot learning task and shows that SVM with the convolutional NTK does better than the linear SVM as the few-shot learner. I am less familiar with few-shot learning though, so am not entirely sure about the strength of this part.\\n\\nThe paper is well-written and delivers its messages clearly. The results and discussions are easy to follow.\\n\\n[Cons, and suggestions]\\nThe message that \\u201cNTK beats RF\\u201d seems a bit delicate to me, specifically considering the fact that the average accuracies of (NTK, NN, RF) are all pretty close but the Friedman rank comparison says NTK > RF > NN (somewhat more significantly). This implies that the difference between all these methods has to be small and it\\u2019s only that NTK happens to win on more tasks. In addition, NTK tunes one more parameter (L\\u2019) than NNs, so I guess perhaps NNs can also be tuned to outperform RF in the rank sense if we also tune L\\u2019 (by fixing the bottom L\\u2019 layers to be not trained) in NNs?\\n\\nAlso, it would be better if the authors could provide a bit more background on the metrics used in the UCI experiments -- for example, the Friedman rank is not defined in the paper.\"}" ] }
B1eBoJStwr
Semi-supervised semantic segmentation needs strong, high-dimensional perturbations
[ "Geoff French", "Timo Aila", "Samuli Laine", "Michal Mackiewicz", "Graham Finlayson" ]
Consistency regularization describes a class of approaches that have yielded groundbreaking results in semi-supervised classification problems. Prior work has established the cluster assumption, under which the data distribution consists of uniform class clusters of samples separated by low density regions, as key to its success. We analyze the problem of semantic segmentation and find that the data distribution does not exhibit low density regions separating classes and offer this as an explanation for why semi-supervised segmentation is a challenging problem. We then identify the conditions that allow consistency regularization to work even without such low-density regions. This allows us to generalize the recently proposed CutMix augmentation technique to a powerful masked variant, CowMix, leading to a successful application of consistency regularization in the semi-supervised semantic segmentation setting and reaching state-of-the-art results in several standard datasets.
[ "computer vision", "semantic segmentation", "semi-supervised", "consistency regularisation" ]
Reject
https://openreview.net/pdf?id=B1eBoJStwr
https://openreview.net/forum?id=B1eBoJStwr
ICLR.cc/2020/Conference
2020
{ "note_id": [ "ONZVM38Bum", "HyxVtjH3sB", "HyxCccH3iB", "rJg6t8rPsr", "H1lvGISPoB", "HkxEhHrDoS", "BklLvzGCFS", "H1eLF2upFH", "SJlKzMliKB" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798735798, 1573833596376, 1573833365814, 1573504645510, 1573504526712, 1573504428364, 1571852893824, 1571814526505, 1571648017283 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1914/Authors" ], [ "ICLR.cc/2020/Conference/Paper1914/Authors" ], [ "ICLR.cc/2020/Conference/Paper1914/Authors" ], [ "ICLR.cc/2020/Conference/Paper1914/Authors" ], [ "ICLR.cc/2020/Conference/Paper1914/Authors" ], [ "ICLR.cc/2020/Conference/Paper1914/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper1914/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper1914/AnonReviewer1" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"This paper proposes a method for semi-supervised semantic segmentation through consistency (with respect to various perturbations) regularization. While the reviewers believe that this paper contains interesting ideas and that it has been substantially improved from its original form, it is not yet ready for acceptance to ICLR-2020. With a little bit of polish, this paper is likely to be accepted at another venue.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Response to Official Blind Review #2 (2)\", \"comment\": \"We have added a little more to Section 3.2 paragraph 4, in that we have stated that the perturbations should be high dimensional in order to adequately constrain a decision boundary in the high-dimensional space of natural images.\"}", "{\"title\": \"Response to Official Blind Review #3 (2)\", \"comment\": \"We have added results for ICT, CutOut, CutMix and CowOut using DeepLab2 for Cityscapes with 372 supervises samples. 
We will run experiments to produce results for other numbers of supervised samples, the U-Net architecture and the Pascal dataset in due course.\\n\\nThe new results show that CutOut harms performance, CutMix contributes an improvement (but only just) while CowOut makes a fair improvement, but trailing that of CowMix, strengthening the position of CowOut and CowMix relative to the others.\"}", "{\"title\": \"Response to Official Blind Review #1\", \"comment\": \"Thank you for your review. We appreciate your effort and we thank you for highlighting areas in need of clarification.\\nIf there are any others that come to mind, please feel free to let us know.\\n\\nWe now state explicitly that we do use supervised loss in section 4.1 paragraph 2. In order to stay within paper\\nlength limitations, we removed some details from this paragraph as they are also present in Appendix D.3. We\\nwill look further at Figure 3 before the end of the response/discussion period.\\n\\nWe used different split ratios from those of Hung et al. as the practical benefit of semi-supervised learning is maximised by\\nreducing the required number of labelled samples -- and therefore the effort required to label them -- by\\nas much as possible. We therefore tested our approach using a significantly smaller number of labelled samples\\nin order to illustrate that our approach gives strong performance in these challenging but practically useful\\nconditions. Labelling 1,323 images (12.5% of augmented Pascal) or 5,291 images (50%) requires a considerable\\namount of manual labour.\\n\\nWe have attempted to answer your query concerning applying geometric transformations in reverse in Appendix D.1.1.\\nIn short, classification is translation invariant while semantic segmentation is translation *variant*. As a consequence\\nin semantic segmentation scenarios any translation must in effect be reversed elsewhere in the pipeline to prevent\\nthe network from learning from erroneous training data. 
That said, using translation can cause a part of the image\\nto take a slightly different path through the convolutional layers of the network, so it can provide a small\\nimprovement. Our standard augmentation based unsupervised regularizer can and does utilise this, although it does not\\nachieve gains in semi-supervised segmentation.\", \"figure_2\": \"We have replaced the word 'gap' with the term 'low density region', which is consistent with the rest of the\\npaper. We constructed two similar artificial scenarios for Fig 2 (a) and (b); one with a low density region\\nseparating the two regions of unsupervised samples and one without.\\n\\nWe have expanded the text in Appendix A to explain how Figure 2(c) was made. We hope this clears things up.\\nIf not, please feel free to let us know.\"}", "{\"title\": \"Response to Official Blind Review #2\", \"comment\": \"Thank you for your review and thank you for identifying areas that are of concern.\\n\\nI will attempt to answer your query concerning how our analysis of a 2D example carries over to a high dimensional\\nproblem.\\n\\nTo recap, Figure 2(d) shows that consistency regularization can succeed without requiring low density regions in the input\\ndistribution by constraining the perturbations that drive consistency regularisation to be parallel to the\\nintended decision boundary. In such a simple 2D example, perturbing along a line parallel to the decision boundary\\n(or rather parallel to a line tangent to the decision boundary at the closest point on said boundary)\\nis sufficient. In higher dimensions, perturbing a sample in one direction (making it trace out a line) is insufficient as\\nthe decision boundary is free to orient itself almost arbitrarily while still being perpendicular to the line of perturbation.\\nIn order to properly constrain the orientation of the decision boundary, the perturbations must operate in as many\\ndimensions as possible. 
The cutting and mixing regularizers (CutOut, CutMix, CowOut and CowMix) discussed in this paper\\nare high dimensional as an axis of perturbation is supplied by each pixel in the cutting/mixing mask.\\n\\nFurthermore, masking out part of an object (as in CutOut or CowOut) does not change the ground truth class of the pixels\\nof the object that remain, hence these perturbations do not cross the class boundary. Mixing the images does not change\\nthe class for a similar reason.\\n\\nIt is our intention to look carefully at the wording of this part of the paper in the next few days.\"}", "{\"title\": \"Response to Official Blind Review #3\", \"comment\": \"Thank you for your very helpful and detailed review. You have identified several areas in which we can improve and clarify our work.\\n\\nWe have updated the caption in Figure 1 and the text in Section 3.1 to clarify that we are comparing the contents\\nof overlapping patches that are centred on neighbouring pixels; a patch A is compared with a neighbouring patch B\\nthat is shifted e.g. one pixel to the right of patch A. Figure 1 (b) and (c) are generated by computing the distance between patches that are centered on each pixel in the image.\\n\\nWe have expanded Section 3.1 to confirm that we compare the raw pixel content of image patches. While the use\\nof raw pixel space -- as opposed to e.g. a feature space from a pre-trained network -- may seem unusual given the\\nhighly non-linear nature of neural networks, prior consistency regularisation based semi-supervised learning approaches\\n(e.g. Laine et al.) apply sample perturbation in the input space. We specifically reference the virtual\\nadversarial approach of Miyato et al. 
as they use adversarial techniques to generate an adversarial example\\n$\\hat{x}$ from $x$ that maximizes the distance between predictions $d(\\hat{y}, y)$ (where $y = f_\\theta(x)$).\\nOnce again this perturbation is performed in raw pixel / input space, illustrating the significance of the input space\\ndata distribution.\\n\\nAs far as we know, the data distribution of semantic segmentation problems has not been studied in prior work.\\nTo recap, we observed that computing the distance between overlapping neighbouring patches is equivalent to applying\\na uniform filter to the squared gradient image, thus suppressing the fine details that would be required for\\nlow density regions to manifest along object or texture boundaries. We believe that this should apply to natural\\nimages in general. We have however analysed the patch distribution in the Cityscapes dataset to confirm this.\\nDue to space limitations we have added this as Appendix B.\\n\\nWe are currently in the process of conducting experiments on the Cityscapes dataset using CutOut, CutMix\\nand CowOut using the DeepLab2 network. We hope to get results for ICT as well. We will revise the paper\\nlater during the rebuttal period to include them when the experiments have finished running. We will only be able\\nto do this for 372 supervised samples prior to the end of the rebuttal period due to lack of available\\ncompute at this time and the short amount of time available. 
We will endeavour to produce a complete\\nset of results should our work be accepted.\\n\\nApplying CowOut and CowMix to classification problems is a topic that we intend to explore in further work.\\nWe believe that the challenging nature of semi-supervised semantic segmentation that we have described\\nin this work has resulted in a regularizer that could have interesting properties when used in classification\\nand are eager to evaluate it.\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"# Summary\\n\\nThis paper proposes a method for semi-supervised semantic segmentation. \\nThe authors tackle this problem through consistency regularization, a successful technique in image classification, that encourages the network to give consistent predictions on unlabeled samples which are perturbed in multiple ways. The authors argue that the cluster assumption (to which effectiveness of consistency regularization has been partially attributed) does not hold in semantic segmentation. Thus, in order to enable class boundaries to become low-density regions and then better guide contrastive regularization, the authors argue that a stronger perturbation must be inserted. To this effect they first propose looking at CutOut and CutMix types of methods. They improved upon them by putting forward a variant of CutMix, coined CowMix, with more degrees of freedom and using flexible masks instead of rectangular ones. CowMix is evaluated on the Cityscapes and PascalVOC 2012 datasets in the semi-supervised regime and showing encouraging results.\\n\\n\\n# Rating\\nI find the paper and the advanced ideas of interest for the community and I consider they are novel. 
I'm currently on the fence between Weak Accept and Weak Reject, mostly due to incomplete evaluations and insufficient support for claims made in the introduction regarding the infeasibility of contrastive regularization methods for semantic segmentation. I would be happy to upgrade my rating if authors addressed these concerns.\\n\\n\\n# Strong points\\n- The paper is well written and mostly clear with a good coverage and positioning w.r.t. related work. The authors illustrate well the reasoning and the choices they have made. The authors provide plenty of ablation studies (e.g., per class statistics) and implementation details, improving significantly the reproducibility of the contribution.\\n- The flexible masking technique that is advanced here is novel and experimentally seems effective.\\n- This work is among the few that address semi-supervised semantic segmentation in a non-adversarial manner, so I would give it some novelty credit.\\n- I appreciate the evaluation protocol of averaging across multiple runs.\\n\\n# Weak points\\n\\n## Unclear aspects\\n- The authors argue that consistency regularization has had little success so far in semantic segmentation problems since low density regions in input data do not align well with class boundaries. It would be useful to provide a reference to this claim or at least validate it experimentally on a large dataset.\\n\\n- In Figure 1, it is not clear on which features the distances between patches were computed. Is it on raw pixels or intermediate feature maps from a CNN? \\nIf the distances are made over raw pixels, I find it difficult to make the connection between distances in the pixel space and distances in the class space.\\nAre the neighbor patches overlapping with the central/query patch?\\n\\n## Experiments\\n- The authors compare against other methods on the CamVid dataset. CamVid is a small and relatively limited dataset (~367 images for training from the streets of Cambridge). 
I'm worried that this dataset might not be enough to conclude and emphasize the benefits of this method over other semi-supervised techniques. For instance CowOut does not seem to be above CutOut, while CutMix has convergence problems and low scores. \\nThe other experiments on Cityscapes and Pascal VOC are certainly interesting, but the method is compared only against Hung et al., which is a different family of methods, and the subset baseline (which is useful but not enough). I think this work would benefit from an additional baseline in the style of contrastive regularization methods, e.g. ICT, and eventually CutOut, to support the initial arguments regarding the limitations of these methods in semantic segmentation and respectively the effectiveness of the flexible masks over the rectangular ones in this setup.\\n\\n\\n# Suggestions for improving the paper:\\n1) It would be useful to include other semi-supervised baselines, e.g. ICT, and the baseline perturbation CutMix on larger experiments, in order to better emphasize the contributions of this work.\\n\\n2) Did the authors try the flexible masking on image classification? How is it expected to perform over ICT, MixUp or MixMatch?\"}", "{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper first provided an analysis for the problem of semantic segmentation. Through a few simple examples, the authors suggested that the cluster assumption doesn\u2019t hold for semantic segmentation. 
The paper also illustrated how to perturb the training examples so that consistency regularization still works for semantic segmentation.\\nThe paper also introduces a perturbation method that can achieve high dimensional perturbation, which achieves solid experimental results.\\n\\nThe analysis part seems interesting and innovative to me. But it is very qualitative and I'm not fully convinced that the analysis on the 2D example can actually carry over to high dimensional spaces for images. I also don't quite see the connection between the toy example and the proposed perturbation method. For example, why does the proposed perturbation method have the property of \\\"the probability of a perturbation crossing the true class boundary must be very small compared to the amount of exploration in other dimensions\\\"?\\n\\nThe proposed algorithm is an extension of the existing CutOut and CutMix. The way to generate the new mask is a very smart design to me. This should be the most important contribution of the paper.\\n\\nThe writing of the paper is very clear and easy to follow. The experimental results look very convincing overall and the proposed algorithm does show very promising results. \\n\\nTo sum up, the paper is an ok paper from the practical perspective, but the analysis in the paper wasn't strong enough to me.\"}", "{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #1\", \"review\": \"This work analyzes the consistency regularization in semi-supervised semantic segmentation. 
Based on the results on a toy dataset, this work proposes a novel regularization for semi-supervised semantic segmentation, which is named CowMix.\", \"pros\": \"-- The proposed CowMix is easy to understand and implement.\\n-- The experimental results seem to benefit from this proposed CowMix at a first glance.\", \"cons\": \"The writing is not clear. Sometimes I have to make a ``guess\\\" about the technical details. For example:\\n-- Other than L_{cons}, is there any other loss term utilized in this work? Based on Figure 3, it seems only L_{cons} is utilized. If so, is it a waste not to use the labeled training data (although very few) to calculate a cross-entropy loss?\\n\\n-- It seems the experimental setting in this submission follows the settings in Hung, 2018. However, for the experiment on VOC 2012 validation set, Hung tested their method on 1/8 1/4 1/2 of labeled data (Table 1). While in this submission, Table 3 shows the results on labeled data of 100, 200, 400, 800, 2646(25%). The split ratios seem different from Hung's work, which confuses me.\\n\\n-- \\\"Note that in context of semantic segmentation, all geometric transformations need to be applied in reverse for the result image before computing the loss (Ji et al., 2018). As such, translation turns into a no-op, unlike in classification tasks where it remains a useful perturbation.\\\" \\n Is there any experimental result to support this claim?\\n\\n-- It is a little hard for me to fully understand Figure 2. For example, how to get 2. (c)? What is the meaning of the word \\\"gap\\\" here?\"}" ] }
B1gHokBKwS
Learning to Guide Random Search
[ "Ozan Sener", "Vladlen Koltun" ]
We are interested in derivative-free optimization of high-dimensional functions. The sample complexity of existing methods is high and depends on problem dimensionality, unlike the dimensionality-independent rates of first-order methods. The recent success of deep learning suggests that many datasets lie on low-dimensional manifolds that can be represented by deep nonlinear models. We therefore consider derivative-free optimization of a high-dimensional function that lies on a latent low-dimensional manifold. We develop an online learning approach that learns this manifold while performing the optimization. In other words, we jointly learn the manifold and optimize the function. Our analysis suggests that the presented method significantly reduces sample complexity. We empirically evaluate the method on continuous optimization benchmarks and high-dimensional continuous control problems. Our method achieves significantly lower sample complexity than Augmented Random Search, Bayesian optimization, covariance matrix adaptation (CMA-ES), and other derivative-free optimization algorithms.
[ "Random search", "Derivative-free optimization", "Learning continuous control" ]
Accept (Poster)
https://openreview.net/pdf?id=B1gHokBKwS
https://openreview.net/forum?id=B1gHokBKwS
ICLR.cc/2020/Conference
2020
{ "note_id": [ "mNbW2d3Ki3", "r1lilgjFjB", "B1lfR0FYjS", "rkeVidtYjH", "H1ltjUFtoH", "Hyx0NWWCcB", "H1gLHy9L9B", "rye3yb6ptr", "B1lK0Nv6KS" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review", "official_review" ], "note_created": [ 1576798735767, 1573658610723, 1573654217884, 1573652635876, 1573652129120, 1572897077707, 1572409150494, 1571832035939, 1571808465308 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1913/Authors" ], [ "ICLR.cc/2020/Conference/Paper1913/Authors" ], [ "ICLR.cc/2020/Conference/Paper1913/Authors" ], [ "ICLR.cc/2020/Conference/Paper1913/Authors" ], [ "ICLR.cc/2020/Conference/Paper1913/AnonReviewer4" ], [ "ICLR.cc/2020/Conference/Paper1913/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper1913/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper1913/AnonReviewer2" ] ], "structured_content_str": [ "{\"decision\": \"Accept (Poster)\", \"comment\": \"This paper develops a methodology to perform global derivative-free optimization of high dimensional functions through random search on a lower dimensional manifold that is carefully learned with a neural network. In thorough experiments on reinforcement learning tasks and a real world airfoil optimization task, the authors demonstrate the effectiveness of their method compared to strong baselines. The reviewers unanimously agreed that the paper was above the bar for acceptance and thus the recommendation is to accept. An interesting direction for future work might be to combine this methodology with REMBO. REMBO seems competitive in the experiments (but maybe doesn't work as well early on since the model needs to learn the manifold). 
Learning both the low dimensional manifold to do the optimization over and then performing a guided search through Bayesian optimization instead of a random strategy might get the best of both worlds?\", \"title\": \"Paper Decision\"}", "{\"title\": \"Response to Review #2\", \"comment\": \"We thank the reviewer for the time and effort, as well as the encouraging comments. We address the concerns as follows.\", \"q\": \"Do we deal with smaller search spaces in every problem? Any other way of searching the parameter space to further improve the efficiency?:\", \"a\": \"We already have ideas on how to incorporate ideas from Bayesian optimization and/or Hyperband into our method. They are not straightforward and we consider them for future work.\", \"minor_questions\": \"\", \"remark\": \"The i in Proposition 1&2 is a typo, it is supposed to be t. We fixed this and all other typos pointed out by the reviewer.\"}", "{\"title\": \"Response to Review#3\", \"comment\": \"We thank the reviewer for the time and effort, as well as the encouraging comments. We address the concerns as follows:\", \"q\": \"\\u201c...The number of episodes reduces by roughly 50% for all tasks but this keeps the ratio between the different tasks identical. I would have assumed that the ratios would increase in favor of the larger problems like the Humanoid task\\u201d\", \"a\": \"We thank the reviewer for this interesting analysis and the comment. The ratio of improvement is between 1.7 and 3.7 times, and the trend seems not exactly to follow dimensionality. It is important to note that both the problem dimension ($d$) and manifold dimension ($n$) are changing between each experiment. Hence, we believe it is not easy to make any conclusion from MuJoCo experiments. To understand this phenomenon in a more controlled environment, we designed synthetic problems with controllable manifold and problem dimensions and compared our method with the baseline random search. 
We include this study in Appendix A, and the effect of manifold dimensionality is very clear.\"}", "{\"title\": \"Response to Review#1\", \"comment\": \"We thank the reviewer for the time and effort. We also appreciate the encouraging comments, and address the concerns as follows:\", \"q\": \"\\u201cThe \\\"unbiasedness\\\" of the gradient should be more clear. It is NOT unbiased gradient w.r.t. the original function, but the smoothed version.\\u201d\", \"a\": \"We went over the manuscript and carefully clarified/re-worded every time we used the word \\\"unbiased.\\\"\"}", "{\"title\": \"Response to Review#4\", \"comment\": \"We thank the reviewers for their time and effort spent providing feedback. We address the concerns as follows:\", \"q\": \"Minor issues:\", \"a\": \"Thanks for pointing them out. We fixed them in the updated version.\", \"q_2\": \"\\u201cfor a thorough examination, reporting the performance over wall-clock time is recommended and required, ideally in both serial and parallel settings\\u201d\\n\\nWe also added a study on the wall-clock computation times in Fig 2 (see Section 5.1 for discussion). We only add parallel computation times as the serial search is not feasible (would require months of compute) for many of the problems we are interested in.\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #4\", \"review\": \"This paper addresses the problem of optimizing high dimensional functions lying in low dimensional manifolds in a derivative-free setting. The authors develop an online learning framework which jointly learns the nonlinear manifold and solves the optimization. 
Moreover, the authors present a bound on the convergence rate of their algorithm which improves the sample complexity upon vanilla random search. The paper is overall well written and the core idea seems interesting. However, the reviewer has a few concerns which need to be addressed.\\n \\n1) Methodology: This work depends on deep networks to learn the nonlinear manifolds which is justifiable by the power of deep nets. However, several issues may arise.\\n\\n1.1) Globally optimizing the loss function of a deep network is no easy task and according to the authors, their theoretical results hold only if equation (6)--which includes the loss function of a deep net-- is globally optimized. \\n\\n1.2) Even if one could globally minimize the loss function up to a tolerance, this will require a large number of epochs resulting in a high overhead cost for each update of the algorithm. This cost should be considered during the evaluation of the performance of the algorithm. \\n\\n1.3) Finally, although the authors mention that: \\\"Experimental results suggest that neural networks can easily fit any training data\\\", the success of neural networks highly depends on their architecture and carefully tuning their several hyperparameters including the number of hidden layers, the number of nodes in each such layer, the choice of activation function, the choice of optimization method, learning rate, momentum, dropout rate, data-augmentation parameters, etc. One piece of evidence for the necessity of carefully tuning the neural networks lies in appendix B where the authors mention their specific choice of hyperparameters for each experiment as well as the cross validation range they have used. 
Again, the overhead cost of finding a good deep network through cross-validation or any other method of choice (such as Bayesian optimization or Hyperband) should be considered towards the total cost of the algorithm.\\n\\n* Note that complex nonlinear manifolds might be better captured by complex yet flexible architectures as the authors also state that: \\\"If the function of interest is known to be translation invariant, convolutional networks could be deployed to represent the underlying manifold structure\\\". Hence, a simple fully connected network with fixed hyperparameters is suboptimal in capturing the different manifolds over various problems. This highlights the importance of exploring the space of hyperparameters.\\n\\n2) Experiments: The results are reported solely over the number of episodes (function evaluations) while the cost of each episode might be significantly different among different methods. Thus, for a thorough examination, reporting the performance over wall-clock time is recommended and required, ideally in both serial and parallel settings. It does not matter whether the time is spent for a function evaluation or for reasoning about the manifold through training the deep network, it should be taken into account.\", \"minor_issues\": \"1. On page 2, there is a typographical error in the footnote in defining the L-Lipschitz concept (replace \\\\mu with L).\\n\\n2. On page 3, section 3.1, at the end of the second line, g should be a function of both \\\\mathbf{r} and psi.\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"In this paper, the authors first improve the gradient estimator in (Flaxman et al., 2004) for zeroth-order optimization by exploiting low-rank structure. 
Then, the authors exploit machine learning to automatically discover the lower dimensional space in which the optimization is actually conducted. The authors justified the proposed algorithm both theoretically and empirically. The empirical performance of the proposed estimator outperforms the current derivative-free optimization algorithms on MuJoCo for policy optimization. \\n\\nThe paper is well-motivated and well-organized. I really like this paper, which provides a practical algorithm with theoretical guarantees (although under some mild conditions). The empirical comparison also looks promising, for both RL problems and zeroth-order optimization benchmarks. \\n\\nI have roughly checked the proofs. The main body of the proof looks reasonable to me. However, I have some questions about one detail: In the proof of lemma 1, how the fourth equation comes from the third equation is not clear. The manifold Stokes' theorem alone might not be enough since there is Us inside of f while U^*s is outside of f. I think there should be one more bias term. \\n\\n\\nFor the empirical experiment, it is a pity that the algorithm is not compared with Bayesian optimization, which is also an important baseline. I am expecting to see the performance comparison between these two kinds of algorithms.\", \"minor\": \"The \\\"unbiasedness\\\" of the gradient should be more clear. It is NOT an unbiased gradient w.r.t. the original function, but the smoothed version. \\n\\n=====================================================================\\n\\nThanks for the reply. The comparison between the proposed algorithm and BO looks promising. \\n\\nI will keep my score. \\n\\nI am still confused about the proof of lemma 1. 
Following the notations in the paper, I was wondering if the unbiased gradient should be \\n\\n$E_{S}[f(x + \\\\delta U^*s)U^*s]$\\n\\nThen, the lemma should characterize the difference between \\n\\n$E_{S}[f(x + \\\\delta Us)Us]$ and $E_{S}[f(x + \\\\delta U^*s)U^*s]$.\\n\\nHowever, current lemma 1 is not bounding this error.\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The computation times for random search methods depend largely on the total dimension of the problem. The larger the problem, the longer it takes to perform a single iteration. I believe the main reason why many people use deep reinforcement learning to solve their problems is due to its dimension-independence. I am not aware of a paper that tries to minimize the sample complexity. Thus, I think the idea in this paper is novel and may have influence on the literature (maybe an encouragement for a shift from deep reinforcement learning to derivative-free optimization methods).\\n\\nIn terms of presented results I think that there is not much that they could do wrong. They show in Figure 1 that the reward they achieved with their method is only outperformed by Augmented Random Search (ARS) on the Ant task. On all other tasks, their method at least performs on par with ARS which is a good result.\\n\\nIn Table 1 they show the number of episodes that are needed to achieve the reward threshold. Their method required fewer episodes than all other methods, but I think this is not the only criterion they should have looked at. So, it might be the case that their iterations take longer to compute than the iterations of ARS, thereby making it slower. 
\\n\\nThe authors have shown that their method has a lower sample complexity, which is the goal of their research (\u201cOur major objective is to improve the sample efficiency of random search.\u201d). However, I am not sure whether this means that it also has a lower computational complexity. They address this issue briefly by stating that \u201cOur method increases the amount of computation since we need to learn a model while performing the optimization. However, in DFO, the major computational bottleneck is typically the function evaluation. When efficiently implemented on a GPU, total time spent on learning the manifold is negligible in comparison to function evaluations.\u201d This would mean that their iterations are performed in less computation time than the ARS, but I would have personally liked to see a number attached to this. \\nIf we thus assume that this is the case, then their results are sound. However, I do not see this reduced complexity reflected in the results. If I look at the ratios between the number of episodes it takes to solve the tasks, they seem to be similar to the ones from the ARS. The number of episodes reduces by roughly 50% for all tasks but this keeps the ratio between the different tasks identical. I would have assumed that the ratios would increase in favor of the larger problems like the Humanoid task. In other words, I still see the influence of the larger dimension in the results. Maybe I am too critical, but to me if they would have just found a faster method without the reduced sample complexity, they would have achieved similar results. \\n\\nOf course, this problem would not be present if the computation time increases with the number of iterations. In that case, the computation time would not reduce by a \u201cfixed\u201d ratio and would therefore decrease relatively much on the tasks with a higher dimension. 
But that would require an exact comparison between the computation times for all tasks for both their method and ARS, which I do not see in their results. If all these things are common knowledge, then their results are sound and they have found a large improvement to the already well-performing ARS.\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"Contributions:\\n\\t-Authors have proposed a methodology to optimise high dimensional functions in a derivative-free setup by reducing the sample complexity by simultaneously learning and optimising the low dimensional manifolds for the given high dimensional problem. \\n\\nAlthough performing dimensionality reduction to learn the low dimensional manifolds is popular in the research community, the extensions made and the approach the authors have considered seem to be novel.\", \"comments\": [\"Authors have talked about the utilization of domain knowledge on the geometry of the problem. How feasible is it to expect the availability of the domain knowledge? Authors have not discussed the downsides of the proposed method if the domain knowledge is not available, and a possible strategy to overcome the same.\", \"Authors have said that they are specifically interested in random search methods. Is there any motivating reason to stick to the random search methods? Why not consider other sample efficient search methods?\", \"\u201c\u2026\u2026.random search scale linearly with the dimensions\u201d, why one should not consider other sample efficient methods that grow sub-linearly as against random search?\", \"Srinivas, N., Krause, A., Kakade, S. M., and Seeger, M. Gaussian process optimization in the bandit setting: No regret and experimental design. 
International Conference on Machine Learning, 2010\", \"Please derive Lemma 1 in the appendix for the sake of completeness.\", \"I am missing a discussion about manifold parameters like \u201c\u03bb\u201d in the important equations.\", \"Authors have made a strong claim that neural networks can easily fit any training data, but it may not be true for many datasets.\", \"Authors have claimed that they have fast and no-regret learning by selecting mixing weight \u03b2=1/d. Authors might want to discuss more on this as this is an important metric.\", \"\u201c \u2026. total time spent on learning the manifold is negligible\u2026. \u201d \u2013 any supporting results for this claim.\", \"\u201c\u2026\u2026.communication cost from d+2k to d+2k+kd\u2026 \u201d \u2013 curious to know if there is any metric like wall-clock time to talk about the optimisation time.\", \"Authors have restricted the comparisons to only three important methods, but it is always good to compare with other baselines in the same line. Authors should consider Bayesian optimisation as it is a good candidate for the performance comparison, even though the researchers are interested only in random search methods (just like CMA\u2013ES).\", \"Kirschner, Johannes, Mojm\u00edr Mutn\u00fd, Nicole Hiller, Rasmus Ischebeck, and Andreas Krause. \\\"Adaptive and Safe Bayesian Optimization in High Dimensions via One-Dimensional Subspaces.\\\" arXiv preprint arXiv:1902.03229 (2019).\", \"It is seen from the results that the proposed method is not performing better for low dimensional problems like the \u201cSwimmer\u201d function. But according to the initial claim, the method was supposed to work better in low dimensional problems. 
Is it because of the fact that the problem space is not drawn from high dimensional data distributions?\", \"\\u201c\\u2026..improvement is significant for high dimensional problems\\u201d \\u2013 It will be better if the authors compare their proposed method with some more derivative-free optimisers that are proven to be good in high dimensions (like high dimension Bayesian optimisation).\", \"\\u201cThe no-learning baseline outperforms random search \\u2026\\u2026\\u2026.\\u201d \\u2013 this statement is not very clear, does it mean like the proposed method works only when the problem is reduced from higher dimensions to lower dimensions and not on the lower dimensional problem itself?\", \"\\u201cPerformance profiles represent how frequently a method is within the distance T of optimality\\u201d \\u2013 Any thumb rule considered for the choice of T?. Can we think of any relation with standard metrics like simple regret or cumulative regret that are used to measure the optimisation performance?\", \"\\u201cAlthough BO methods typically do not scale\\u2026\\u2026 \\u201d \\u2013 Authors have made a strong assumption here. In the literature, we see active research in the context of high dimensional optimisation.\", \"Rana, Santu, Cheng Li, Sunil Gupta, Vu Nguyen, and Svetha Venkatesh. \\\"High dimensional Bayesian optimization with elastic gaussian process.\\\" In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 2883-2891. JMLR. org, 2017.\"], \"minor_issues\": \"\\u201cImprove the sample complexity\\u201d may not convey the meaning very clearly to the readers, something like \\u201cImprove the sample efficiency\\u201d or \\u201cReduce the sample complexity\\u201d would add more clarity.\\n\\tInconsistency in the terms used in Proposition 1 and Proposition 2. What does \\u201cI\\u201d signify in the formula? 
Was that supposed to be \\u201ct\\u201d?\\n\\tEven though the constants are mentioned in the appendix, it is always better to mention the constants used in the algorithm like \\u201c\\u03b1\\u201c as step size for quick understanding.\\n\\t\\u201cFollow The Regularized Leader (FTRL)\\u201d is more appropriate than \\u201cfollow the regularized leader (FTRL)\\u201d\\n\\t\\u201cnoise table in pre-processing\\u201d \\u2013 Should it mean something relevant to the paper? \\n\\t\\u201cWe use widely used\\u2026..\\u201d \\u2013 may be consider rephrasing the sentence here\\n\\t\\u201ctreshold\\u201d \\u2013 Typo in Table 1\\n\\tY \\u2013 Axis in Figure 2 is missing\", \"appendix_b\": \"\\u201cWe also perform grid search \\u2026. \\u201c would look better\\n\\tMuJoCo Experiments \\u2013 is the parameter space continuous and what is the search space considered for n, \\u03b1 and \\u03b4. Do we deal with smaller search spaces in every problem? Any other way of searching the parameter space to further improve the efficiency?\"}" ] }
SJlEs1HKDr
Attentive Sequential Neural Processes
[ "Jaesik Yoon", "Gautam Singh", "Sungjin Ahn" ]
Sequential Neural Processes (SNP) is a new class of models that can meta-learn a temporal stochastic process of stochastic processes by modeling temporal transition between Neural Processes. As Neural Processes (NP) suffers from underfitting, SNP is also prone to the same problem, even more severely due to its temporal context compression. Applying attention which resolves the problem of NP, however, is a challenge in SNP, because it cannot store the past contexts over which it is supposed to apply attention. In this paper, we propose the Attentive Sequential Neural Processes (ASNP) that resolve the underfitting in SNP by introducing a novel imaginary context as a latent variable and by applying attention over the imaginary context. We evaluate our model on 1D Gaussian Process regression and 2D moving MNIST/CelebA regression. We apply ASNP to implement Attentive Temporal GQN and evaluate on the moving-CelebA task.
[ "meta-learning", "neural processes", "attention", "sequential modeling" ]
Reject
https://openreview.net/pdf?id=SJlEs1HKDr
https://openreview.net/forum?id=SJlEs1HKDr
ICLR.cc/2020/Conference
2020
{ "note_id": [ "hA7jyFYSW", "rylOAyonsB", "BkeKZHc2oS", "BJlWHUD3iH", "r1emYWw3sS", "BJl48Ww2ir", "Sylh5bkbir", "ryeKOFZW5S", "rkgjX7waFr", "S1lXjVv2KH" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798735736, 1573855184062, 1573852416571, 1573840440784, 1573839227477, 1573839180019, 1573085587679, 1572047216955, 1571808035273, 1571742875350 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1912/Authors" ], [ "ICLR.cc/2020/Conference/Paper1912/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper1912/Authors" ], [ "ICLR.cc/2020/Conference/Paper1912/Authors" ], [ "ICLR.cc/2020/Conference/Paper1912/Authors" ], [ "ICLR.cc/2020/Conference/Paper1912/Authors" ], [ "ICLR.cc/2020/Conference/Paper1912/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper1912/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper1912/AnonReviewer2" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"This manuscript outlines a method to improve the described under-fitting issues of sequential neural processes. The primary contribution is an attention mechanism depending on a context generated through an RNN network. Empirical evaluation indicates empirical results on some benchmark tasks.\\n\\nIn reviews and discussion, the reviewers and AC agreed that the results look promising, albeit on somewhat simplified tasks. It was also brought up in reviews and discussions that the technical contributions seem to be incremental. This combined with limited empirical evaluation suggests that this work might be preliminary for conference publication. 
Overall, the manuscript in its current state is borderline and would be significantly improved either by additional conceptual contributions, or by a more thorough empirical evaluation.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Response A1.1\", \"comment\": \"* \\u201cIs ASNPK implemented by replacing C_t by C_t and C_<t in the SNP equation -- specifically Eq. (2) in the original version of the manuscript?\\u201d\\n\\nYes, ASNPK is an extended SNP that is also allowed to perform attention on a memory buffer that stores all the context history $C_t$ and $C_{<t}$ as key-value set. We would like to emphasize that vanilla SNP does not have attention mechanism and what we implement as ASNPK uses attention on the context history.\\n\\n\\n* \\u201cWhile the result on the synthetic data looks positive, it would be a lot more convincing if you also show how it would perform in the moving MNIST dataset.\\u201d \\n\\nWith more time until camera ready, we will compute these results. 1D regression is a good representative of the problem setting and we expect the result trend to be similar in 2D settings also.\\n\\n\\n* \\u201c \\u2018Apparently, this can come across trivially by replacing C_t with both C_<t and C_t in Eq. (2). This is in fact very similar to what the authors did in Eq. (4) which summarizes the generative process of ASNP\\u2019 -- if you could, please refer to the original version or elaborate if somehow it is no longer relevant.\\u201d\\n\\nASNPK extends SNP in the following way i.e. attending on $C_{\\\\le t}$ rather than just $C_t$. But vanilla SNP does not have attention mechanism so the transformation from SNP to ASNPK is not trivial. \\n\\nFurthermore, between ASNPK (which the reviewer seems to refer to by saying Eq.2) and ASNP (which reviewer is referring by saying Eq. 4), we contend against saying that the two are similar. In general, we would like to refrain from reductively bringing the two generative model equations into the same form. 
In ASNPK, $C_{<t}$ is a static, stale and unoptimized representation of the past stored in a memory. On the other hand, ASNP has $C_t = C_t^i \\\\cup C_t^r$ and the imaginary context is sequentially updated and optimized representation whose transition model is implemented by a different attentive mechanism.\\n\\n* \\u201cIn Figure 10, how many context points are being used per time step for ASNP? In the rebuttal you mentioned that ASNP only generated 25 imaginery context points but in the latest manuscript, you mentioned that it generated 75 context points. Also for ASNPK, is the total number of context points 100 or 150? -- the latest manuscript said 150.\\u201d\\n\\nThe values in the manuscript are more accurate and they override the ones mentioned the previous response. But the key point is that the imaginary context is a more efficient storage in terms of size while performing better than the memory buffer.\"}", "{\"title\": \"About comparison between ASNP and ASNPK (SNP variant that attends to the entire context history)\", \"comment\": \"Thank you for the extra experiments.\\n\\nIs ASNPK implemented by replacing C_t by C_t and C_<t in the SNP equation -- specifically Eq. (2) in the original version of the manuscript?\\n\\nWhile the result on the synthetic data looks positive, it would be a lot more convincing if you also show how it would perform in the moving MNIST dataset.\\n\\nI have made a point (see below) earlier on the seemingly trivial technical extension to go from SNP to ASNP and after reading the rebuttal, I am still not very convinced that this extension is non-trivial (since the rebuttal does not elaborate on this point technically) -- could you discuss more on this?\\n \\n\\\"Apparently, this can come across trivially by replacing C_t with both C_<t and C_t in Eq. (2). This is in fact very similar to what the authors did in Eq. 
(4) which summarizes the generative process of ASNP\\\" -- if you could, please refer to the original version or elaborate if somehow it is no longer relevant.\", \"more_minor_questions\": \"In Figure 10, how many context points are being used per time step for ASNP? \\n\\nIn the rebuttal you mentioned that ASNP only generated 25 imaginary context points but in the latest manuscript, you mentioned that it generated 75 context points. \\n\\nAlso for ASNPK, is the total number of context points 100 or 150? -- the latest manuscript said 150.\"}", "{\"title\": \"Global Response\", \"comment\": \"We thank all the reviewers for their insightful comments. While we also address each reviewer individually, we feel that re-emphasizing our motivation and contributions would orient the reviewers to better assess our work and our responses and to possibly adjust our scores.\\n\\n* Motivation\\nSNP is a new class of models that can deal with a broad range of new modeling problems i.e., sequential meta-transfer learning. Given that this model has many useful potential applications, it is an important problem to study the challenges that arise in scaling it to more real and complex settings. Considering that its non-temporal version, NP, suffers from significant under-fitting, an important unanswered question arises with regard to SNP is whether SNP suffers from under-fitting or not (it does) leading to other follow-up questions: if so, how severely? (Significantly). To resolve this, can the existing solutions (like ANP) be directly used? (No). If not, could the naive extension of the existing solution work? (Sub-optimally). Can we do better? (Yes). 
Our contribution in this paper is thus not simply to apply attention to SNP but to analyze and study the model to answer all of the above questions and propose a model that claims a better modeling hypothesis than the existing contemporary solutions.\\n\\n* Our Contributions\\nWe provide empirical evidence that underfitting indeed severely deteriorates SNP by showing that the standard SNP significantly improves with our proposed solution on various tasks. We found that without our attention mechanism, SNP can only provide very suboptimal performance. Our work shows not only that attention can resolve this but that it is also not enough to perform attention on a memory buffer that simply stores all the observed contexts. Instead, attention should be performed on a memory that is also sequentially updated. Consequently, a) this memory would learn to store a temporally optimized representation of the past geared towards better prediction and b) this memory would require fewer storage locations as it does not need to naively store each and every observed context point but provides an optimized set of learned memory buffer. And indeed, we empirically show that SNP extended with naive attention on a memory buffer of all the past contexts under-performs the proposed ASNP even if it has both sequential encoding and memory, and that the use of a sequentially \\u2018imagined\\u2019 memory is a better choice. Our experiments also show that fewer storage locations are needed in the imagined memory while furnishing superior performance as compared to the naive memory buffer. Lastly, in a comprehensive set of experiments on 1D and 2D regression and rendering tasks, we demonstrate ASNP's performance gains over NP, ANP, SNP, GQN, and TGQN in different context regimes.\\n\\n* Additional Experiments:\\nWe would like to highlight some additional experimental results that were computed based on the reviewers\\u2019 suggestions that made our claim stronger. 
Our existing results showed that ASNP outperforms the non-sequential frameworks i.e., NP and ANP and also the sequential one i.e. SNP. While we maintained that our novel imaginary context led to our improved results, reviewers raised some alternative hypotheses which prompted us to experimentally investigate:\\n a) Baselines with a larger number of parameters could achieve similar results. (Anon. Reviewer 3)\\n b) SNP with attention to the lossless memory of all contexts could achieve similar results. (Anon. Reviewer 1)\\nThese alternatives are put to rest in our new results showing that ASNP still outperforms them. The plots for the new experiments on the 1D regression tasks have been added to the appendix of the updated manuscript. More details about these experiments are also described in the respective responses to the reviewers.\"}", "{\"title\": \"Response to AnonReviewer2\", \"comment\": \"Thank you for the positive review. Yes, we agree that demonstrating ASNP on more challenging tasks would be a nice and fruitful future work.\"}", "{\"title\": \"Response to AnonReviewer1\", \"comment\": \"Thank you for the detailed review. With these points, we have revised our paper in numerous ways. We address the raised questions in the following points.\", \"why_attend_past_contexts\": \"SNP and ASNP are meta-transfer learning frameworks that require fewer observations from the current contexts because they also simultaneously use the information learned in the past. Although the contexts of the past come from a different stochastic process, they are still related to the current stochastic process through the underlying transition dynamics. For instance, consider an agent playing soccer. Its sight is focussed on the ball in the front gathering only a limited observation in the current moment. 
But using the past knowledge, the player still maintains a dynamic representation of the entire field and especially of the important information (like the locations of the key players) which is useful for making predictions/actions. In the additional experiment (see the response to the next question) where we allow SNP to only attend its own time-step (i.e. K=1) and then increase the K to allow it to attend the past, its performance, not surprisingly, improves with increasing K (although still underperforming against ASNP). So attending to the past contexts is useful.\", \"why_not_attend_on_the_entire_history_of_contexts\": \"We hypothesize and also empirically show that it is a better design choice to have a sequentially updated memory than a simple memory buffer that stores all the observed context points. A sequentially updated memory has the benefit that the model learns to optimize the memory contents for its usefulness in predictions. Another benefit is that it requires fewer storage locations as it does not naively store each and every incoming context point. As mentioned, we compared the proposed ASNP against SNP endowed with attention on lossless memory of all context points gathered in the most recent K time-steps. Although the performance of the latter improves with increasing K = 1 -> 3 -> 5 -> infinity, it quickly saturates at infinity while still under-performing the proposed ASNP clearly highlighting the benefits of the imaginary context. Another interesting point is that when K=infinity, the lossless memory buffer can collect up to 100 or more context points while ASNP outperforms this by attending only on 25 imaginary context points at any given time-step -- clearly highlighting that the imagined context is more size-efficient.\", \"why_analogy_to_the_human_brain_and_need_for_sequentially_updated_memory\": \"We have reduced the emphasis on the brain analogy in the updated manuscript. 
Our work is inspired by the under-fitting in SNP that hinders its wider usage. The analogy to the human brain supports our hypothesis that an imagination process for recalling the past is effective and the right way forward to resolve under-fitting.\", \"technical_exposition_is_too_vague\": \"Thank you for pointing out. We have worked on making the ANP description and the technical exposition of ASNP clearer (see updated manuscript). At the same time, due to space limitations, the finer details have been delegated to the appendices.\", \"the_arbitrariness_of_the_design_choices\": \"The main idea is in introducing the imaginary context via $P(C_t^i | C_{<t}, C^r_t)$ and our implementation design choices realize that idea -- demonstrated by our better performance on a variety of tasks. The finer design choices are a result of empirical model selection but some broad design choices were hypothesized as follows. A. Imaginary queries should complement the available real context and therefore should depend on them.\\nB. Having imagination-tracker RNNs should be beneficial for prediction using the inferred knowledge of the underlying dynamics. \\nC. Attention on the tracker RNN hidden states helps capture the pairwise interactions between the context points and also updates the imagined memory with the more correct information from the real contexts.\", \"key_technical_challenge\": \"As responded in a previous question, making use of the past contexts is necessary for the attention to operate on. Realizing that a simple memory buffer of the context history is a sub-optimal design choice, developing the idea and the implementation of the imaginary context was the key technical challenge.\", \"attention_without_imaginary_context\": \"As answered in an earlier question, it is possible to truncate the stored contexts to hold the K most recent ones. 
We have also tested this with K=infinity for the 1D regression tasks as described in the response for all reviewers, we found that our proposed ASNP still outperforms.\\n\\nNP/ANP not much different from Eq.3: We politely disagree. Eq.3 depicts how the imaginary queries and values can be propagated from one time-step to the next and it is implemented using attention mechanisms. This is clearly different from NP which does not use attention and also different from ANP because it can neither produce nor propagate imaginary contexts.\"}", "{\"title\": \"Response to AnonReviewer3\", \"comment\": \"We are grateful for the comments. We have modified the paper and addressed them as follows.\", \"time_complexity_comparison\": \"In the appendix, we show the training curves against the wall clock time. It shows that the proposed ASNP converges the fastest among the baselines.\", \"parameter_size_comparison\": \"We tested the proposed ASNP with representation size n=128 against the baselines NP, ANP, and SNP with n=128 and n=512. While the baselines improve going from n=128 to n=512, they still underperform ASNP. In scenario c) which tests how well a model accumulates contexts over time, this gap is clear. The reason we choose size=512 in the baselines is that it shows a similar performance when size=1024 in ANP (Kim et al.) and also needs a shorter training time that allowed us to produce these new findings within the rebuttal period. From these findings, we can say that having a bigger latent size or equivalently more parameters in the baselines is not sufficient and imaginary context itself plays a useful role.\", \"figure_label_inconsistency\": \"We have fixed the inconsistent labels of the figures.\", \"incremental_research\": \"The problem we address (as also described in the response for all reviewers) is real and an important one. From this perspective, we believe our performance gains are a significant step forward. 
NP and SNP are crucial meta-learning frameworks with nice properties which were originally demonstrated on relatively simpler tasks. Addressing under-fitting is the key to making them usable in realistic settings. Our imaginary context shows that attention on a sequentially-updated memory outperforms using a lossless copy of the past while also being more size-efficient.\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper deals with the underfitting problem happening in neural process and sequential neural process (SNP). The idea is to incorporate the attention scheme in SNP and carry out the so-called attentive sequential neural process (ASNP) for sequence learning.\", \"strength\": \"1. A combination of attention into SNP.\\n2. Some formulations were provided.\\n3. Different tasks were evaluated to investigate the merit of this method.\", \"weakness\": \"1. The comparison for time complexity and parameter size was missing.\\n2. The labels in figures were inconsistent.\\n3. 
An incremental research.\"}", "{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper combines ideas from attentive and sequential neural processes to incorporate an attention mechanism to the existing sequential neural process, which results in an attentive sequential neural processes framework.\\n\\nWhile the idea is somewhat interesting, I think this paper is technically vague and not well-motivated, which makes it hard for me to feel convinced that the problem exists and is non-trivial, and that the proposed solution is significant. Let me elaborate on my thoughts below:\\n\\nFirst, the authors stated that SNP is subject to the underfitting problem that plagues NP but it is not clear to me why, in the temporal context of SNP, do we need to focus our attention on past contexts, which are no longer relevant. Could the authors please motivate this with a concrete application scenario? Without a concrete scenario, I do not feel very convinced that the problem exists.\\n\\nSecond, the argument that augmenting SNP with an attention mechanism is not trivial is somewhat contrived. In particular, the reason for this non-triviality is that (in the authors' own words) SNP assumes that it cannot store the past context as is -- so what if we simply store the past context & condition the representation on the entire history of past context instead? \\n\\nApparently, this can come across trivially by replacing C_t with both C_<t and C_t in Eq. (2). This is in fact very similar to what the authors did in Eq. 
(4) which summarizes the generative process of ASNP -- the only difference is the generation of imaginary contexts, whose necessity is again questionable, as I elaborate next.\\n\\nThird, the motivation for imaginary context is pulled from a very distant literature on how a human brain memorizes past experiences in a lossy memory consolidation, which only retains the most important sketches. In the context of ASNP, it is not, however, clear to me why this mechanism is necessary given that entire lossless memory can be stored except that without a lot of contexts, there is not a need for an attention component (as implied in first paragraph of Section 3) which is a contrived motivation.\\n\\nFourth, the technical exposition of this paper is too vague. Given that the key contribution here is about an attention component, the background review on ANP is surprisingly informal with no technical detail at all. For the other parts, the technical part is also mostly abstracted away -- what is presented is therefore not that much different from a typical generative model with latent variables, which makes it unclear whether there is a technical challenge here. \\n\\nIn fact, from what I see, going from Eq. (2) to Eq. (4) is not much of a conceptual challenge and the execution of Eq. (4) (particularly the attention component described in Section 3.2) seems like a bunch of arbitrary engineering ideas which were put together to substantiate Eq. (4). \\n\\nIs there a technical challenge in the entire pipeline that should have been highlighted?\\n\\nFor the experiment, could the author compare the performance between ASNP and ASNP without the imaginary component (but with the attention mechanism)? It would be a good experiment to see if the imaginary component is necessary.\\n\\nTo summarize, I believe the paper in its current state is not well-motivated and appears very incremental given the prior works of SNP and ANP. 
Even its imaginary component, which is the key contribution here, is, if I understand Eq. (3) correctly, not much different from context sampling of a NP.\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"Authors present a method to address the problem of underfitting found in sequential neural processes. They cover the literature appropriately in regards to neural processes and developments pertaining to tackling the underfitting problem by applying an attention mechanism. Although, this has successfully been achieved with Neural Processes, the case is different with sequential neural processes, as they cannot store the past context.\\nAuthors addressed this problem by introducing an attention mechanism and model, i.e. Attentive sequential neural processes, which incorporates a memory mechanism of imaginary context. This imaginary context is generated through an RNN network and are treated as latent variables.\\nThe results presented show some promising improvements over other methods used and more results have been included in the appendix. It would be nice to demonstrate the performance in more challenging tasks as well, however the results presented and the new context-imagination introduced are quite promising indeed.\\nI have read the rebuttal carefully. I appreciate the extra effort put by the authors to address the issues raised from the other reviewers. I think, albeit not ground-breaking research, it could be a good addition to the programme nonetheless.\"}" ] }
S1e4jkSKvB
The intriguing role of module criticality in the generalization of deep networks
[ "Niladri Chatterji", "Behnam Neyshabur", "Hanie Sedghi" ]
We study the phenomenon that some modules of deep neural networks (DNNs) are more critical than others. Meaning that rewinding their parameter values back to initialization, while keeping other modules fixed at the trained parameters, results in a large drop in the network's performance. Our analysis reveals interesting properties of the loss landscape which leads us to propose a complexity measure, called module criticality, based on the shape of the valleys that connect the initial and final values of the module parameters. We formulate how generalization relates to the module criticality, and show that this measure is able to explain the superior generalization performance of some architectures over others, whereas, earlier measures fail to do so.
[ "Module Criticality Phenomenon", "Complexity Measure", "Deep Learning" ]
Accept (Spotlight)
https://openreview.net/pdf?id=S1e4jkSKvB
https://openreview.net/forum?id=S1e4jkSKvB
ICLR.cc/2020/Conference
2020
{ "note_id": [ "K9e1ZJNn3Kk", "bqlxhX1Y2E", "ByxYbrVioB", "Skl10NNsor", "B1gk2JoDoH", "ByeWYJswsH", "BJxOkkjvsH", "ByxzqndmsB", "SJlrtEmAKS", "SJeMgKD5tS" ], "note_type": [ "comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1599252107409, 1576798735708, 1573762304634, 1573762247030, 1573527462900, 1573527416934, 1573527263542, 1573256329902, 1571857532682, 1571612905600 ], "note_signatures": [ [ "~Cemal_Gurpnar1" ], [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1911/Authors" ], [ "ICLR.cc/2020/Conference/Paper1911/Authors" ], [ "ICLR.cc/2020/Conference/Paper1911/Authors" ], [ "ICLR.cc/2020/Conference/Paper1911/Authors" ], [ "ICLR.cc/2020/Conference/Paper1911/Authors" ], [ "ICLR.cc/2020/Conference/Paper1911/AnonReviewer4" ], [ "ICLR.cc/2020/Conference/Paper1911/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper1911/AnonReviewer3" ] ], "structured_content_str": [ "{\"title\": \"Architecture Selection\", \"comment\": \"First of all, I would like to thank you for your grateful work. My question is about architecture selection. If we want to use network criticality measure for an architecture selection, should I train the architectures that I try to make a selection in on the data and after that make a selection or can I use the network criticality measures you found in your paper for any dataset?\\n\\nIf I summarize my question, are the network criticality measures you found in your paper for architectures like ResNet 18, VGG 16 etc. generic and can be usable for any dataset?\\n\\nThank you.\"}", "{\"decision\": \"Accept (Spotlight)\", \"comment\": \"The paper analyses the importance of different DNN modules for generalization performance, explaining why certain architectures may be much better performing than others. 
All reviewers agree that this is an interesting paper with a novel and important contribution.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Added Densenet experiments.\", \"comment\": \"Apart from the changes listed above we have also updated our submission to include DenseNets in our experiments.\"}", "{\"title\": \"Added Densenet experiments.\", \"comment\": \"Apart from the changes listed above we have also updated our submission to include DenseNets in our experiments as you suggested.\"}", "{\"title\": \"Author response\", \"comment\": [\"Thank you for your positive comments on the paper. We have updated the paper to address both of your suggestions adequately and we hope you consider increasing your score if you find these changes satisfactory.\", \"Intuitively moving closer to the initialization values would indicate that the effective function class is smaller and hence the network should generalize better. For example, in the extreme case where none of the weights change from their initialization value the function class would be a single function (the initial function) and the generalization error would be very low (~0%) as both the train and test error would be very high (but equal).\", \"Thank you for your suggestion. We have added 3 networks: ResNet50, VGG11 and another fully connected network to our experimental section. We have also repeated our experiments on the CIFAR100 dataset.\"]}", "{\"title\": \"Author response\", \"comment\": \"Thank you for your very encouraging remarks and useful feedback. We have added the discussions and experiments you suggested (and even more) to the paper. In light of this and your strong review, we hope you would consider increasing your score to \\u201caccept\\u201d.\", \"we_next_address_your_questions_in_detail\": \"(1) Thanks for this insightful question! As you correctly pointed out, the choice of module decomposition is somewhat arbitrary. 
One could choose a module to comprise of a single scalar weight or the entire network and this would lead to different generalization scores. To answer your question more directly, the theoretical results hold for any decomposition. Why did we choose the modules this way? Since we are looking at a linear combination of parameters with their initialization, it makes sense to choose modules to be transformations that are linear in the parameters. Otherwise, the output would be very sensitive to a linear combination with initialization. Among all such decompositions, we chose the one with the minimum number of modules. In a sense, modules are chosen to be the largest well-behaved units in the network (and that happens to be the most natural choice). It is certainly possible that a smarter choice for module exists for each architecture which could lead to a lower network criticality measure. We are encouraged by your question and will add more discussion around this issue to the paper.\\n\\n(2) Thank you for your suggestions. We have added 3 networks, ResNet50, VGG11 and another fully connected network, to our experimental section. We have also repeated all our experiments on the CIFAR100 dataset. Moreover, we have added another complexity metric to our table of comparison. Therefore, our current empirical results are much stronger than the submitted version. \\n\\n\\nThe network criticality for ResNets is inversely correlated on the CIFAR10 dataset, but this is not true in the CIFAR100 dataset (which we just added). Here ResNet101 has the lowest generalization error and that is reflected by the network criticality score. We believe that the reason ResNet101 has higher generalization error for CIFAR10 in our experiments is that we do not use batch normalization to train. We make this choice since it is not clear how to accurately rewind batch normalization layers (we have tried several natural choices and they did not work). 
\\n\\n(3) Since criticality measure correlates with generalization, we would ideally want to encourage this measure to be low. This can happen in two different ways: 1) We can design regularizers based on the criticality measure and add them to the objective function. This would ensure that the trained network has low criticality. 2) We can explicitly design architecture with low criticality measure. This for example can happen by adding an explicit change in the modules (such as rewinding them to their initialization) to make sure the the learned network has low criticality. Another potential application is architecture search. As you mentioned, the model selection is usually done using a validation set but the search is over a large number of models, the result would overfit to the validation set. Therefore, measures such as the criticality score can be used in these scenarios.\\n\\nWe have corrected the typos that you identified in our updated draft. Thanks for pointing them out!\"}", "{\"title\": \"Author response\", \"comment\": \"Thank you for your positive remarks about both our theoretical contributions and our experimental study. We are a bit puzzled that your positive remarks are not reflected in the final score (weak reject). We hope that \\u201cweak reject\\u201d is chosen by mistake; otherwise, we would be happy to answer any concerns/questions you may have about our submission.\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #4\", \"review\": \"The paper introduces concept of \\\"module criticality\\\" to understand the role played by several modules in a model and how this affects the generalization of the models. 
This is quite an important problem to study, as it helps to develop a better understanding of current architectures and potentially reduce their size without suffering an accuracy drop. This is a great theoretical contribution.\\n\\nThe paper studies this per module, compared to previous works where the entire architecture is rewound. The paper also studies this for ResNet models, which are more widely/practically used than just the fully connected layers alone. This helps better understand the model as a whole. \\n\\nThe authors do a robust experimental study for different network initializations and various CNN models like ResNet18, 34, 101, VGG16 and also FCN. The results demonstrate module criticality to be a good metric for the generalization of models.\\n\\nOverall, a good paper.\"}
Would those choices change the generalization bounds or relative criticality across different architectures?\\n(2)\\tScope of experimental results. The ranking results would be much more compelling if they included a broader range of architectures, including more recent models with more branching, e.g., DenseNet. Is there some reason ResNet101 has higher generalization error than 18 and 34? Net. Criticality for ResNets is inversely correlated with the number of layers; is there an explanation for this? Is this true for other very deep models?\\n(3)\\tPractical use. To compute the criticality measure, we must train the model; but, if we train the model, we can compute generalization directly. So, what is the practical application of the measure? Is there some way it could be used to save computation? Could it help in the case of a small validation dataset, which we do not want to look at many times during model selection?\", \"minor_typos\": [\"Section 2.2: \\u201cAn stable phenomena\\u201d\", \"Section 2.3: \\u201c\\u2026an the\\u2026\\u201d\", \"In appendix: \\u201cResNet101: ResNet34 architectures\\u2026\\u201d\", \"----------------------------\"], \"after_rebuttal\": \"The authors have addressed my concerns, and I've increased my rating. There are still a few points I'd like to see addressed in the final version:\\n\\n1. The fact that the approach cannot yet be applied to batch normalization is a big practical drawback. Some discussion of the approaches you tried, why they didn't work, and possible future directions for overcoming this would be appreciated.\\n\\n2. Clarify in the paper that the \\\"PAC Bayes\\\" approach used for comparison (Table 1) is your method, i.e., an ablated version of criticality. As is, someone reading the paper quickly may think all you've done is add an alpha parameter to an existing \\\"PAC Bayes\\\" approach, which does fairly well on its own.\\n\\n3. 
Visualizing the experimental tables as scatterplots could make them easier for a reader to interpret.\"}", "{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I made a quick assessment of this paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper builds upon the \\\"module criticality\\\" phenomenon and proposes a quantitative approach to measure this at the module and the network level. A module's criticality is low if, when it is switched back to its initialization value, the error does not change drastically.\\n\\nThe paper uses a convex combination of the initial weights and the final weights of a layer/module to define an optimization path to traverse. The authors quantitatively define the module criticality such that it depends on how much closer the weights can get to the initial weights on this path while still being robust to random perturbations. The network criticality is defined as the sum of the module criticality measures of all the layers. \\n\\nEmpirical results on CIFAR10 show that the network's criticality is reflective of the generalization performance. For example, increasing ResNet depth leads to improved generalization and low criticality. However, intuitively it is not clear why moving closer to the initial values, and thus lower average criticality, indicates better generalization. It will be useful to have a discussion on this issue. Results on other datasets will also be useful.\\n\\nOverall, the network criticality measure appears a useful tool to predict the generalization performance compared to other measures such as distance from initialization and weight spectrum.\"}
rygEokBKPS
Yet another but more efficient black-box adversarial attack: tiling and evolution strategies
[ "Laurent Meunier", "Jamal Atif", "Olivier Teytaud" ]
We introduce a new black-box attack achieving state of the art performance. Our approach is based on a new objective function, borrowing ideas from $\ell_\infty$-white box attacks, and particularly designed to fit derivative-free optimization requirements. It only requires access to the logits of the classifier, without any other information, which is a more realistic scenario. Not only do we introduce a new objective function, but we also extend previous works on black box adversarial attacks to a larger spectrum of evolution strategies and other derivative-free optimization methods. We also highlight a new intriguing property: deep neural networks are not robust to single shot tiled attacks. Our models achieve, with a budget limited to $10,000$ queries, success rates up to $99.2\%$ against the InceptionV3 classifier with $630$ queries to the network on average in the untargeted attack setting, which is an improvement of $90$ queries over the current state of the art. In the targeted setting, we are able to reach, with a limited budget of $100,000$, a $100\%$ success rate with $6,662$ queries on average, i.e. we need $800$ fewer queries than the current state of the art.
[ "adversarial examples", "black-box attacks", "derivative free optimization", "deep learning" ]
Reject
https://openreview.net/pdf?id=rygEokBKPS
https://openreview.net/forum?id=rygEokBKPS
ICLR.cc/2020/Conference
2020
{ "note_id": [ "F-82D30Ew", "SJlZXsL2iH", "BJxAAmW9oH", "rJxFN8ywsS", "SkeUlIkviH", "Bylcoryvir", "HylN_kKRKr", "SkgdbG96KH", "rkxAlx2iKS" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798735679, 1573837592753, 1573684181965, 1573479985073, 1573479917848, 1573479841667, 1571880811623, 1571820031987, 1571696630119 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1910/Authors" ], [ "ICLR.cc/2020/Conference/Paper1910/Authors" ], [ "ICLR.cc/2020/Conference/Paper1910/Authors" ], [ "ICLR.cc/2020/Conference/Paper1910/Authors" ], [ "ICLR.cc/2020/Conference/Paper1910/Authors" ], [ "ICLR.cc/2020/Conference/Paper1910/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper1910/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper1910/AnonReviewer3" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"This paper proposes a new black-box adversarial attack based on tiling and evolution strategies. While the experimental results look promising, the main concern of the reviewers is the novelty of the proposed algorithm, and many things need to be improved in terms of clarity and experiments. The paper does not gather sufficient support from the reviewers even after author response. 
I encourage the authors to improve this paper and resubmit to a future conference.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Update of the paper\", \"comment\": \"We updated a version of the paper according to the remarks of all three reviewers.\"}", "{\"title\": \"Comment about \\\"There are No Bit Parts for Sign Bits in Black-Box Attacks\\\"\", \"comment\": \"Compared to SignHunter, for epsilon=0.05, max 10000 queries, and ImageNet Inceptionv3, SignHunter reaches a 2% failure rate with an average of 578.6 queries, whereas we reach, with continuous CMA and a tile size of 30, a 1.1% failure rate with an average of 589 queries. We seem to have slightly better results than they have.\"}", "{\"title\": \"Answer to reviewer 1\", \"comment\": \"We thank reviewer 1 for their comments.\\n\\nWe applied adversarial training on the CIFAR10 dataset, which is, up to our knowledge, the most efficient defense method so far.\\n\\nWe agree that many box-constraint handling methods exist. However, the point here is not the handling of such constraints: with box constraints, the optimal points are close to the frontiers. Based on an extensive set of experiments using Nevergrad, CMA-ES and (1+1)-ES reveal themselves to be more competitive on this type of problem.\\n\\nThis type of formulation is not new; in machine learning, it was used in Zoph et al., for instance. We agree that our paper combines existing approaches, even though, up to our knowledge, these types of evolutionary strategies have never been used so far in this context.\\nThere is a typo in Formulation (2); it should be: max_{\\\\tau} L(f(x + \\\\epsilon tanh(\\\\tau)), y).\", \"answers_to_questions\": \"\", \"p5\": \"How are the original images to be attacked selected for Fig 2?\\n\\nThe images are selected at random from the ImageNet dataset\", \"p6\": \"\\\"we highlight that neural networks are not robust to l\\u221e tiled random noise. \\\" Isn't it the contribution of (Ilyas et al., 2018b)?\\n\\nIlyas et al. 
introduced the tiling trick based on the observation that the gradient does not vary much between two close points. Here we exhibit that convolutional neural nets are not robust to random tiled noise. This property helps in speeding up a subfamily of evolutionary algorithms based on pure random search in the first steps. This explains the good results obtained by (1+1)-ES and CMA-ES.\", \"p7\": \"What are the number of queries in Figure 3 and Table 1? Are they the number of queries spent until these algorithms found an adversarial example which is categorized to a wrong class for the first time?\\n\\nIn Figure 3, the number of queries is the number of queries spent until our algorithms find an adversarial example. In Table 3, we reported the mean and the median of these numbers of queries. We make it clearer in the updated version of the paper.\"}", "{\"title\": \"Answer to reviewer 2\", \"comment\": \"We thank reviewer 2 for their comments on the paper.\\n\\n1. We thank reviewer 2 for pointing out these articles. We decided to compare to what is, up to our knowledge, the state of the art in black-box attacks (at least in papers published at NeurIPS 2019, 2018, ICML 2019, etc.), which is the Parsimonious attack [Moon et al., 2019]. Bandits is often taken as a reference for black-box attacks [Ilyas et al., 2018], so we took it as a reference. We read the papers you provided to us. It turns out that [1] would be a good baseline to compare with too. Note that this paper is also submitted to ICLR (https://openreview.net/forum?id=SygW0TEFwH&noteId=SJx_zBx6tH) and we were not aware of its existence, so thank you. The results reported in [3] are not competitive with those obtained by Parsimonious attacks. The attack designed in [2] is an L2 one. It requires the training of an autoencoder, which does not allow a fair comparison with the black-box attacks our algorithms belong to.\\n\\n\\n2. 
The method we propose is for a Linf-bounded problem; it is not usual to compare with other distortions. But clearly our methods aim to reach the boundary of the Linf ball, so the distortion might be large. That's why we also compare to Linf attacks.\"}", "{\"title\": \"Answer to reviewer 3\", \"comment\": \"We thank reviewer 3 for their comments.\\n\\nCMA uses a second order approximation of the shape of level sets. This is computationally expensive, but leads to an optimal use of a restricted budget.\\nDiagonal CMA is a computationally faster version, thanks to a diagonal covariance (the reader might think of a diagonal Hessian matrix).\\nThe (1+1)-ES is even simpler; the covariance is proportional to the identity (corresponding to a Hessian matrix with all eigenvalues equal). It is therefore relevant for very low budgets as it does not have to learn any matrix; on the other hand, it is weaker for greater budgets as the sampling does not match the shape of level sets. \\n\\n- In section 3.2, is the form of the discretized problem a standard way to transform from continuous to discrete one? What is the intuition of using a and b? Have you considered using only one variable to do it?\\n\\nWe designed this formulation ourselves, but it would not be surprising if it has already been used elsewhere. Using this formulation, the solution to (3) is already in the corners of the Linf ball, which is intuitively more likely to fool the network. We tried the two implementations (with one or two variables) and the results are very similar.\\n\\n- In section 3.3.2 what do you mean by \\u201cwith or without softmax, the optimum is at infinity\\u201d? I hope the authors could further explain it.\\n\\nSorry for this unclear statement. The optima of the ball-constrained problem (1) would be close to the boundary or on the boundary of the Linf ball. In that case, the optimum of the continuous problem (2) will be at infinity or \\u201cclose\\u201d to it. 
In the discrete case (3), it is easy to see that the optimum is when a_i or b_i -> infty. We reformulate this sentence accordingly in the updated version of the paper.\\n\\n- In eq (2), do you mean max_{\\\\tau} L(f(x + \\\\epsilon tanh(\\\\tau)), y) ?\\n\\nThank you for having spotted this typo. \\n\\n- In section 3.3.1, the authors said (1+1)-ES and CMA-ES can be seen as an instantiation of NES. Can the authors further elaborate on this?\\n\\nNES strategies are optimization strategies based on the natural gradient [Ollivier et al., 2017, Wierstra et al., 2008]. They consist in iteratively updating a search distribution (the distribution of the optima). CMA-ES consists in updating the mean and the covariance of the distribution. (1+1)-ES updates the mean and constrains the covariance matrix to be isotropic. The underlying optimization target is not the function itself, but its quantiles with respect to the distribution. \\n\\n[Wierstra et al., 2008] https://arxiv.org/pdf/1106.4487.pdf\\n[Ollivier et al., 2017] https://arxiv.org/pdf/1106.3708.pdf \\n\\n- Can the authors provide an algorithm for DiagonalCMA?\\n\\nThe DiagonalCMA version is when the updates are only on the diagonal coefficients of the covariance matrix, hence making for a faster computation. We can provide an algorithm; it is a simple modification of CMA discussed in [Ros & Hansen, 2008] https://hal.inria.fr/inria-00270901/document. We will make it clearer in the paper.\\n\\n- It is better to put the evolution strategy algorithms in the main paper and discuss them. \\n\\nYes, we can do so.\\n\\n- Can the authors also comment/compare the results with the following relevant papers?\\nLi et al. \\\"NATTACK: Learning the Distributions of Adversarial Examples for an Improved Black-Box Attack on Deep Neural Networks.\\\"\\nChen et al. \\\"A Frank-Wolfe Framework for Efficient and Effective Adversarial Attacks.\\\"\\n\\nFor both papers, the reported results are not competitive w.r.t. the Parsimonious attack. 
However, we are eager to include them in our benchmark if the reviewer so wishes.\\n\\n- In Table 1, why are the # of tiles entries missing for the Parsimonious and Bandit methods? I think both of the baselines use the tiling trick? And they should also run using the optimal tiling size? The results seem directly copied from the Parsimonious paper? It makes more sense to rerun it in your setting and environment because the sampled data points may not be the same. Since CMA costs significantly more time, it would make for a fairer comparison to also report the attack time needed for each method.\\n\\n\\nWe reported the results from the Parsimonious paper, but we did not rerun the experiments in our own setting, because they use the same architecture. However, as suggested by the reviewer, we will re-run the experiments in our setting.\\n\\nThe Parsimonious attack progressively divides the image into tiles but does not use a fixed tile size. Bandits itself uses a tile size. In the updated version we make this clearer.\\n\\nCMA-ES indeed takes quite a lot of time; we will update the paper with the reported runtime - the diagonal one is much faster (but needs more evaluations).\\n\\n- In Table 3, why did you not compare with Bandit and Parsimonious attacks? \\n\\nIn [Moon et al., 2019], they compare on a different architecture, which is WideResNet 32x10; as we did not run the experiments in their setting, it is unclear that the results would be the same. But we will give the corresponding results in the updated version of our paper.\"}", "{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposes a DFO framework to generate black-box adversarial examples. 
By comparing with Parsimonious and Bandits, the proposed approach achieves lower query complexity and higher attack success rate (ASR).\", \"i_have_two_main_concerns_about_the_current_version\": \"1) Some important baselines might be missing. In addition to (Ilyas et al., 2018b) and (Moon et al., 2019), the methods built on zeroth-order optimization (namely, gradient estimation via function differences) were not compared. Examples include \\n[1] There are No Bit Parts for Sign Bits in Black-Box Attacks\\n[2] AutoZOOM: Autoencoder-based Zeroth Order Optimization Method for Attacking Black-box Neural Networks\\n[3] SIGNSGD VIA ZEROTH-ORDER ORACLE\\n \\n2) In addition to attack success rate and query complexity, it might be useful to compare different attacks in terms of $\\\\ell_p$ distortion, where $p \\\\neq \\\\infty$. This could provide a clearer picture on whether or not the query efficiency and the attack performance are at the cost of increasing the $\\\\ell_1$ and $\\\\ell_2$ distortion significantly.\\n\\n\\n########### Post-feedback ##############\\nThanks for the response and the additional experiments to address my first question. However, I am not satisfied with the response \\\"But clearly our methods aim to reach the boundary of linf ball, so the distortion might be large\\\" to the second question.\\n\\nI am Okay with the design of $\\\\ell_\\\\infty$ attack. However, if the reduction in query complexity is at a large cost of perturbation power, e.g., measured by $\\\\ell_2$ norm, then it is better to demonstrate this tradeoff. Furthermore, if the $\\\\ell_2$ norm is constrained, will the proposed $\\\\ell_\\\\infty$ attack outperform the others? 
This is also not clear to me.\\n\\nThus, I decided to keep my score.\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper proposes black-box adversarial attacks on deep neural networks. The proposed approaches consist of the tiling technique proposed by Ilyas et al. (2018) and derivative-free approaches. The proposed approaches have been applied to targeted and untargeted adversarial attacks against modern neural network architectures such as VGG16, ResNet50, and InceptionV3 trained on ImageNet and CIFAR10 datasets. Experimental results show higher attack success rates with a smaller number of queries.\\n\\nThe experimental results look quite promising, i.e., revealing the vulnerability of deep neural networks against black-box adversarial attacks. A possible weakness in the experimental design is that the authors haven't applied any defense methodology to the classification models to be attacked. Yet the results are promising. \\n\\nFrom the viewpoint of technical soundness, the approach is a simple combination of existing approaches. The tiling technique is used in Ilyas et al. (2018) combined with a bandit approach. The current paper simply replaces the bandit with evolution strategies. The introduction of the evolution strategies is motivated by their good performance as zeroth order optimization algorithms. \\n\\nA small novelty appears in the way to handle a bounded search space. The authors claim that many DFO algorithms are designed for an unbounded real search space and need some constraint handling. The authors proposed two ways of transforming the bounded search space into the unbounded real search space. 
However, there must be existing approaches for this type of constraint (rectangle constraint) in DFO settings. I cannot list such approaches here as there are a huge number of papers addressing constraints of this type. There is not enough discussion in the paper of why these two proposed approaches are promising. Formulation (2) makes the problem ill-posed, and technically the optimal point may not exist. Formulation (3) with the softmax representation makes the optimization problem noisy, hence it may annoy the optimizer. Nonetheless, I believe the combination of these constraint handling techniques and evolutionary approaches is not new.\\n\\nSome minor comments / questions below:\", \"p5\": \"How are the original images to be attacked selected for Fig 2?\", \"p6\": \"\\\"we highlight that neural networks are not robust to l\\u221e tiled random noise. \\\" Isn't it the contribution of (Ilyas et al., 2018b)?\", \"p7\": \"What are the number of queries in Figure 3 and Table 1? Are they the number of queries spent until these algorithms found an adversarial example which is categorized to a wrong class for the first time?\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposes a new query-efficient black-box attack algorithm using better evolution strategies. The authors also add the tiling trick to make the attack even more efficient. The experimental results show that the proposed method achieves state-of-the-art attack efficiency in the black-box setting.\\n\\nThe paper indeed presents slightly better results than the current state-of-the-art black-box attacks. 
It is clearly written and easy to follow; however, the paper itself does not bring much insightful information.\", \"the_major_components_of_the_proposed_method_are_two_things\": \"using better evolution strategies and using the tiling trick. The tiling trick is not something new; it was introduced in (Ilyas et al., 2018) and also discussed in (Moon et al., 2019). The authors further empirically studied the best choice of tiling size. I appreciated that, but will not count it as a major contribution. In terms of better evolution strategies, the authors show that (1+1)-ES and CMA-ES can achieve better attack results, but it lacks intuition/explanations of why these help and what the difference is. It would be best if the authors could provide some theory to show the advantages of the proposed method; if not, at least the authors should give more intuition/explanations/demonstrative experiments to show the advantages.\", \"detailed_comments\": [\"In section 3.2, is the form of the discretized problem a standard way to transform from continuous to discrete one? What is the intuition of using a and b? Have you considered using only one variable to do it?\", \"In section 3.3.2 what do you mean by \\u201cwith or without softmax, the optimum is at infinity\\u201d? I hope the authors could further explain it.\", \"In eq (2), do you mean max_{\\\\tau} L(f(x + \\\\epsilon tanh(\\\\tau)), y) ?\", \"In section 3.3.1, the authors said (1+1)-ES and CMA-ES can be seen as an instantiation of NES. Can the authors further elaborate on this?\", \"Can the authors provide an algorithm for DiagonalCMA?\", \"It is better to put the evolution strategy algorithms in the main paper and discuss them.\", \"Can the authors also comment/compare the results with the following relevant papers?\", \"Li, Yandong, et al. \\\"NATTACK: Learning the Distributions of Adversarial Examples for an Improved Black-Box Attack on Deep Neural Networks.\\\" ICML 2019.\", \"Chen, Jinghui, Jinfeng Yi, and Quanquan Gu. 
\\\"A Frank-Wolfe Framework for Efficient and Effective Adversarial Attacks.\\\" arXiv preprint arXiv:1811.10828 (2018).\", \"In Table 1, why for Parsimonious and Bandit methods, # of tiles parts are missing? I think both of the baselines use tilting trick? And they should also run using the optimal tiling size? The result seems directly copied from the Parsimonious paper? It makes more sense to rerun it in your setting and environment cause the sampled data points may not be the same. Since CMA costs significantly more time, it makes a fair comparison to also report the attack time needed for each method.\", \"In Table 3, why did not compare with Bandit and Parsimonious attacks?\", \"======================\", \"after the rebuttal\", \"I thank the authors for their response but I still feel that there is a lot more to improve for this paper in terms of intuition and experiments. Therefore I decided to keep my score unchanged.\"]}" ] }
SJgXs1HtwH
TreeCaps: Tree-Structured Capsule Networks for Program Source Code Processing
[ "Vinoj Jayasundara", "Nghi Duy Quoc Bui", "Lingxiao Jiang", "David Lo" ]
Program comprehension is a fundamental task in software development and maintenance processes. Software developers often need to understand a large amount of existing code before they can develop new features or fix bugs in existing programs. Being able to process programming language code automatically and provide summaries of code functionality accurately can significantly help developers to reduce time spent in code navigation and understanding, and thus increase productivity. Different from natural language articles, source code in programming languages often follows rigid syntactical structures and there can exist dependencies among code elements that are located far away from each other through complex control flows and data flows. Existing studies on tree-based convolutional neural networks (TBCNN) and gated graph neural networks (GGNN) are not able to capture essential semantic dependencies among code elements accurately. In this paper, we propose novel tree-based capsule networks (TreeCaps) and relevant techniques for processing program code in an automated way that encodes code syntactical structures and captures code dependencies more accurately. Based on evaluation on programs written in different programming languages, we show that our TreeCaps-based approach can outperform other approaches in classifying the functionalities of many programs.
[ "Program Classification", "Capsule Networks", "Deep Learning" ]
Reject
https://openreview.net/pdf?id=SJgXs1HtwH
https://openreview.net/forum?id=SJgXs1HtwH
ICLR.cc/2020/Conference
2020
{ "note_id": [ "n48TE60Q7p", "B1g1azRooB", "SklWo-RsjH", "HJxpWZAiiS", "r1eiYk0oiH", "S1xcQm8k5B", "SklkAftTtH", "r1llKxqKYH" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798735648, 1573802678821, 1573802392589, 1573802245089, 1573801859057, 1571935010190, 1571816134713, 1571557495617 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1909/Authors" ], [ "ICLR.cc/2020/Conference/Paper1909/Authors" ], [ "ICLR.cc/2020/Conference/Paper1909/Authors" ], [ "ICLR.cc/2020/Conference/Paper1909/Authors" ], [ "ICLR.cc/2020/Conference/Paper1909/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper1909/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper1909/AnonReviewer3" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"This paper proposes an application of capsule networks to code modeling.\\n\\nI see the potential in this approach, but as the reviewers pointed out, in the current draft there are significant issues with respect to both the clarity of the motivation and the empirical results (which start at a much lower baseline than previous work). I am not recommending acceptance at this time, but would encourage the authors to clarify the issues raised in the reviews for a future submission.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Response to Reviewer #3\", \"comment\": \"We would like to thank the reviewer for their valuable time, helpful feedback and insightful suggestions to further improve our study.\", \"q3_1\": \"Intuition behind the use of Capsule Networks for Program Classification\", \"response\": \"We thank the reviewer for pointing out these issues.\\n\\nRespectfully, we believe that Nghi D. Q. 
BUI is one person.\\n\\nFor Xinyi Zhang, the reviewer is right that Zhang should be the family name.\", \"q3_2\": \"Empirical Results\", \"q3_3\": \"Minor revisions\"}", "{\"title\": \"Response to Reviewer #1\", \"comment\": \"We would like to thank the reviewer for their valuable time, helpful feedback and insightful suggestions to further improve our study.\", \"q1_1\": \"Convolutions may drop a significant amount of semantically-interesting information\", \"response\": \"We acknowledge the reviewer\u2019s concern with respect to the experimental results.\\n\\n1) The results presented in Section 6.3 were intended as an ablation study, to demonstrate the effects of different aspects of TreeCaps such as the proposed variable-to-static algorithm and the dimensionality of the classification capsule output. Apart from the experiments with varying dimensionalities of the code capsule output (which were intended as a demonstration of under- or redundant latent representation), we did not conduct any dataset-specific hyperparameter tuning to improve the performance. Each result shown consists of the mean and the standard deviation of 3 independent trials with random initialization. Thus, we do not believe that the gains are due to a trivial case of overfitting. \\n\\n2) As the reviewer has correctly presumed, we did not conduct any optimization for Dataset C. We used these datasets due to the limited availability of suitable datasets (with respect to resource constraints, etc.). 
As the reviewer has kindly suggested, we intend to conduct further experiments to establish the robustness of TreeCaps with other large datasets in our future studies.\\n\\n3) We plan to conduct additional experiments and compare TreeCaps performance with other existing approaches such as code2vec and code2seq.\", \"q1_2\": \"Experiment Results\"}", "{\"title\": \"Response to Reviewer #2\", \"comment\": \"We would like to thank the reviewer for his valuable time, helpful feedback and insightful suggestions to further improve our study.\", \"q2_1\": \"It doesn\\u2019t appear that some of the motivation for capsule networks on images didn\\u2019t seem to transfer neatly to this setting; for example, there is no equivalent of inverse graphics as there is no reconstruction loss.\", \"response\": \"We acknowledge the reviewer\\u2019s concern with respect to the empirical results. The primary reason for the ambiguity between TBCNN [Mou et al. (2016)] and our re-implementation is the initial embeddings, as explained in Section 6.2. Mou et al. (2016) have used custom-trained initial embeddings for a small set of about 50 AST node types defined specifically for C language only, while our approach generates the initial embeddings for a much larger vocabulary of more than three hundred unified AST node types for both C and Java. We decided to follow a more generalized approach across programming languages, at the expense of performance gain resulting from small, specific vocabularies.\\nWe believed that it would be more general and fairer to compare across datasets in more than one programming language by using the same (and larger) set of AST node vocabulary used in our approach.\\n\\nWe acknowledge the reviewer's perspective on the fairness of the results and potential errors or discrepancies in our re-implementation of TBCNN [Mou et al. (2016)]. 
Retrospectively, in addition to using the larger set of AST node vocabulary, we should have also applied our approach directly to the initial embeddings with the same smaller set of AST node vocabulary used in TBCNN [Mou et al. (2016)] and ASTNN [Zhang et al. (2019)] etc. for the dataset in C language so that we may have a clearer comparison.\", \"q2_2\": \"Variable to Static Routing Algorithm\", \"q2_3\": \"Empirical Results\"}", "{\"title\": \"Response to Reviewer#2 Q#1 and Reviewer#3 Q#1: Intuition behind the use of Capsule Networks for Program Classification\", \"comment\": \"We would like to thank the reviewers for their valuable time, helpful feedback and insightful suggestions to further improve our study.\", \"q2_1\": \"It doesn\\u2019t appear that some of the motivation for capsule networks on images didn\\u2019t seem to transfer neatly to this setting; for example, there is no equivalent of inverse graphics as there is no reconstruction loss.\", \"q3_1\": \"Intuition behind the use of Capsule Networks for Program Classification\", \"response\": \"Among others, the primary motivation behind the use of capsule networks for program source code classification is the hypothesis that they automatically learn dependency relationships existing among entities that are not spatially co-located, due to the proposed variable to static routing. It is widely accepted that dependency information can greatly aid program source code related tasks. Most graph networks need the dependency information to be externally integrated [BUI et al.(2019)]. Even GraphCaps [Zhang & Chen (2019)] does not address the dependency relationships in their study.\\n\\nVariable to static routing recognizes the capsules representing the entities with the highest probability of existence, and routes the capsules which have similar vector outputs to them. 
As a result, capsules representing entities with dependencies will be routed together, and in the subsequent layers, this dependency information can be utilized for prediction. Hence, we hypothesize that TreeCaps learns the relevant useful dependency relationships while the network is training, without explicitly providing additional information or constraints.\\n\\nHowever, we acknowledge that we require additional experiments not included in the manuscript to justify the hypothesis with respect to the dependency relationships (whether TreeCaps learns the dependencies among entities as expected), despite the performance gain of TreeCaps in comparison to a few other existing approaches. We are currently conducting studies to justify this hypothesis, and we summarize the procedure as follows. We integrate a back-tracking mechanism after a forward pass with a given test case, which identifies the primary variable capsules with k-highest coupling coefficients, connected to a given primary static capsule. We then trace the entities in the source code corresponding to the identified primary variable capsules and consider them as the entities with dependency relationships as identified by the TreeCaps network. We subsequently compare related pieces of code identified by TreeCaps to program dependencies identified by program analysis techniques to validate our hypothesis.\\n\\nWe acknowledge that we have not used reconstruction loss in this study and thank the reviewer for the kind suggestion. We believe that reconstruction loss does not enforce the inverse graphics concept alone; instead, it functions as a regularizer which boosts the routing performance by enhancing the pose encoding (in the case of images). Existing studies on CapsNets for text classification do not use reconstruction loss, yet manage to capture child-parent relationships well [Zhao et al. (2018)]. 
However, we agree that the use of a reconstruction loss would have certainly boosted the performance, and aided the learning of dependency relationships. We plan to add the reconstruction loss to TreeCaps, as we mentioned in Section 6.4.\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper proposes a neural architecture for summarizing trees inspired by capsule networks from computer vision. The authors re-use a tree convolution from previous work for the bottommost layer, and then propose adaptations to the dynamic routing from capsule networks so that it can be applied to variable-sized trees. The paper applies the proposed architecture to three different program classification datasets, which are in three different languages. The paper reports empirical gains compared to two architectures proposed by previous work.\\n\\nI think that it's interesting to apply the capsule network architecture to tree classification, but unfortunately some of the motivation for capsule networks on images didn't seem to transfer neatly to this setting; for example, there is no equivalent of inverse graphics as there is no reconstruction loss (as pointed out by the authors in Section 6.4).\\n\\nAlso, the variable-to-static capsule routing indeed appears novel, but I was a bit confused by its internal details. It appears that the outputs of the previous layer which occur most often will get routed (considering lines 6-8 of Algorithm 1 which up-weights each of the $\\\\hat{u}_i$ based on its similarity to $v_j$; the $v_j$ are initially a re-numbered subset of $\\\\hat{u}_i$), without any prior transformation of the previous layer first. 
It seems to me that this doesn't allow the prior layer to predict more complex features about the input that the subsequent layer is expected to capture. In fact, for certain code classification tasks, it may be that rare capsule outputs from the initial layer are the most important to preserve.\\n\\nMy biggest concern has to do with the empirical results. The source of Dataset C (Mou et al 2016, https://arxiv.org/pdf/1409.5718.pdf) reports 94.0% accuracy in Table 3 on their TBCNN method on the same dataset, whereas this paper reports 79.40% accuracy for TBCNN. I understand that the latter result comes from a reimplementation, but it seems fairer to compare against (or additionally report) the results from the original authors of the method.\\n\\nAlso, the paper cites ASTNN (Zhang et al 2019, https://dl.acm.org/citation.cfm?id=3339604) in the introduction, and even though that paper reports (in table 2) 98.2% accuracy on Dataset C, the results table of the paper under review does not mention this in the evaluation section. I don't think that a paper necessarily has to achieve empirical results beating all previous ones in order to merit acceptance, but the way that the comparison is currently set up doesn't seem to facilitate a clear comparison of the pros and cons of this method versus other ones in the literature.\\n\\nFor the above reasons, I vote to reject the paper. 
For future submissions, it would be good to see a more comprehensive empirical comparison of the proposed method against others, and also to have more explanations about the design of the network.\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper proposes a capsule-network-based architecture for predicting program properties and is evaluated on three tasks for predicting an algorithm from a code snippet.\\n\\nTechnically, the paper aims to transfer the idea of convolution from images and apply it to abstract syntax trees of programs. To do this, two dimensions describing the position of a node in a tree are used - the depth of a node in a tree and its index in the list of children of its parent. This choice, however, is similar to image convolutions only at a very artificial level and drops a significant amount of semantically-interesting information for programs from the index of the node at the parents, while keeping the total depth (which rarely matters in programs, as code is usually semantically similar no matter how nested in other code it is).\\n\\nThe experiments are small (on two small and one slightly larger dataset) and inconclusive:\\n1) Given the number of experiments done for tuning parameters on Dataset B (with ~640 examples), it is not clear that we are not observing some trivial case of overfitting. The improvement over GGNN is quite small and mostly due to ensembles.\\n2) The problem of small evaluation datasets makes the results inconclusive. Only Dataset C is sufficiently large, if I assume no optimization like for Dataset B was performed.\\n3) Furthermore, it looks like the considered tasks may be better handled by models such as code2vec or code2seq than by GGNN. 
The paper needs to include stronger baselines.\"}", "{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposes a tree-structured capsule network for program source code processing (essentially a program classification task with three datasets).\\n\\nThe idea of incorporating tree structures into the design for capsule networks is not wrong. However, I am not sure why a capsule network is even needed in program classification. The authors follow the clich\\u00e9 of the importance of tree structures, but show little insight into the use of capsule networks in program analysis. The authors started rationalizing the capsule networks by saying \\\"Capsule Networks itself is a promising concept ...\\\" Being a promising concept itself doesn't necessarily mean it is suitable to be applied in program classification.\\n\\nThe treatment in Sec. 5.1 of the tree structures is pretty much the same as in Mou et al. [2016], linearly weighting a token by its position. Sec. 5.2 is extremely hard to understand. It starts with presenting an algorithm and its line-by-line interpretation. I know how to program, but I wish to get some intuition of why capsule networks are needed for program classification, and how it is different from a generic capsule network and/or a graph capsule network [Xinyi & Chen, 2019]. Given that a graph capsule network is in place, I found the contribution of this paper (tree capsule network) to be limited. \\n\\nThe experiments are very thin. The authors only compare their results to TreeCNN and Gated Graph NN (GGNN). It's unclear if TreeCaps is better than other existing models, such as Transformer, TreeTransformer, GraphCap, etc.\\n\\nWhile the authors experimented on three datasets, the evidence is actually limited. 
Dataset A is saturated (99.3%--100%). Dataset B shows some performance improvement (compared with GGNN and TreeCNN only). Dataset C basically shows TreeCaps is similar to GGNN. The gap between 89.41% and 86.52% is largely due to model ensembles. But the performance of GGNN ensembles is unknown. \\n\\nIn summary, the paper applies Capsule Network to tree structures. The authors mainly follow the clich\\u00e9 of tree structures, but are not too excited about the capsule stuff. I am not excited either. \\n\\n==\", \"minor\": \"Nghi D. Q. BUI -> misformatted. Probably they are two people.\\nZhang Xinyi and Lihui Chen --> Not sure if Xinyi is the last name.\"}" ] }
SJeQi1HKDH
Learning with Social Influence through Interior Policy Differentiation
[ "Hao Sun", "Bo Dai", "Jiankai Sun", "Zhenghao Peng", "Guodong Xu", "Dahua Lin", "Bolei Zhou" ]
Animals develop novel skills not only through interaction with the environment but also from the influence of others. In this work we incorporate social influence into the reinforcement learning scheme, enabling agents to learn both from the environment and from their peers. Specifically, we first define a metric to measure the distance between policies, then quantitatively derive a definition of uniqueness. Unlike previous precarious joint optimization approaches, the social uniqueness motivation in our work is imposed as a constraint that encourages the agent to learn a policy different from the existing agents while still solving the primal task. The resulting algorithm, namely Interior Policy Differentiation (IPD), brings about performance improvement as well as a collection of policies that solve a given task with distinct behaviors.
[ "Reinforcement Learning", "Social Uniqueness", "Policy Differentiation" ]
Reject
https://openreview.net/pdf?id=SJeQi1HKDH
https://openreview.net/forum?id=SJeQi1HKDH
ICLR.cc/2020/Conference
2020
{ "note_id": [ "dQwIjMh1wH", "BkxvJHLDsB", "B1xKqSYboB", "HkladEF-jH", "HkxNv7YZjr", "BklTKgR-qB", "r1lkFUpCKr", "S1g9D58Rtr", "ryeNTMTUOr" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review", "official_comment" ], "note_created": [ 1576798735618, 1573508319149, 1573127569394, 1573127284655, 1573127003966, 1572098181390, 1571898999143, 1571871330513, 1570325180094 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1908/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper1908/Authors" ], [ "ICLR.cc/2020/Conference/Paper1908/Authors" ], [ "ICLR.cc/2020/Conference/Paper1908/Authors" ], [ "ICLR.cc/2020/Conference/Paper1908/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper1908/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper1908/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper1908/Authors" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"The paper proposes a mechanism for obtaining diverse policies for solving a task by posing it as a multi-agent problem, and incentivizing the agents to be different from each other via maximizing total variation.\\n\\nThe reviewers agreed that this is an interesting idea, but had issues with the placement and exact motivations -- precisely what kind of diversity is the work after, why, and what accordingly related approaches does it need to be compared to.\\nSome reviewers also found the technical and exposition clarity to be lacking.\\n\\nGiven the consensus, I recommend rejection at this time, but encourage the authors to take the reviewers' feedback into account and resubmit to another venue.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Response to authors\", \"comment\": \"Thank you for the rebuttal. After consideration, I'm keeping my score the same, as I am still not convinced by the utility of the policy diversity argument. 
I'd encourage the authors to explore their method in a concrete setting where this has demonstrable advantages.\"}", "{\"title\": \"Re: Official Blind Review #1\", \"comment\": \"Thank you for the insightful comments.\\n\\n1) The Motivation for Policy Diversity:\\nThe motivation of our work is to find different policies for the same task with high efficiency.\\nThere are applications where the diversity of policies is useful. 1. It may help empower agents with different styles or characters, which is especially useful in traffic simulations and video games where each instance is expected to behave differently. 2. Sometimes different policies themselves are important: applying diversity-seeking methods in quantitative trading may help to discover different trading strategies (e.g., momentum and reversion), with each unique policy acting as an *Alpha* to earn extra profit. 3. The performance w.r.t. a previously assigned reward function only represents the reward function\\u2019s preference. In contrast, learning diverse policies can help the agent to develop different strategies and become less reliant on the reward function. We will clarify the motivation in the Intro. section.\\n\\n2) The Inspiration from Heess et al. 2017:\\nLooking back at the success of Heess et al. (2017), their agent learns to move forward with different poses according to the obstacles in front of them. Those running, jumping and climbing policies can be regarded as diverse policies for the task *moving forward*. The forward bonus is the only reward they utilized. 
Attributing their success to Darwinism (i.e., the survival of the fittest), we are inspired to generate different policies with different termination signals, and we further proposed to couple the uniqueness constraints with the environment, eventually forming the IPD.\\nRegarding the traditional RL learning paradigm as Darwinism, our proposed learning paradigm enables agents to interact not only with the environment but also with their peers, which is the reason we introduce the concept of social uniqueness in our work. We will reduce the usage of the term *social influence* in the revision. \\n\\n3) More on Performance Improvement:\\nGiven the motivation to find diverse policies in (1), we demonstrate our proposed method of generating diverse policies efficiently rather than generating better-performing policies, i.e., we embrace all diversities including the poorly performing ones and regard them as equal peers. When the peers perform well (e.g. 10 policies trained with PPO in Walker), a new agent trying to be different from its peers will perform poorly. On the contrary, when the peers perform poorly in general (e.g. 10 policies trained with PPO in Hopper and HalfCheetah), a new agent trying not to resemble its peers will tend to perform better and result in performance improvement. As our paper focuses on learning different policies, we only regard the performance-boosting in some cases as byproducts. Combining the learned diverse policies and enforcing different policies to have better performance are promising topics in future work.\\n\\n4) For the Claim and Other Concerns:\\nWe have revised some of the mentioned parts of the paper. \\n- As for the *inconsistency*: in our main results (Table 1 and Fig. 3), all of the experiments start with 10 PPO policies as peers. But in the results shown in Fig. 4, the experiments started with 1 PPO policy, as follows: the first policy is trained with PPO, the second policy is trained with IPD to be different from its first peer, and so on. 
And we repeat the WHOLE PROCESS 5 times to get average results. Consequently, as there are more poor policies in the 10 PPO peers for Hopper and HalfCheetah, the performance of IPD surpasses PPO in those two environments. \\n- In Heess et al. 2017, the diversity of learned moving strategies is limited by the number of different kinds of terrains and obstacles, i.e., their agent cannot learn to jump forward without an obstacle in front of it. Thus, the *diversity* (if we regard their different skills as policies, although this is not strictly true, since such skills are combined in one policy and only get triggered when facing corresponding terrains or obstacles) is limited by the diversity of the environment, making it hard to produce different policies in batch.\\n- In TNB, the final performance will rely on the scale of the intrinsic reward, and we provide a detailed analysis of WSR and TNB in Appendix G to analyze their deficiencies. With this analysis, TNB can be revised with a condition to avoid too much optimization in the direction of $r_{novel}$ or $L_{novel}$.\\n- The success rate in Table 1 is defined as *surpasses the baseline during the training process*, so that PPO sometimes gets 100% because its policies always achieve similar performances during training.\\n- In TNB and WSR, the same metric is used.\\n- We will try our method on more environments as well as in loosened environments that permit the agent to survive in more states.\"}", "{\"title\": \"Re: Official Blind Review #2\", \"comment\": \"Thank you for the insightful comments.\\n\\n1) The Motivation of IPD:\\nIn previous diversity-seeking approaches, the diversity reward or novelty reward is always considered as an extra term of the learning objective, leading to a deformation of the loss function landscape, and therefore deliberate justification is needed. To tackle such a problem, our proposed method draws key insight from the social uniqueness motivation of human society. 
Specifically, given the same primal task, people tend to reach the objective in different ways, which is defined as social uniqueness motivation in the psychology literature (Chan et al. 2012). Such uniqueness motivation is not an explicit objective but a constraint, providing inspiration to our work on how to avoid distorting the loss landscape. This is also our motivation for rewriting the problem from Eq.(6) to Eq.(7) in our work.\\n\\n2) Relation with DIAYN, and Performance Analysis: \\nThe work of DIAYN can be regarded as a kind of curiosity-driven method that motivates an agent to explore more previously unseen states. While DIAYN categorizes **different skills within a policy** conditioned on the latent variable $Z$, our approach concentrates on increasing the differences among policies, i.e., different behaviors between policies. \\nDIAYN, as a sort of meta-learner, learns skills in an unsupervised way and can be used as pre-training in various tasks to boost performance (thus DIAYN is especially useful in reward-sparse settings). \\nOn the other hand, we demonstrate our proposed method of generating diverse policies efficiently rather than generating better-performing policies, i.e., we embrace all diversities including the poorly performing ones and regard them as equal peers. When the peers perform well (e.g. 10 policies trained with PPO in Walker), a new agent trying to be different from its peers might perform poorly. On the contrary, when the peers perform poorly in general (e.g. 10 policies trained with PPO in Hopper and HalfCheetah), a new agent trying not to resemble its peers will tend to perform better and result in performance improvement. \\n\\n3) Experimental Settings:\\nWe provide an algorithm box in Appendix G to make our method clearer.\\nThe main experiments (Table 1 and Fig. 3) are executed as follows: first, we train 10 PPO policies, and then train another 10 policies with different methods, i.e., WSR, TNB or IPD separately. 
The performance (reward) shown in Table 1 is averaged over 10 policies in each method, so that it corresponds to the Y-axis of Fig. 3.\\nThe experimental results in Fig. 4 come as follows: the first policy is trained with PPO, the second policy is trained to be different from the former peer, and so on. And we repeat the WHOLE PROCESS 5 times to get averaged results (shown in Fig. 4). The results in Fig. 4 show great variance, and we attribute such variance to the reliance of later policy performance on its peers (when peers perform well, a new agent trying to be different from its peers will perform poorly...). Intuitively, as the number of peers increases, the new policy is subject to more constraints, so it becomes harder to find a feasible policy, especially in simple tasks where diversity is limited by the environment (e.g. the Hopper). And we do observe a clear decrease in the curve for Hopper in Fig. 4.\\n\\n4) Details on Baselines\\nRestricted by the page limit, we put a detailed introduction to WSR and TNB into the Appendices. We have updated some analysis of those methods, mainly on their correspondence to constrained optimization problems, in Appendix G. We will move more information into the main text in our revision.\\n\\n5) On the Metric in Sec.3.1\\nThe metric is important for it enables fast and rigorous computation of differences between policies, i.e., it guarantees the self-consistency of the distance between policies. Moreover, it lays the foundation for Proposition 1, based on which we implement our algorithm with single-trajectory estimation and further improve the learning efficiency (compared with sampling states from the whole state space directly).\"}", "{\"title\": \"Re: Official Blind Review #3\", \"comment\": \"Thank you for the insightful comments.\\n\\nWe will try to make the second contribution of our work clearer in the revision. Specifically, this refers to the IPD method for policy differentiation. 
The IPD provides a general framework that can be applied whenever there is more than one objective in RL to optimize, e.g., learning with demonstrations (where there is an extra behavior cloning loss), curiosity-driven learning (where there is an extra curiosity bonus), etc. \\nIn short, it considers the extra loss or bonus in the sample-collection process, executed by early termination, so that policies trained with such samples will naturally satisfy the constraints.\\n\\n1) Eq.7:\\nYes. In practice, we should not expect a new agent to be different from others at every timestep, so a moving average is utilized. In fact, our uniqueness metric is based on sampled trajectories (and Proposition 1 shows we can use a single trajectory to get an unbiased estimate). Thus, $r_{int}$ is naturally a kind of moving average. We have revised Eq.7 to be more precise and to avoid misleading readers. We also provide more implementation details in Appendix D (paragraph: Threshold Selection) and Appendix F.\\n\\n2) Algorithm and the IPD Method:\\nWe provide an algorithm box in Appendix G to make our method clearer. Inspired by the Interior Point Methods, our proposed method tackles the uniqueness credit assignment problem in a quite different but natural way, i.e., we need not assign rewards but only need to send termination signals to the agent during training when the constraints are broken. Moreover, based on our proposed uniqueness metric, the algorithm is easy to implement and will be easy to apply to any other prevailing RL algorithms.\\nIn Appendix G, we also provide an analysis of all three methods (WSR, TNB, and IPD) on their relationship to constrained optimization problems, namely the WSR\\u2014Penalty Method, TNB\\u2014Feasible Direction Method, and IPD\\u2014Interior Point Method. \\n\\n3) Variance and Experimental Settings\\nAs our method is proposed to seek uniqueness (diversity), the randomness in the learning process is a must. 
Moreover, as we train policies sequentially, the randomness will be accumulated, leading to a high variance in Fig. 4. \\nThe experimental results shown in Fig. 4 come as follows: the first policy is trained with PPO, the second policy is trained to be different from the former peer, and so on. And we repeat the WHOLE PROCESS 5 times to get averaged results (shown in Fig. 4). Intuitively, if the first 5 policies are poor, the 6th policy will have a larger chance to perform better (because more poor policies are considered as *peers it should not be similar to*), and vice versa.\\nIn our main results (Table 1 and Fig. 3), the experiments start with 10 PPO policies as peers. Consequently, if there are more poor policies in the 10 PPO peers (e.g., in Hopper and HalfCheetah), the performance of IPD will surpass PPO. Otherwise, if the 10 PPO peers perform quite well, IPD might only be able to find poorly performing policies to be different from the good ones. As our paper focuses on learning different policies, we only regard the performance-boosting in some cases as byproducts. Combining the learned diverse policies and enforcing different policies to have better performance are promising topics in future work.\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper presents a new algorithm for maximizing the diversity of different policies learned for a given task. The diversity is quantified using a metric; in this case, the total variation is used. A policy is different from a set of other policies if its minimum distance to all the other policies is high. 
The authors formulate a new constrained optimization problem where the diversity to previous policies is lower bounded in order to avoid a tedious search for combining task reward and diversity reward. The algorithm is evaluated on different Mujoco locomotion tasks.\", \"positive_points\": [\"The idea of maximizing the minimum total variation is novel and interesting\", \"The approach seems to work better than current SOTA approaches for generating diverse behavior\"], \"negative_points\": [\"The paper needs to be improved in terms of writing as in particular some of the main parts of the algorithm are unclear\", \"The definition of Eq 7 does not make too much sense to me (see below)\", \"The results have high variance and some conclusions drawn from them are hard to verify given the plots\"], \"more_comments\": [\"Eq 7 does not seem to be a very good choice to me. Why does the total variation need to be different at *every* time step? We can certainly generate very diverse behavior even if the policy is exactly the same for some states. It could even be the case that for some states, only one action does not lead to a failure. In this case, Eq 7 would completely fail to produce any valid policy (?)\", \"In general the writing is clear; however, it gets quite unclear for the main part of the algorithm (after Eq. 7). It is unclear how equation 8 is obtained and why the limit of alpha going to 0 should lead to the same solution as Eq 7 (if alpha is 0 then it should be the same as optimizing just the task reward??). While this might be obvious for experts of the interior point method, it needs to be explained in much more detail in this paper. I think it is always a good strategy to make a paper self-contained, in particular for the main parts of the algorithm.\", \"Also the termination mechanism needs to be much better explained. What reward is given in this case? 
The current formulation sounds quite heuristic to me, but maybe a better explanation can fix that.\", \"while Fig 3 shows a clear advantage of the method, the section about better policy discovery would need better data to verify their claims. Fig 4 shows very noisy results and while for the hopper there might be a clear improvement of performance for number of policies > 2, this does not seem to be very significant for half cheetah. Given the amount of noise in the results, many more trials would be needed to really make such statements.\"]}", "{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposes a new way to incentivize diverse policy learning in RL agents: the key idea is that each agent receives an implicit negative reward (in the form of an early episode termination signal) from previous agents when an episode begins to resemble prior agents too much (as measured by the total variational distance measure between the two policy outputs).\", \"results_on_three_mujoco_tasks_are_mixed\": \"when PPO is combined with the proposed objective for training diverse policies, it results in very strong performance boosts on Hopper and HalfCheetah, but falls significantly short of standard PPO on Walker 2D. I would have liked to see a deeper analysis of what makes the approach work in some environments and not in others.\\n\\nExperimental comparisons in the paper are only against alternative approaches to optimize the same diversity objective as the proposed approach (with weighted sum of rewards (WSR) or task novel bisection (TNB)). Given that this notion of diversity is itself being claimed as a contribution, I would expect to see comparisons against prior methods, such as in DIAYN. 
There are other methods that have been proposed before in similar spirit to induce diversity in the policies learned. Aside from the evolutionary approaches covered in related work, within RL too, there have been methods such as the max-entropy method proposed in Eysenbach et al, \\\"Diversity is All You Need...\\\". These methods, evolutionary and RL, could be compared against to make a more convincing experimental case for the proposed approach.\", \"the_experimental_setting_is_also_not_fully_clear_to_me\": [\"throughout experiments, are the diversity methods being evaluated for the average performance over all the policies learned in sequence to be different from prior policies? Or only the performance of the last policy? Related, I would be curious to know, if K policies are trained, the reward vs the training order k of the K policies. This is close to, but not identical to the study in Fig 4, to my understanding.\", \"Aside from the above points being unclear, the paper in general could overall be better presented. While I am not an expert in this area, I would still expect to be able to understand and evaluate the paper better than I did.\", \"Sec 3.1 makes a big deal of metric distance, but never quite explains how this is key to the method.\", \"The exact baselines used in experiments are unhelpfully labeled \\\"TNB\\\" (with no nearby expansion) and \\\"weighted sum of rewards (WSR)\\\", with further description moved to appendix. 
In general, there are a few too many references to appendices.\", \"The results in Fig 2 are difficult to assess for diversity, and this is also true for the video in the authors' comment.\", \"There is an odd leap in the paper above Eq 7, where it claims that \\\"social uniqueness motivates people in passive ways\\\", which therefore suggests that \\\"it plays more like a constraint than an additional target\\\".\", \"Sec 5.1 at one point points to Table 1 for \\\"detailed comparison on task related rewards\\\" but says nothing about any important conclusions from the table.\", \"There are grammar errors throughout.\"]}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper proposes a new method for learning diverse policies in RL environments, with the ultimate goal of increasing reward. The paper develops a novel method, called interior policy differentiation (IPD), that constrains trained policy to be sufficiently different from one another. They test on 3 Mujoco domains, showing improved diversity in all of them and improved performance in 2 of them.\\n\\nOverall, this paper is very well executed. The explanation of the method is thorough, and the paper is well-written and polished. I like the idea of enforcing a constraint on the policy diversity via manipulating the transitions that the agents can learn from. The experiments section compares to two other methods of increasing policy diversity and IPD outperforms both of them. I think this is a solid contribution to the literature on improving policy diversity.\\n\\nThat being said, I have some concerns about the paper:\\n1) The motivation for explicitly encouraging diverse policies is a bit confusing, and isn\\u2019t very convincing. 
The paper draws inspiration from social influence in animal society, and says it formulates social influence in RL. First, the term social influence already has an established meaning in RL (see e.g. Jaques et al. (2018)) and refers to agents explicitly influencing others in a causal way in a multi-agent environment. Second, I think calling policy diversity a form of social influence is a bit of a stretch (and anthropomorphizes the agents unnecessarily). I think the paper should scrap the \\u2018social influence\\u2019 angle and instead frame it as \\u2018increasing policy diversity\\u2019. \\n\\nThe paper also motivates itself in comparison to Heess et al. (2017), which uses a set of environments to get diverse policies. However, the goals of these works are different: in Heess et al., the goal is to train agents that can exhibit complex behaviours in relatively simple environments (the focus is more on complexity of behaviours vs. the fact that agents in the same environment learn diverse policies). In this work, the goal is not to develop any more complex policies, but to have different agents on the same task learn diverse policies (and since the experiments are in Mujoco, the degree of diversity is limited). Thus, while the works are related, I don\\u2019t think the Heess et al. paper is a good motivation for this work. \\n\\nI think the primary motivation that makes sense for explicitly encouraging diversity is to improve final performance on the task. Thus, I think it would be best for the paper to clarify the introduction by focusing on this. The paper could also give some reasons why having diverse policies is inherently a good thing (maybe for some applications with humans-in-the-loop it could be helpful?), but currently this is absent. \\n\\n2) Given that improving the final reward of an RL agent is the main goal, it\\u2019s not clear that the experiments (in 3 simple Mujoco settings) are enough to show this reliably. 
Specifically, it\\u2019s unclear whether encouraging diversity in this way will generalize to more complex tasks or domains (e.g. tasks in Mujoco with sparser reward, or environments with larger state spaces). It is possible that the success of the technique is most prevalent when there is only a small observation space. \\n\\n3) I\\u2019d like to see more discussion / analysis of *why* we\\u2019d expect diverse policies to lead to better rewards. In work on intrinsic motivation / curiosity for better exploration, it\\u2019s clear that encouraging agents to visit unseen states will lead to a better exploration of the state space, and thus will make them more likely to stumble upon rare rewards. But is this also true for policy diversity? Currently, the paper speculates that encouraging diversity could help agents not all fall into the same failure mode. But I could also imagine that it could lead agents to avoid a successful strategy that another agent learned. For example, if a certain sequence of moves is necessary at the beginning to avoid termination, the first agent could find this sequence of moves, but the other agents might avoid this sequence for the sake of diversity (depending on the threshold). Does something like this happen in practice? In my opinion, the environments considered aren\\u2019t rich enough to know. \\n\\n4) There are also some inconsistencies in results section. Specifically:\\n- There seems to be a disagreement between the results in Table 1 (which uses 10 peers) and the ablation over number of peers in Figure 4, which shows that the performance with 10 agents is roughly the same as it is with 1 agent (and overall shows little positive trend between the number of peers and performance). \\n- If \\u2018success rate\\u2019 means \\u2018percentage of time beating average PPO policy\\u2019, why does PPO sometimes get 100% in Table 1? \\n\\nGiven the concerns above, I\\u2019d assess the paper as being borderline for accept. 
I\\u2019m currently erring on the side of rejection, but I\\u2019d consider changing my score if some of the above points are addressed.\", \"smaller_concerns_and_questions\": \"- There are a couple of instances where I found the claims of the paper with respect to related work to be over-stated. For example:\\n\\u2018Yet designing a complex environment requires a huge amount of manual efforts\\u2019 -> not necessarily. There is an initial engineering overhead, but it\\u2019s possible to generate environments programmatically with different properties, resulting in different agent behaviours.\", \"also\": [\"On the Task-Novelty Bisector method of (Zhang et al., 2019): \\u2018the foundation of such joint optimization is not solid\\u2019. This is given without any explanation --- how is it not solid?\", \"In the TNB and WSR implementation, what metric is being used? Is it the same as is defined in Section 3?\", \"It would be nice to have some videos of the agents behavior to be able to more easily assess the learned policy diversity.\"], \"small_fixes\": \"\\u2018and similar results can be get\\u2019 -> and get similar results.\"}", "{\"comment\": \"Dear reviewers and general audience,\", \"please_download_the_demo_video_following_this_dropbox_link\": \"https://www.dropbox.com/s/rrs08dicidcim2l/ICLR20.mp4\", \"title\": \"Demo Video\"}" ] }
SyxGoJrtPr
SPROUT: Self-Progressing Robust Training
[ "Minhao Cheng", "Pin-Yu Chen", "Sijia Liu", "Shiyu Chang", "Cho-Jui Hsieh", "Payel Das" ]
Enhancing model robustness under new and even adversarial environments is a crucial milestone toward building trustworthy and reliable machine learning systems. Current robust training methods such as adversarial training explicitly specify an ``attack'' (e.g., $\ell_{\infty}$-norm bounded perturbation) to generate adversarial examples during model training in order to improve adversarial robustness. In this paper, we take a different perspective and propose a new framework SPROUT, self-progressing robust training. During model training, SPROUT progressively adjusts training label distribution via our proposed parametrized label smoothing technique, making training free of attack generation and more scalable. We also motivate SPROUT using a general formulation based on vicinity risk minimization, which includes many robust training methods as special cases. Compared with state-of-the-art adversarial training methods (PGD-$\ell_\infty$ and TRADES) under $\ell_{\infty}$-norm bounded attacks and various invariance tests, SPROUT consistently attains superior performance and is more scalable to large neural networks. Our results shed new light on scalable, effective and attack-independent robust training methods.
[ "robustness", "robust training", "trustworthy machine learning" ]
Reject
https://openreview.net/pdf?id=SyxGoJrtPr
https://openreview.net/forum?id=SyxGoJrtPr
ICLR.cc/2020/Conference
2020
{ "note_id": [ "dIMVu9eEDe", "rJlmCMu3sH", "SJgRZwmisS", "B1llENCFsB", "S1ltyVRKiS", "r1eLHmRYsS", "SklbGQCtjS", "rkgYpift9B", "HkeBBIr3KS", "ryxPtG13KB" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798735589, 1573843658737, 1573758725922, 1573671976462, 1573671905368, 1573671742326, 1573671689506, 1572576192654, 1571735101205, 1571709566606 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1907/Authors" ], [ "ICLR.cc/2020/Conference/Paper1907/AnonReviewer5" ], [ "ICLR.cc/2020/Conference/Paper1907/Authors" ], [ "ICLR.cc/2020/Conference/Paper1907/Authors" ], [ "ICLR.cc/2020/Conference/Paper1907/Authors" ], [ "ICLR.cc/2020/Conference/Paper1907/Authors" ], [ "ICLR.cc/2020/Conference/Paper1907/AnonReviewer5" ], [ "ICLR.cc/2020/Conference/Paper1907/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper1907/AnonReviewer1" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"This paper proposes a new training technique to produce a learned model robust against adversarial attacks -- without explicitly training on example attacked images. The core idea being that such a training scheme has the potential to reduce the cost in terms of training time for obtaining robustness, while also potentially increasing the clean performance. The method does so by proposing a version of label smoothing and doing two forms of data augmentations (gaussian noise and mixup).\\n\\nThe reviewers were mixed on this work. Two recommended weak reject while one recommended weak accept. All agreed that this work addressed an important problem and that the proposed solution was interesting. The authors and reviewers actively engaged in a discussion, in some cases with multiple back and forths. 
The main concern of the reviewers is the inconclusive experimental evidence. Though the authors did demonstrate strong performance on PGD attacks, the reviewers had concerns about some attack settings like epsilon and how that may unfairly disadvantage the baselines. In addition, the results on CW presented a different story than the results with PGD. \\n\\nTherefore, we do not recommend this work for acceptance in its current form. The work offers strong preliminary evidence of a potential solution to provide robustness without direct adversarial training, but more analysis and explanation of when each component of their proposed solution should increase robustness is needed.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Thanks for the constructive questions and here are our responses.\", \"comment\": \"Response to extra questions/comments:\\n\\nWe also thank the reviewer for your responsiveness and efforts for reviewing our submission. We did find your comments very helpful in further strengthening our research findings and in improving the presentation of this paper. We have managed to perform all the extra comments the reviewer suggested. The point-to-point response is as follows:\\n\\na) Following your suggestion, we have included two more experiments in Appendix A.6 of the revised version: (1) 100-step PGD-Linfinity attack with 10 random restarts on the CW loss and epsilon=0.03; (2) 100-step PGD-Linfinity attack with 10 random restarts on the cross-entropy and epsilon=0.03. We find that SPROUT could still achieve (1) 51.23% accuracy with 100-step PGD-Linfinity attack with 10 random restarts on the CW loss and (2) 61.18% accuracy with 10 random restart on cross-entropy loss. As the reviewer pointed out, since many papers have already identified the drop in robustness accuracy with more attack iterations or random starts, this trend will also be observed in other robust training methods such as Adv train and TRADES. 
For example, in Madry\u2019s Lab CIFAR-10 challenge leaderboard (https://github.com/MadryLab/cifar10_challenge), it is reported that 20-step PGD-Linfinity attack on the cross-entropy loss with 10 random restarts reduces the robust accuracy of Adv train to 45.21% (while SPROUT attains 61%). In the interest of the rebuttal deadline, we will report the full attack results of other methods in the next revision.\\n\\nb) There are several reasons that we believe can explain why the current ImageNet results of SPROUT are not as substantial as the CIFAR-10 results. (1) First of all, due to limited rebuttal time and computation resources, we did not optimize the hyper-parameters (e.g. \\\\alpha) of SPROUT for ResNet-50. Instead, we deploy the default settings of ResNet-152 (Table 3) for SPROUT training. We believe the robust accuracy of SPROUT can be improved with careful hyperparameter optimization. (2) Second, as the reviewer pointed out, the number of classes in ImageNet is indeed more than that of CIFAR-10. Nonetheless, in terms of training robust models, our results show that CIFAR-10 still has large room for improvement. Moreover, many of the ImageNet class labels are semantically very similar (e.g., different dog species). Therefore, the \\\\beta parameters alone in the Dirichlet distribution may not have sufficient expressive power to characterize the differences among semantically similar class labels in ImageNet. We expect that by incorporating more complex label smoothing functions, such as hierarchical Dirichlet distribution or Bayesian Dirichlet distribution, this question will be better understood. We also note that this extension still fits into the VRM framework of SPROUT and is a future direction that we will be exploring.\\n\\nc) Yes. 
We also note that the trend of robust accuracy is similar to Figure 2, where on ResNet SPROUT\u2019s robust accuracy can be slightly worse than other methods when the epsilon value is small, while SPROUT becomes much more robust than others once the epsilon value passes a threshold. The observed threshold varies by network architecture and attack method. For example, in the case of PGD-Linfinity attack on VGG the threshold is 0.01, and in the case of CW-Linfinity attack on ResNet the threshold is 0.03. We also note that the robust training of SPROUT operates in a self-progressing manner, so, unlike adversarial training methods, we did not specify a perturbation threshold (nor an attack) to train a robust model.\\n\\nd) Following your suggestion, we have included the robust accuracy of uniform label smoothing+Gaussian augmentation+Mixup in Figure 5 (with legend name GA+Mixup+LS). We find that SPROUT significantly outperforms GA+Mixup+LS (e.g. when epsilon = 0.03 our robust accuracy is higher by at least 15%), implying the importance and effectiveness of Dirichlet label smoothing.\"}", "{\"title\": \"Thanks and some more questions\", \"comment\": \"Thank you very much for adding these extra experiments -- they were a lot and I really appreciate your responsiveness. I hope that you also find that these new experiments could be helpful for evaluating the method, which seems to work okay (doesn't seem to be more robust than TRADES and ADV training on CIFAR when attacking the CW loss but is much faster than them) for CIFAR-10 but not the best for ImageNet.\\n\\nAfter carefully reading your rebuttal and reviewing the revision, I have some other extra questions/comments. I understand that the rebuttal time is limited so I am going to prioritize them and have the more important ones at the top:\\n\\na) According to A.6, the random restart experiments are done using a 20 step PGD attack on the cross-entropy loss. 
Can you please do 2 more extra experiments evaluating the robustness against 8/255 l-infinity attacks by doing 100-step PGD with 10 random restarts on the CW loss and also 100-step PGD with 10 random restarts on the cross-entropy? It seems like the CW loss is a better objective for attacking the proposed smoothing method.\\n\\nb) According to A.7, the adversarially trained ImageNet model is generally more robust. Do you have any thoughts on why this might be the case? On CIFAR-10 the results are very good but from A.7 the method doesn't seem to generalize to ImageNet with 1000 classes. Might this be because of the number of classes?\\n\\nc) Are the adversarially trained and TRADES models trained to resist eps=8/255? If yes, according to A.5, they are better for that eps (and also smaller perturbations).\\n\\nd) For Fig.5, I meant adding label smoothing with all the other elements of SPROUT without Dirichlet (I apologize for the ambiguity -- this won't really affect my review given the limited rebuttal time left). However, I do think it has value and it would be great if it is added in future versions.\"}", "{\"title\": \"Response to Reviewer #5 (2/2)\", \"comment\": \"6. We thank the reviewer for bringing the paper \\u201cAdversarial Training for Free!\\u201d (Free Adv train) to our attention, which is a recently accepted paper at NeurIPS\\u201919. We agree that it can be used as a good baseline for performance comparison, as it features similar robust accuracy to adversarial training with greatly reduced training time. In the revised version, we have included two sets of experiments as follows. (1) On CIFAR-10, we train the robust wide resnet 28 models using the default settings in the authors\\u2019 github. The performance comparison is added to Figure 2 and Table 5 of the revised version. In terms of robust accuracy, Free Adv train indeed has similar performance to adversarial training. 
We also note that since Table 5 reports the 10-epoch run-time of each method, the advantage of Free Adv train over adversarial training may not be apparent, which we have emphasized in Section 4.5. (2) On ImageNet, we used the pre-trained robust ResNet-50 model shared by the authors and compared its robust accuracy in Appendix A.7 of the revised version.\\n\\n7. We corrected an image plotting bug for producing Figure 4(a) and have updated it in the revised version. The image used for loss landscape visualization is data sample #2233 in the test set. \\n\\n8. Following the reviewer\\u2019s suggestion, in the revised version we have included the run-time analysis of \\u201cAdversarial Training for Free!\\u201d in Table 5 and Section 4.5.\\n\\n9. Following the reviewer\\u2019s suggestion, in the revised version we have included the performance of uniform label smoothing in Figure 5.\\n\\n\\nWe hope our responses addressed the reviewer\\u2019s concerns. We also would like to make the most of the openreview platform and are happy to take any additional questions the reviewer may have during the author rebuttal phase.\"}", "{\"title\": \"Response to Reviewer #5 (1/2)\", \"comment\": \"We thank the reviewer for providing the review comments and suggestions. In the short rebuttal period, we have managed to include all the additional experiments suggested by the reviewer and updated the results in the revised version. Please find our point-by-point response as follows:\\n\\n\\n1. In SPROUT, \\\\beta corresponds to the parameter of the Dirichlet distribution, which controls the statistical properties of generated label distributions. Specifically, consider the case z=Dirichlet(\\\\beta). As described in equation (7), the mean of the s-th generated label value in z is proportional to the s-th entry of \\\\beta divided by the total sum of the \\\\beta entries. 
In other words, z=Dirichlet(\\\\beta) generates a label distribution on the probability simplex, and the mean of z is \\\\beta normalized by the sum of the \\\\beta entries. Therefore, in SPROUT we do not need to constrain the value of \\\\beta, as the mean of the Dirichlet distribution will be properly normalized. Moreover, due to the normalization effect of the Dirichlet distribution, putting an additional constraint on \\\\beta can be made equivalent to a particular \\\\alpha value while keeping \\\\beta unconstrained. \\n\\n2. The batch size for ImageNet is 256. As described in Algorithm 1, when updating \\\\beta we used the conventional stochastic optimization approach with the batch gradient. While it is possible that some classes are not sampled in a batch, similar to learning the model weights \\\\theta, in the long run \\\\beta can still be optimized properly based on stochastic optimization. Regarding the reviewer\\u2019s suggestion of using random \\\\beta values, it is unclear to us what random functions should be used for a fair and meaningful comparison, given that random \\\\beta values are not aiming to maximize the training loss during the iterations of the model weight optimization process. Nonetheless, in the ablation study (Figure 5), we have shown that Dirichlet label smoothing (i.e., stochastic gradient ascent on \\\\beta) significantly outperformed uniform label smoothing (i.e., fixed and uniform \\\\beta values) in robust accuracy, which signifies the importance and effectiveness of stochastic optimization on \\\\beta.\\n\\n3. Following the reviewer\\u2019s suggestion, we have included Appendix A.6 in the revised version, where we set the number of random starts from 1 to 10 and report the robust accuracy. Although there are some small performance variations, SPROUT can still achieve over 61% robust accuracy under PGD-Linfinity attack with an epsilon=0.03 constraint, which clearly outperforms other methods.\\n\\n4. 
Following the reviewer\u2019s suggestion, we have included the results of CW-Linfinity attack in Appendix A.5 of the revised version. We find that the trend of robust accuracy is similar to that of PGD-Linfinity attack, where SPROUT shows a significant gain in robust accuracy for large epsilon values.\\n\\n5. We agree with the reviewer that label leaking is not the right motivation in our setup, and we are sorry for the confusion. As many ImageNet class labels carry similar semantic meanings (e.g., different dog species as class labels), on ImageNet we follow the same setup as the ICML\\u201918 paper \\u201cObfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples\\u201d to generate meaningful adversarial examples for robustness evaluation using PGD-$\\\\ell_\\\\infty$ attacks with randomly targeted labels. We have revised the descriptions in our paper accordingly.\"}", "{\"title\": \"Response to Reviewer #1\", \"comment\": \"We thank the reviewer for the precise summary of our work and for the keen observation on the complementary robustness of the three main ingredients. We totally agree with your comment that in addition to ablation studies, more detailed analysis is needed to explain their joint complementary robustness. In fact, in our original submission, we have already provided an explanation using the similarity analysis of the input gradients. Specifically, inspired by the diversity evaluation metric used in Kariyappa et al. for evaluating adversarial robustness of ensembles, in Appendix A.3 we have reported the pairwise cosine similarity of input gradients among the three ingredients in SPROUT (Dirichlet Label Smoothing, Gaussian Augmentation, and Mixup). We find that the cosine similarity between module pairs is indeed quite small (< 0.103), suggesting large diversity of these modules. 
We believe that this provides a strong implication: the diversified modules can provide complementary benefits to robustness improvement using our proposed co-training approach. In addition to diversity analysis, their complementary robustness can also be explained from each ingredient\u2019s unique contribution to model training. That is, Gaussian augmentation only perturbs the data samples, Dirichlet label smoothing only adjusts the training labels, and Mixup improves the generalization of the interpolated data samples based on the training data.\\n\\nWe hope our responses addressed the reviewer\u2019s concerns. We also would like to make the most of the openreview platform and are happy to take any additional questions the reviewer may have during the author rebuttal phase.\"}", "{\"title\": \"Response to Reviewer #2\", \"comment\": \"Response to AnonReviewer2:\\nWe thank the reviewer for acknowledging the contributions of our work. Please find our point-by-point response as follows:\\n\\n1. We simply use normal testing data at inference time. We don\u2019t make any changes to the testing data.\\n\\n2. Yes. We have conducted several experiments to examine obfuscated gradients. Specifically, following the methods suggested in the ICML\u201918 paper \u201cObfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples\u201d, we (1) vary the PGD attack iterations (Figure 6(a)); (2) report the robust accuracy with respect to different perturbation budgets (epsilon values); and (3) implement transfer attacks to test our model (Table 2). Our robustness gain is comprehensive and consistently better than other methods. In addition, in Figure 4 of Section 4.2, we have provided a visualization plot of the loss landscape with respect to the adversarial gradient direction and a random direction. On the hyperplane spanned by those two directions, our model achieves a much lower loss compared with both adversarial training and TRADES. 
These results suggest that our robust training method does not cause obfuscated gradients.\\n\\n3. Due to space limitations, in the original submission we have already provided some analysis about the learned label correlation from beta in Appendix A.2. In short, on CIFAR-10 we observed some clustering effect of class labels that are semantically close, and we also found the learned beta values are indeed not uniformly distributed.\\n\\nWe hope our responses addressed the reviewer\u2019s concerns. We also would like to make the most of the openreview platform and are happy to take any additional questions the reviewer may have during the author rebuttal phase.\"}", "{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #5\", \"review\": \"This work proposes training robust models without explicitly training on adversarial examples and by \\\"smoothing\\\" the labels in an adversarial fashion and by using Dirichlet label smoothing. Training robust models without adversarial training is indeed an important problem as mentioned by the authors since it can potentially (as the authors demonstrate) result in faster model training and a smaller drop in clean accuracy. Overall the idea is interesting but I have some concerns mainly about evaluations and baselines which I am including below. If the authors can address my concerns, I am willing to increase my score:\\n\\n1. Based on equations (9) and (10), if we set \\\\alpha to be large, then the network is not trainable (since the worst-case adversary will increase the loss on the image by flipping the label during training). As a consequence, we can see that the values of the hyper-parameters that the authors use are indeed very small (0.01 and 0.1). Even between these small values, the smaller value results in a better model. 
In the extreme case where \\\\alpha is zero, there is no regularization and \\\\beta becomes irrelevant. This illustrates that the performance of the model is very sensitive to \\\\alpha. On the other hand, we can prevent the model from not learning anything by constraining \\\\beta in equation (10) \\u2014 similar to adversarial training where we constrain \\\\delta. It seems that without constraining \\\\beta, if the step-size for \\\\beta is large, \\\\beta can grow and completely mess up the labels even when \\\\alpha is tiny. If we constrain \\\\beta on the other hand, we can make sure that the top label for any augmented image is never an incorrect label. Can the authors elaborate on why they did not set any limit on the value of \\\\beta?\\n2. What is the batch-size used for ImageNet? The reason that I am asking is that you compute the gradient of \\\\beta for the previous mini-batch but use it for the next mini-batch. Is it possible that the previous mini-batch's \\\\beta is not accurate for the current mini-batch? For CIFAR, since the number of classes is 10, I would assume that you can update the statistics for the classes (\\\\betas) using the previous mini-batch since you always see examples from all the classes using any reasonable batch-size. What happens if you do the same but for a dataset with more classes but have the mini-batch be smaller than the number of classes? In this case, your \\\\beta is getting updated only using information from a few classes and not all classes at once. In that case, what happens if you just use a random \\\\beta every step? \\n3. For the white-box attacks, I also have a few questions. Do you use multiple random restarts? It is known that random restarts can be more effective than increasing the number of PGD steps. See for example the leaderboards for the MNIST and CIFAR-10 challenges by Madry. 
I would like to see a table where you plot how the accuracy changes by doing 100-step PGD attacks and by increasing the number of random restarts from 1 to 10 for example.\\n4. Do you do L-infinity CW attacks? I see that you have done L-2 CW attacks but I can't find any L-infinity CW attacks. It would be great to show numbers for that and also compare it with TRADES and PGD adversarial training. In previous smoothing methods, the L-infinity CW attack seems to be a stronger attack compared to PGD.\\n5. For the ImageNet task, the authors state that the evaluation of non-targeted attacks can result in label leaking. Label-leaking happens when one trains on adversarial examples built using a single-step attack and it means that the accuracy on adversarial examples is higher than that on natural examples at test-time. For this, I do not understand why the authors mention that they only evaluate targeted attacks while they are not doing any adversarial training.\\n6. Also, for ImageNet, there are recent methods such as Adversarial training for Free! where the authors do adversarial training on ImageNet with no overhead cost compared to natural training. Maybe this could be added as a better base-line than a naturally trained model.\\n7. In Figure 4(a), why is the loss for the validation image illustrated so high? What image is this from the validation set?\\n8. In terms of scalability, it's good to mention new scalable methods such as YOPO and Adversarial Training for Free.\\n9. In the ablation study, including Dirichlet Smoothing indeed results in a huge boost compared to having no smoothing. However, it would be better to show that Dirichlet smoothing is indeed better than label-smoothing or adversarial smoothing by including results for other smoothing methods in Fig. 
5.\"}", "{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposes a novel training method to build robust models. A new framework SPROUT is introduced to adjust label distribution during training. It also integrates mixup and Gaussian augmentation to further improve the robustness. The proposed method is built upon the Vicinity Risk Minimization (VRM) framework. Experiments show that the proposed method significantly outperforms the existing best methods in terms of robustness against attacks.\\n\\nOverall, this paper proposes a novel method with good robustness performance. The proposed approach is built upon the VRM framework, and summarizes a lot of existing methods under this framework (Table 1). Experimental results are also very strong to prove the effectiveness of the proposed method. \\n\\nOn the other hand, I have some concerns about this paper. Since the performance improvement is significantly large over the current best methods, I need to see those concerns addressed to give a final rating.\\n\\n1. How do you perform inference given testing data? Do you use Gaussian augmentation or mixup during inference?\\n\\n2. Do you check that whether the robustness comes from obfuscated gradients? It's very important to examine the true robustness of the propose method.\\n\\n3. What's the final distribution of \\\\beta? Does it have a semantic meaning?\"}", "{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #1\", \"review\": \"The authors proposed a hybrid method for defending against adversarial attacks called SPROUT. 
The proposed defense method consists of three main ingredients:\n\n1. label smoothing with a learnable Dirichlet distribution\n2. adding Gaussian noise to input examples\n3. mixup: augment training examples with their linear combinations \n\nThe authors' main argument for their method is its speed over adversarial training and its effectiveness. \n\nIndividually, none of these ingredients are known to be strong defenses against adversarial examples in the literature. Indeed, this is corroborated by Figure 5, where the individual defenses do not have more than 10% accuracy under PGD100 attacks for epsilon=0.4. However, when all three are used together, the accuracy jumps to close to 60%. This is very surprising. Another surprising fact is that in Figure 2, the method beats the benchmark adversarial PGD by more than 20% on white-box attacks, given the difficulty of beating adversarial PGD. \n\nGiven the surprise in these experimental results, I believe the authors should perform a more detailed analysis on how these ingredients of their SPROUT defense interact to produce such a strong predictor, in addition to doing ablation studies. An attempt should be made to explain why they work so well together when they are quite weak individually as defenses. It is difficult for me to recommend acceptance of this paper without an attempt to explain why it works.\"}" ] }
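For readers unfamiliar with the three ingredients debated in these reviews, each can be sketched in a few lines of numpy. This is a hypothetical illustration (parameter values invented, not taken from the paper under review):

```python
import numpy as np

rng = np.random.default_rng(0)

def dirichlet_smooth(y_onehot, beta, alpha=0.9):
    """Mix one-hot labels with a Dirichlet(beta) sample; rows stay on the simplex."""
    noise = rng.dirichlet(beta, size=len(y_onehot))
    return alpha * y_onehot + (1 - alpha) * noise

def gaussian_augment(x, sigma=0.1):
    """Add isotropic Gaussian noise to the inputs."""
    return x + rng.normal(0.0, sigma, size=x.shape)

def mixup(x, y, a=1.0):
    """Convex-combine random pairs of examples and their (soft) labels."""
    lam = rng.beta(a, a)
    perm = rng.permutation(len(x))
    return lam * x + (1 - lam) * x[perm], lam * y + (1 - lam) * y[perm]
```

Note that with alpha close to 1, the argmax of each smoothed label is preserved, which echoes the first reviewer's suggestion of constraining the label perturbation so the top label never flips.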
Hyxfs1SYwH
Alleviating Privacy Attacks via Causal Learning
[ "Shruti Tople", "Amit Sharma", "Aditya Nori" ]
Machine learning models, especially deep neural networks, have been shown to reveal membership information of inputs in the training data. Such membership inference attacks are a serious privacy concern; for example, patients providing medical records to build a model that detects HIV would not want their identity to be leaked. Further, we show that the attack accuracy amplifies when the model is used to predict samples that come from a different distribution than the training set, which is often the case in real-world applications. Therefore, we propose the use of causal learning approaches where a model learns the causal relationship between the input features and the outcome. Causal models are known to be invariant to the training distribution and hence generalize well to shifts between samples from the same distribution and across different distributions. First, we prove that models learned using causal structure provide stronger differential privacy guarantees than associational models under reasonable assumptions. Next, we show that causal models trained on sufficiently large samples are robust to membership inference attacks across different distributions of datasets and those trained on smaller sample sizes always have lower attack accuracy than corresponding associational models. Finally, we confirm our theoretical claims with experimental evaluation on 4 datasets with moderately complex Bayesian networks. We observe that neural network-based associational models exhibit up to 80% attack accuracy under different test distributions and sample sizes whereas causal models exhibit attack accuracy close to a random guess. Our results confirm the value of the generalizability of causal models in reducing susceptibility to privacy attacks.
[ "Causal learning", "Membership Inference Attacks", "Differential Privacy" ]
Reject
https://openreview.net/pdf?id=Hyxfs1SYwH
https://openreview.net/forum?id=Hyxfs1SYwH
ICLR.cc/2020/Conference
2020
{ "note_id": [ "4DDFyn0A5m", "BklhfeIKjr", "HyxvmkIFjH", "SygxVTHYiH", "rJxRKNwujS", "rJlBIGPOjS", "S1xL2WPdjS", "SJxQuPxuir", "SyeCQLgdjr", "SyePtPLvjB", "HkgBXMUDoS", "B1eaRRSPoS", "SklDkRrDir", "HJlDQaBvjB", "BJxTSqHDjH", "ByxkWp-WjS", "ByxFctTrcS", "r1lAKWKcKH" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798735560, 1573638163873, 1573637918660, 1573637416181, 1573577862255, 1573577292969, 1573577133821, 1573549930621, 1573549605562, 1573508990711, 1573507613239, 1573506773343, 1573506526895, 1573506335144, 1573505605445, 1573096694712, 1572358545121, 1571619205781 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1906/Authors" ], [ "ICLR.cc/2020/Conference/Paper1906/Authors" ], [ "ICLR.cc/2020/Conference/Paper1906/Authors" ], [ "ICLR.cc/2020/Conference/Paper1906/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper1906/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper1906/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper1906/Authors" ], [ "ICLR.cc/2020/Conference/Paper1906/Authors" ], [ "ICLR.cc/2020/Conference/Paper1906/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper1906/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper1906/Authors" ], [ "ICLR.cc/2020/Conference/Paper1906/Authors" ], [ "ICLR.cc/2020/Conference/Paper1906/Authors" ], [ "ICLR.cc/2020/Conference/Paper1906/Authors" ], [ "ICLR.cc/2020/Conference/Paper1906/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper1906/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper1906/AnonReviewer3" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"Maintaining the privacy of membership information 
contained within the data used to train machine learning models is paramount across many application domains. Moreover, this risk can be more acute when the model is used to make predictions using out-of-sample data. This paper applies a causal learning framework to mitigate this problem, motivated by the fact that causal models can be invariant to the training distribution and therefore potentially more resistant to certain privacy attacks. Both theoretical and empirical results are provided in support of this application of causal modeling.\n\nOverall, during the rebuttal period there was no strong support for this paper, and one reviewer in particular mentioned lingering unresolved yet non-trivial concerns. For example, to avoid counter-examples raised by the reviewer, a deterministic labeling function must be introduced, which trivializes the distribution p(Y|X) and leads to a problematic training and testing scenario from a practical standpoint. Similarly, the theoretical treatment involving Markov blankets was deemed confusing and/or misleading even after careful inspection of all author response details. At the very least, this suggests that another round of review is required to clarify these issues before publication, and hence the decision to reject at this time.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Response: Generating process for a distribution may be different from the causal process\", \"comment\": \"The short answer is that the generative mechanism for a particular data distribution is not always the causal mechanism. Based on our answer on the need for the definition of a causal Markov Blanket, we really need to find the causal mechanism to construct a causal MB. In the causal inference literature, this phenomenon is known as selection bias: it arises when the data-generating mechanism has components that do not correspond to the causal mechanism. 
Since this is a fundamental question, let us respond by first clarifying the formal definitions, and then continuing with the colored MNIST example.\", \"causal_mechanism\": \"A causal mechanism can be formalized through a causal graph, where each edge $A \\rightarrow B$ has a specific interventional interpretation: changing A will change B. Formally, a causal mechanism specifies Pr(B|do(A)).\n\nIn general, the same causal mechanism can be present in multiple data distributions P(A, B) and thus $P(B|A) \\neq Pr(B|do(A))$ for every distribution P. \n\n\nIn our continuing MNIST example, let us assume A=digit and B=color. If you only observe data from the $P$ distribution, then you can correctly detect $P(Color|Digit)$ as the data-generating process. You may also be tempted to declare that Digit causes Color; however, there is incomplete evidence to determine the causal mechanism as defined above. To determine a causal relationship, you have to ask whether changing Digit (A) will change Color (B).\n\nIn general, given only data from $P$, it is impossible to determine whether this $A \\rightarrow B$ relationship is causal (without making other assumptions outside of the data). Outside of doing an actual intervention or experiment, our paper builds on recent work suggesting that we can use the invariance property of causal relationships---a causal relationship should be invariant across many different data distributions. In our continuing example, we found a $P^*$ where Color is no longer associated with Digit, and thus observing data from both $P$ and $P^*$ confirms that Digit->Color is not invariant, and thus not a causal relationship. Therefore, it should not be included in the causal Markov Blanket (but it is included in the associational Markov Blanket for P; see our response on associational versus causal Markov Blankets above). 
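The invariance test described in this exchange can be illustrated numerically. Below is a hypothetical simulation (variable names and mechanism invented for illustration) in which Shape determines Digit deterministically, while Color is associated with Digit only in P through a selection effect:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample(n, spurious):
    shape = rng.integers(0, 2, n)        # causal feature
    digit = shape                        # deterministic mechanism: Digit = f(Shape)
    if spurious:
        color = digit.copy()             # selection effect in P: Color tracks Digit
    else:
        color = rng.integers(0, 2, n)    # P*: Color carries no information
    return shape, color, digit

shape_p, color_p, digit_p = sample(10_000, spurious=True)   # train-like P
shape_q, color_q, digit_q = sample(10_000, spurious=False)  # shifted P*

# The shape-based (causal) predictor is invariant across P and P* ...
acc_shape_p = (shape_p == digit_p).mean()
acc_shape_q = (shape_q == digit_q).mean()
# ... while the color-based (associational) predictor collapses under P*.
acc_color_p = (color_p == digit_p).mean()
acc_color_q = (color_q == digit_q).mean()
```

Both predictors attain zero error under P, so a loss-minimizing learner given only P-data cannot tell them apart; only the shape-based one survives the shift to P*.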
\n\nMore generally, it is always possible to have a generating mechanism for a data distribution that is not a causal mechanism, and thus does not correspond to the causal Markov Blanket. Thus, in our paper (and Theorem 1), we assume the existence of a pre-specified causal graph based on outside domain knowledge.\"}", "{\"title\": \"Response: Causal versus associational Markov Blankets\", \"comment\": \"A Markov Blanket is an observational notion only under two strong assumptions: the Causal Markov Condition and Faithfulness. Pellet et al. provide a good description of these conditions and define a \\\"perfect map\\\" (Definition 6). This is also mentioned in the reference you shared, Flairs03-073.pdf.\n\nThe Causal Markov Condition states that if two variables are d-separated (independent) in the causal graph, then they should also be independent in the data distribution.\nThe faithfulness property states that the observed data distribution contains no independences between variables that do not follow from d-separation on the underlying causal graph. \n\nTogether, they imply that a causal graph is a \\\"perfect map\\\" of a data distribution: (in)dependence in the graph corresponds to (in)dependence in the data distribution. In practice, however, these assumptions can be violated, and often are. Our paper addresses the setting where these assumptions can be violated.\n\nWhen the perfect map property is violated, the Markov Blanket cannot be uniquely determined from a data distribution. Therefore, we introduced two kinds of Markov Blankets: \n1. the \\\"associational\\\" MB, which is determined from a data distribution assuming a perfect map, and\n2. the \\\"causal\\\" MB, which cannot be determined from the data distribution alone and is derived from domain knowledge. \n\nFor instance, consider our continuing example of colored MNIST. Using a dataset from P, we will claim that {Shape, Color} is the MB for Digit. 
Using a dataset from $P^*$ that does not have the association between Color and Digit, we will claim that {Shape} is the MB. \n\nNow both cannot be correct. If the true causal graph is Shape->Digit->Color, then $P^*$ violates the faithfulness assumption and thus the Markov Blanket estimated from $P^*$ is incorrect. If the true causal graph is instead Shape->Digit, then $P$ violates the Causal Markov Assumption and thus the Markov Blanket from $P$ is incorrect.\n\nTo reduce the resulting confusion in nomenclature, we say that the MB derived from the observed data distribution is an \\\"associational\\\" MB and can vary across distributions, while the MB derived from a causal graph is the \\\"causal\\\" MB and stays invariant across distributions. \n\nThanks again for these questions. To summarize, we need a causal notion when a single data distribution cannot uniquely identify the Markov Blanket. These distinctions will help us justify our nomenclature in the revised paper. \n\nPellet et al. Using Markov Blankets for Causal Structure Learning. JMLR 2008.\"}", "{\"title\": \"Response: Our specification assumes a deterministic f that ensures same causal models between P and P*\", \"comment\": \"As we mentioned in our reply above, we assume a deterministic f. Thus the variances $\\sigma_1(x_c)$ and $\\sigma_2(x_c)$ will be zero. Then in your example of a Gaussian $P(Y|X_C)$, we would obtain $h_{c,P}^{OPT}=h_{c,P^*}^{OPT}$. Now we agree that even for an associational model, one of the optimal solutions will be $h_{a,P}^{OPT}=f$, but depending on the data distribution, there can be other equally optimal solutions that are not $f$, and the associational learning algorithm will have no way to distinguish between them (that is, it may pick the incorrect one even with infinite data). 
We make this exact point in our Theorem 1 proof (page 12, paragraph: \\\"Associational Models\\\").\\n\\nThat said, we do acknowledge that Theorem 1 does not include the more general case where the true labelling function $f$ is not deterministic. \\nWhen $f$ is non-deterministic, the result depends on a number of factors, including the loss function, and the relative scale of changes in P(X) (covariate shift) and the changes in P(Y|X) (concept drift). For instance, your example provides a good illustration of the issues with the choice of loss-function (squared loss works, but not l1). Similarly, the relative amounts of covariate shift and concept drift matter. For example, if $P^*(X) \\\\approx P(X)$ but $P^*(Y|X)$ and $P(Y|X)$ vary by a lot, then a causal model will have lower error. But if $P^*(Y|X) \\\\approx P(Y|X)$, $P^*(X)$ and P(X) vary by a lot, then it is not clear. (note that all of the above statements are for $X$, not $X_C$). \\n\\nOverall, however, we thought that $f$ being deterministic is a reasonable assumption to make, following the domain adaptation literature which also assumes a deterministic $f$. Further, if one assumes that the variance in the true function $f$ is small enough or negligible, then all of the claims of Thm 1 should follow in practice. Thus, we chose the setting of a deterministic $f$ to simplify the proof and present the main conceptual argument. More generally, we believe that rather than a fundamental problem with our formulation, Theorem 1 provides fruitful ground for future work on generalizing to non-deterministic $f$, and the associated trade-offs between covariate shift and concept drift, and on sensitivity to choice of a loss function. \\n\\nThank you for engaging in this discussion. 
For completeness, we will include the specific definition of f in Theorem 1 and highlight the issues when $f$ is non-deterministic in our paper.\"}", "{\"title\": \"About Markov Blanket - what is the difference between associational Markov Blanket versus Causal Markov Blanket?\", \"comment\": \"\\\"We consider a \\\"causal\\\" Markov Blanket that is derived from the causal graph, not the \\\"associational\\\" Markov Blanket that is typically derived from a particular data distribution. \\\"\n\nCan you please provide a reference for this? What's the difference between an associational Markov Blanket and a Causal Markov Blanket?\n\nThe Markov Blanket is the smallest subset of variables observed conditioned on which the rest of the variables become independent - it is a \\\"purely\\\" observational notion.\nIn a Bayesian network (causal or otherwise) it is the parents, children and co-parents.\n\nNow if you moralize the Causal Bayesian Network into an undirected graph, all neighbors of Y in the moralized undirected model will be in the Markov Blanket - this again can be determined by just CI tests - a purely observational notion. Just because in the Causal Bayesian Network the parents, children and co-parents form the Markov Blanket, it does not mean it is a function of the causal graph - it is a purely observational quantity - see reference https://pdfs.semanticscholar.org/53a7/28fcf178f418a4fc3297f8ab0f04e12c5df7.pdf. Please see Definition 1 - what is causal about that definition? It is a purely observational notion.\n\nWhy do you need the causal graph to define the Markov Blanket? 
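For concreteness, the graph-theoretic recipe both sides refer to (parents, children, and co-parents) is mechanical to compute from a DAG; a minimal sketch on the toy graph from this thread:

```python
def markov_blanket(parents, node):
    """Parents, children, and co-parents of `node` in a DAG given as a
    dict mapping each node to the set of its parents."""
    children = {c for c, ps in parents.items() if node in ps}
    coparents = {p for c in children for p in parents[c]} - {node}
    return parents.get(node, set()) | children | coparents

# Toy DAG from this thread: Shape -> Digit -> Color
dag = {"Shape": set(), "Digit": {"Shape"}, "Color": {"Digit"}}
```

Here `markov_blanket(dag, "Digit")` returns `{"Shape", "Color"}`, matching the reviewer's point that the child (Color) is in the blanket; whether that blanket can also be recovered from a shifted data distribution is exactly what the faithfulness and Causal Markov assumptions decide.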
There is an algorithm called IAMB (a Markov Blanket discovery algorithm) which works ONLY on observational data to find the Markov Blanket - again, see reference https://pdfs.semanticscholar.org/53a7/28fcf178f418a4fc3297f8ab0f04e12c5df7.pdf.\n\nAnother reference discussing the Markov Blanket (please see the definition in the Background section) - https://www.aaai.org/Papers/FLAIRS/2003/Flairs03-073.pdf.\"}", "{\"title\": \"I disagree with the characterization\", \"comment\": \"You said, for the train set, $<=5$ (or $\\geq 5$) is given the same color (or maybe even noisily given a color with heavy bias).\n\nThen Color is generated by looking at the digit and adding noise.\n\nWhy is the causal graph limited to Shape->digit?\n\nYour generating mechanism is Shape -> digit -> color.\n\nSo the Markov blanket has the color?\"}", "{\"title\": \"This specification of f needs to be in Theorem 1 - still it gives rise to more problems. Learnt causal models might be different between $P$ and $P*$\", \"comment\": \"I would suggest the authors then to specify the definition of f in Theorem 1. It seems to be a crucial detail.\n\nSuppose Y was Gaussian with mean $\\mu(X_c)$ where $\\mu$ is a linear mean function, $\\mu(x_c) = \\theta^T x_c$, while the variance is some $\\Sigma(X_c)$. 
Then clearly your $f(x_c)= \\theta^T x_c$ (by your definition).\n\n Suppose P(Y|X_C) is sampled from a Gaussian with the same mean $\\mu(\\cdot)$ but different variance functions $\\Sigma_1(x_c)$ and $\\Sigma_2(x_c)$ for two different parts of the domain, $D_1$ and $D_2$, and you have one mixture of the domains for P and another mixture for P*.\nBy the way, $P(Y|X_C)$ is invariant here over the entire domain.\n\nSuppose the loss function was something *other than* squared loss (say just $\\ell_1$, for example) and you train with this loss on the labels in the dataset; then even if you include all linear functions (which indeed capture your labeling function $f$), the invariance will again not hold.\n\nSo still $h_{c,P}^{OPT} \\neq h_{c,P*}^{OPT}$!\n\nI think this formulation may have fundamental problems.\n\nNow if you define $L$ with respect to $h$ and $f$, it seems like the label $y$ in the dataset is never used (since both $h$ and $f$ depend only on the point). So in what sense is it a good evaluation?\n\nLet us assume, as the authors claim, that $f \\in {\\cal H}_C \\subset {\\cal H}$. $L(f,f)=0$ I assume. So won't the trivial solution in every case be $h^{OPT}=f$?\nSince $h^{OPT}$ is defined as $\\argmin_{h} L(h,f)$ (as the authors define in Appendix I)\"}", "{\"title\": \"Response for MNIST color example\", \"comment\": \"Thank you for your comment. We provide our explanation below.\", \"on_mnist_example\": \"We are distinguishing between the \\\"causal\\\" Markov Blanket that is derived from a structural causal graph, and the associational Markov Blanket that is derived from a probability distribution (or Bayesian network).\n\nIn this example, here are the three graphs and corresponding Markov Blankets: \n1. Causal Graph: Shape -> digit (shape causes digit. MB = {Shape})\n\n2. 
Associational Network, Train: Shape-> digit <--> Color (digit is probabilistically associated with shape and color, Associational MB = {Shape, Color})\n\n3. Associational Network, Test: Shape-> digit (no association of digit with color, Associational MB = {Shape})\n\nThus, the causal graph and the causal MB remain the same across train and test distributions, satisfying the assumption in Thm 1. However, train and test do exhibit different probabilistic associations---we observe a correlation between Digit and Color in the train dataset. This need not be causal---it could simply be due to selection effects while collecting the train dataset (e.g. for class 1, other colors are not sampled from a full master dataset with all colors). \n\nUnder this setup, assuming that a model built using the Color feature was equally predictive during training, we posit that even an optimal associational model may choose a function based on Color. Under the causal graph, we would still claim that shape causes the digit, and color happens to be correlated with the digit in a specific dataset.\"}", "{\"title\": \"Response to concerns of Point 2 on f\", \"comment\": \"On definition of $f$: You raise a great point about $f$'s connection to P(Y|X)--thanks for this. For $f$, we are primarily following the structural causal model literature (Pearl 2009) that defines the value of a node variable in a causal graph as a function of its parents. That is, $y=f(y_{parents}) + \\epsilon$ where $\\epsilon$ is independent noise. Here, $f(y_{parents})$ can be considered as the expected value of P*(Y|Y_{parents}). Thus,\n\n$$ f(y_{parents}) = E[Y|y_{parents}] $$\n\nIn this paper, we simplify the structural causal model to remove any independent noise term, thus following the domain adaptation literature (e.g., Mansour et al.) that assumes a deterministic $f$. Hence, we say $y = f(x_c)$ (and so, $f(x_c) = E[Y|x_c]$ with zero variance). 
Specifically, we also consider $x_c$ as the full Markov Blanket in addition to parents, as our goal is prediction only. We believe this setup will be relevant for classification problems where it is reasonable to assume a deterministic outcome given the inputs. That said, it should be possible to generalize to the case where $y=f(x_c) + \\epsilon$ in future work. \n\nMansour et al.: Domain Adaptation: Learning Bounds and Algorithms. Mansour, Mohri, Rostamizadeh (2009)\"}", "{\"title\": \"Regarding MNIST color example - Invariance wrt Markov Blanket is violated\", \"comment\": \"I agree that in practice a trained classifier would look for the easiest spurious correlation. I don't deny that. However, your paper is about optimal associational models without regard to any real-world sub-optimal algorithm for obtaining a model. In your example,\n\nShape -> digit -> color. Then for the digit (which is the label), according to the Markov Blanket definition, \nparents, children and parents of children are included. This means the Markov Blanket includes (shape and color).\n\nSo Shape and color both are in the Markov blanket. Due to no association between color and digit in the test dataset, P(Y |Markov Blanket) is not the same - in fact, the Markov Blanket itself is different between P and P*. Your example does not satisfy the assumptions of Theorem 1! Am I missing something?\"}", "{\"title\": \"Partial feedback to rebuttal points\", \"comment\": \"I appreciate the revision and the feedback to address my concerns. I still have issues with point number 1. I will come back to it shortly. Let me ask a quicker question about your answer on point 2.\n\n2 - \\\"We have now defined $f$ in the Theorem 1 statement. We agree that we can define the loss wrt y as $L(h(x),y)$, but adding $f$ provides some conceptual ease during the proof. We can argue that $f$ remains invariant across the two distributions, and that the causal model learns the $f$ successfully. \\\"\n\nI am looking at Theorem 1. 
\" It says Let $f:X_C \\rightarrow Y$ be the resultant invariant labeling function such that $y=f \\left( X_C \\right)$. \" Can you mathematically say what it is in terms of $P(Y|X_C)$? That's what I was looking for. The reason I am asking is the following - usually in supervised learning, $Y$ is labeled according to $P(Y|X)$, meaning that the label is actually randomly drawn from the conditional. Now, the MAP estimate is usually (depending on the loss function $L$) $\\argmax_y P(y|x)$ - this is what we use to label a test dataset. That does not imply that another dataset would be generated using a deterministic labeling function.\"}", "{\"title\": \"Response to Review 3\", \"comment\": \"> 1. How likely is it to have errors in learning causal structure using a different dataset?\n\nLearning the correct causal graph from a single dataset is still an open problem, so we think it is likely that there will be errors using a different dataset. The Bnlearn library provides a few functions for learning causal structure from a given dataset. We tested a simple hill-climbing algorithm and observed it to return the true causal parents for each of the output variables in our benchmark datasets. Hence, existing methods might be useful in learning causal structures. In addition, several recent research works propose techniques to learn causal models without having to learn the correct causal graph [Arjovsky et al., Ke et al.]. Our work provides a theoretical basis for these techniques to alleviate membership inference attacks. \n[Arjovsky et al.] -- Invariant Risk Minimization\n[Ke et al.] -- Learning Neural Causal Models From Unknown Interventions\n\n> 2. How long/how much effort does it take to figure out the conditional probability table? \n\nGiven the structure of the causal graph, it is easy to compute the conditional probability table. 
In fact, since each variable's probability distribution is conditioned only on its parents and is independent of others' conditional probability distributions, we can find the correct parameters for each one independently using simple maximum likelihood. The dataset itself does not contain the causal structure, but the data distribution may have been derived from the structure of the causal graph.\n\n> 3. Your experimental results suggest that the causal model can learn on smaller amounts of data than the DNN. Does this scale for even larger input parameter datasets?\n\nWe haven't evaluated on larger datasets. But in principle, since a causal model uses fewer features than an associational model like a DNN, it should learn better with a smaller number of samples. \n\n> 4. Do you believe that your results prove causal models will scale to datasets that contain these kinds of complex causal structures?\n\nWe provided the HIV example as a motivating example, and showed a theoretical basis on why it is important to make an effort to learn causal models. However, our work does not focus on the empirical learning of a causal model. While there are some recent works (see our references to Arjovsky et al. and Ke et al.), we believe the field needs to do more work before we can think of implementing causal models in complicated causality settings like health. \n\n> 5. Could you provide the layer architectures of all three models used for your experiments? \n\nThe associational model is a multilayer perceptron (MLP) with 3 hidden layers of 128, 512 and 128 nodes, respectively. The learning rate is set to 0.0001 and the model is trained for 10000 steps. The attacker model has 2 hidden layers with 5 nodes each, a learning rate of 0.001, and is trained for 5000 steps. Both models use the Adam optimizer, ReLU for the activation function, and cross entropy as the loss function. 
We chose these parameters to ensure model convergence.\n For the causal models, we use the maximum likelihood estimation method to learn the probability table of the model for the output variable using the known causal structure.\n\n> 6. How would the causal model perform compared to state-of-the-art techniques for these datasets, in both accuracy and attack protection? \n\nSince we know the data-generating process for these datasets, the best possible accuracy on these datasets would be for a model that uses the true causal structure and the true conditional probability values. We implemented that method and found that the accuracies and attack performance are almost identical to the causal model we present in the paper. \n\n> 7. I'm not sure that the causal model definitely outperforms DNNs in all cases.\n\nIf we understand correctly, the concern on outperforming is with respect to the privacy guarantees. Our experimental results show that for models with a higher number of features, associational models tend to overfit on the distribution while causal models learn to predict only using the causal features. Specifically, Figure 3b shows that the attack accuracy for DNNs on the water dataset (high number of features) is the highest as compared to all the other datasets, while the attack accuracy for the causal model is close to a random guess. Overall, since causal models generalize well across distributions, membership inference attacks are harder to perform on these models as compared to associational models. That said, when trying to learn the causal structure from data (rather than using known causal structure), we did face difficulty with the Water dataset due to its extreme probabilities, but we emphasize that our main contribution is to show the connection between privacy and causal learning, both theoretically and empirically. 
We point the reader to related work that presents state-of-the-art methods on learning causal structure.\"}", "{\"title\": \"Response to Review 1\", \"comment\": \"> 1. The main concern of this paper is the results are only confirmed on synthetic data, where all the 4 datasets are generated from known Bayesian networks (i.e., causal graphs).\n\nOur main claim is that \\\"true causal models have stronger differential privacy guarantees as compared to associational models\\\". The key contribution of the paper is to show the privacy benefits of causal learning and motivate the adoption of these techniques for sensitive applications. For this, we want to empirically show that these models are robust to membership inference attacks as compared to associational models. Hence, we aim to evaluate on models where the causal structure is known a priori. Therefore, we chose to use the Bayesian networks and alter their probabilities to create distributions from different domains. \n\n\t> 2. More explanations about \u2018invariance\u2019 is needed. \n\nThe statement on invariance in the paper assumes a true/ideal causal model, where we assume that the model class for training is expressive enough to capture P(Y|X_C) and that the dataset size is large enough to prevent any estimation errors. Under these conditions, as long as $P(Y|X_C) = P*(Y|X_C)$ remains invariant across any two distributions P and P*, the error on a particular input, say $x_i$, is the same under P and P*. This notion of same error is captured by the \\\"invariant\\\" statement. Note that this may not be true for an associational model, since it may have captured associations P(Y|X) that are not stable across distributions $P(Y|X)!= P*(Y|X)$. We have clarified this in the revised version of the abstract. 
For completeness, we also mention that causal models are not invariant to covariate shifts in the features (changes in the distribution of X or X_C), and thus the overall error on P* can be different from P.\\n\\n\\t> 3. Causal models have similar performance (except Alarm data) with DNN models on test 2. The attack accuracy are no different between causal models and DNN on test 1.\\n\\t\\nAs you rightly point out for Figures 2a and 3a, causal models have similar performance to DNNs on the test data. This is likely because the associational DNN models were also invariant in this case, i.e., they had similar error on train and test across distributions. Finally, to clarify, test 1 has the same distribution as the train dataset, so we expect to see 50% (random guess) attack accuracy for both causal and DNN models. The real benefit of the invariance property is that the attack accuracy continues to be 50% even when the test distribution is changed. \\n \\n\\n\\t> 4. Why only parents of Y are included in causal models in the experiments?\\n\\nThis was just for convenience--wherever possible, we chose outcomes that had no descendants in the causal graph, so that the Markov Blanket includes only parents of Y. Further, the prediction function in the bnlearn library uses only parents and hence was the primary choice for implementation.\"}", "{\"title\": \"Response to Review 2\", \"comment\": \">1. Why is $h_{c,P}^{OPT} = h_{c,P*}^{OPT}$ ? This is claimed for any loss function L (not just Cross Entropy Loss) and for a generic hypothesis class $H_C$ (that depends only on Markov Blanket).\\n \\nWe acknowledge that the proof will not work if the hypothesis class does not include the true $P(Y|X_C)$ (or equivalently, the true function $f$). Therefore, we have updated the proof to assume that the hypothesis class includes f. 
Even under this condition, there can be multiple associational models depending on the specific distribution P, and thus it is possible that $h_{a,P}^{OPT} != h_{a,P*}^{OPT} $. \\nAs an example, consider a colored MNIST data distribution P where the classification task\\nis to detect whether a digit is greater than 5, and where all digits above 5 are colored with the same color. Then, under a suitably expressive class of models, the loss-minimizing associational model may use only the color feature to obtain zero error, while the loss-minimizing causal model will still use the shape (causal) features to obtain zero error. On any new P\\u2217 that does not follow the same correlation of digits with color, we expect that the loss-minimizing associational model will now be different, probably one that uses the shape features. \\nSpecifically, while we agree that the causal model will be one of the loss-minimizing associational models in P, it will not be the only one, and in general, associational models will not be able to distinguish between them if both minimize the loss equally. Thus, the best associational model over P can be different from the best associational model over P*. We have correspondingly updated the proof for Theorem 1 in the paper. \\nOn the choice of loss functions, we do have some restrictions on the loss function---we assume a symmetric loss function that follows the triangle inequality. We appreciate your point on the issue of general loss functions---can you provide some justification on why they may not work?\\n\\n>2. a) What is the max operator over in equation 5? Similar issue occurs in Lemma 1.\\n\\nWe wanted to convey the maximum over all x and x' such that x is part of the training dataset ($x \\\\in S$) and $x'$ is outside the training set but still follows the same causal labelling function (i.e., $y'=f(x_c')$). We have clarified this in the revised statement for Corollary 1. 
We have updated the statement of Lemma 1 to remove the notion of sampling. The proof remains the same. \\n\\n> c) In theorem 2, ${\\\\cal F}_a$ is an algorithm. Does it mean you add noise to the model parameters ?\\n\\nYes, the noise is added to the model parameters obtained as an output of the learning algorithm. Sorry for the confusion. We have clarified it in the paper.\", \"minor_issues\": \"> 1. https://arxiv.org/pdf/1710.05899.pdf explores causality and privacy. \\n\\nThanks for the reference. We have edited our statement to emphasize the scope of our paper in light of the provided reference. We now say, \\\"the connection of the effect of causal learning to privacy is yet unexplored.\\\" \\n\\n> 2. Why is the ground truth function f:X->Y (Section 2.2.) relevant when you have distribution P (X,Y) and P*(X,Y) ?\\nWe have now defined $f$ in the Theorem 1 statement. We agree that we can define the loss wrt y as $L(h(x), y)$, but adding $f$ provides some conceptual ease during the proof. We can argue that $f$ remains invariant across the two distributions, and that the causal model learns $f$ successfully. \\n\\n> 3. Markov Blanket is not causal. If the features referred to least causal Parents - then still it would be consistent with the invariance in the Invariant Causal Prediction Literature (Peters et al 2016.) and would be causal. \\n\\nWe consider a \\\"causal\\\" Markov Blanket that is derived from the causal graph, not the \\\"associational\\\" Markov Blanket that is typically derived from a particular data distribution. To clarify, the structural causal graph remains the same across two distributions P and P*, and our definition of the Markov Blanket is based on this causal graph. \\nEach of the two distributions, P and P*, will have their own conditional probabilities, and thus different associational Bayesian networks and corresponding Markov Blankets. 
For example, in the colored MNIST example above, the color of the digit will also be included in the associational Markov Blanket in P, but is not a part of the causal Markov Blanket. Thus, to the extent that the Markov Blanket is derived from a causal graph, we consider it causal. More generally, our goal is prediction, not causal inference. That is, we are interested in constructing a model using stable relationships between X and Y that generalize well. In some cases, that stable relationship may be between Y and its child (e.g., a disease and its symptom). In that case, we believe it is okay to construct a \\\"causal\\\" predictive model using the child of Y (e.g., using the symptom to predict the disease), as long as we are not including correlational features like Age or Income.\"}", "{\"title\": \"General Comment & Updates to paper\", \"comment\": \"We thank the reviewers for their feedback. We provide individual responses to each of the reviewers. We outline our main contributions again as follows.\\n\\n\\t1. Our goal in the paper is not to learn a causal structure, but rather to evaluate the predictive accuracy of models as the feature distribution changes; specifically, we mean domain shift in $P(X)$. \\n\\t2. We want to demonstrate the privacy guarantees that causal learning provides by empirically demonstrating their robustness to membership inference attacks.\\n\\t3. Our aim is to make the community aware of the privacy benefits of causal learning and consider the importance of using causal features for predictions in sensitive applications.\", \"we_summarize_the_changes_made_in_the_updated_version_of_the_paper\": \"1. We updated the statement and proof of Theorem 1 to clarify the hypothesis class \\n\\t2. We updated the statements for Corollary 1 and Lemma 1 to clarify the max operator \\n\\t3. We clarified our claim of connecting causal learning and privacy for membership attacks\\n\\t4. We have clarified the invariance of the causal model in the abstract\\n 5. 
We have clarified the addition of noise to the trained model parameters for making them differentially-private.\"}", "{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"Summary:\\n\\n The authors consider a transfer learning problem where the source distribution is P(X,Y) while the target distribution is P*(X,Y), and the classifier is trained on data from the source distribution. They also assume that the causal graph generating the data (X and Y) is identical while the conditional probabilities (mechanisms) could change between the source and the target. Further, they assume that if X_C is the Markov Blanket for variable Y in P and P*, then P(Y|X_C) = P*(Y|X_C). Therefore the best predictor in terms of cross entropy loss for both distributions is identical if it focuses on the variables in the Markov Blanket. Authors define \\\"causal hypothesis\\\" as the one that uses only variables in the Markov Blanket (X_C) to predict Y.\\n\\n In this setting, the authors show three sets of results: a) Out of Distribution Generalization Error is less for the optimal causal predictors from any class of causal hypotheses under any loss function. b) If we tweak the definition of differential privacy where neighboring datasets are defined by replacing one of the variables in the training set by a sample from the test distribution (I have lots of questions about this definition later on), then the authors show that causal classifiers that depend only on the Markov Blanket have lower sensitivity to the change than that of associative classifiers (that use all features). 
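For readers unfamiliar with the mechanism debated later in this review (point c), "adding noise" typically means output perturbation: Laplace noise calibrated to the classifier's sensitivity is added to the learned parameters. A generic illustrative sketch, not the paper's exact construction:

```python
import numpy as np

def privatize_params(params, sensitivity, epsilon, rng=None):
    """Laplace output perturbation: noise scale = sensitivity / epsilon.
    Lower sensitivity (as claimed here for causal classifiers) means less
    noise is required for the same privacy budget epsilon."""
    if rng is None:
        rng = np.random.default_rng()
    scale = sensitivity / epsilon
    return params + rng.laplace(0.0, scale, size=params.shape)

w = np.array([0.5, -1.2, 2.0])
w_low_sens = privatize_params(w, sensitivity=0.1, epsilon=1.0)   # small noise
w_high_sens = privatize_params(w, sensitivity=1.0, epsilon=1.0)  # 10x noise scale
```

The tighter-guarantee claim amounts to: for the same epsilon, the causal model's lower sensitivity buys either less noise (better utility) or, for the same noise, a smaller epsilon (stronger privacy).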
This is used to prove that the causal ones have tighter differential privacy guarantees than the associative ones.\\n c) Using the differential privacy results, they also show that optimal causal classifiers are more resistant to membership attacks.\\n\\n\\nThe authors demonstrate the results through membership attack accuracies on causally trained and associative models on 4 datasets where the causal graph and the parameters (conditional probabilities) of the Bayesian Network are known a priori.\", \"major_issues\": \"I have lots of issues with the theory in the paper. That's the main reason for my recommendation.\\n\\n 1. Why is h_{c,P}^{OPT} = h_{c,P*}^{OPT} ?? (Equation 23 in Page 11) - Authors say that since the markov blanket is the only thing used to predict Y for causal predictors and P (Y|X_C) = P *(Y|X_C), this should be true. But I have problems with this statement/argument. First of all, this is claimed for any loss function L (not just Cross Entropy Loss) and particularly for a generic hypothesis class H_C (that depends only on Markov Blanket). \\n\\nConsider the following counter example - Suppose all the features are in the Markov Blanket of Y (even a simpler case where all features are causal parents of Y). Suppose the true P (Y|X) is a logistic model with weights w_1 on one part of the domain D_1 and with weights w_2 for another part of the domain D_2. Suppose P is a mixture distribution of D_1 and D_2. P* is another mixture distribution (mixed differently) of D_1 and D_2.\\n\\nSuppose I consider the class of logistic classifiers (but a single logistic model with one weight w) to be my hypothesis class and I use the standard logistic loss (logistic regression) on P. Since a single logistic model with one slope cannot match the different slopes in different parts of the domain, it will result in some weight vector w^{opt}_1. 
Now, since the mixtures of D_1 and D_2 are different in the P* (but P (Y|X ) is identical), the optimal w^{opt}_2 for the P* will be different. \\n\\nSo for arbitrary hypothesis classes (that do not capture the true P(Y|X_c)) and for a non cross entropy loss - clearly this does not hold at all !! Covariate shifts amongst X_C alone will create a different classifier for an arbitrary loss (even if P (Y|X_C) is the same across both). In fact, the only way I see to salvage this is to assume Cross Entropy loss and talk about all soft classifiers (without restrictions to hypothesis class). But even if that's the case, then the best associational model will be the one that uses the Markov Blanket too!\\n\\nThis claim about h_C is crucially used in the proof of Theorem 1 (Page 11) and the proof of Corollary 1 (Page 13). This is fundamental to all theorems later. That brings into question the validity of many (if not all) of the theoretical results in the paper. Authors must address this. \\n\\n2. Issues regarding definition of certain quantities. \\n\\na) In equation 5 a quantity max_{x,x'} L_{x sampled from P} (h_c,f) - L{x' sampled from P*} (h_c,f) is defined - the inner quantity is a random quantity that depends on the samples x and x'. Then what is the max operator over ?? - What does it mean to have worst case over samples from a distribution ??? Does it mean samples from two different domains ?? \\nEven the quantity does not seem to be well defined. \\n\\nb) Similar issue occurs in Lemma 1 - Neighboring datasets S and S' are created by first sampling S from P and then S' is obtained by replacing an arbitrary point in S by a random point from P*. Then sensitivity is defined as a max over pairs of neighboring datasets - again S and S' are random samples, so what is the max over ?? If it is the worst case - why is the sampling coming in there ? 
Since it uses Corollary 1 - the main result inherits the same fundamental issues that have been pointed out above.\\n\\nc) In theorem 2, {cal F}_a is an algorithm. What does it mean to add noise ? - Does it mean you add noise to the model parameters ?? - This is confusing at best.\", \"minor_issues\": \"1. Authors claim that the connection between causality and privacy has not been explored (Page 2). Pls refer to https://arxiv.org/pdf/1710.05899.pdf where differential privacy itself is related to interventional effects in a system. This connection is very different from the scope of the current paper. However, the statement by the authors is strictly not true.\\n\\n2. Why is the ground truth function f:X->Y (Section 2.2.) relevant when clearly you have distribution P (X,Y) and P*(X,Y) ?? We might as well define Loss with L (h(x), y) where (x,y) is drawn according to P. What f is, is never defined anywhere. Authors seem to mean the suggestion I just made in the paper. Authors could clarify. This confuses stuff in the proofs too.\\n\\n3. Markov Blanket is not causal by any means in my opinion. It is just a minimal set of features conditioned on which Y does not depend on anything else. This only requires conditional independence tests to determine - a purely observational notion - in fact the markov blanket only depends on the moralized graph which does not change across the members of the equivalence class. So calling it causal is a bit confusing. If the features referred to least causal Parents - then still it would be consistent with the invariance in the Invariant Causal Prediction Literature (Peters et al 2016.) 
and would be causal.\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper proposes using causal learning models for alleviating privacy attacks, i.e. membership inference attacks. The paper proves that causal models trained on sufficiently large samples are robust to membership inference attacks; they confirm the theories with experiments on 4 synthetic datasets.\\nThe paper is well written; the theoretical proof seems correct, as it combines the proof of differential privacy guarantees in Papernot et al. 2017 and robustness to membership attacks in Yeom et al. 2018 with the generalization property of causal models from Pearl 2009 and Peters et al. 2017. Results are presented clearly. The paper is novel as the authors claimed they provide the first analysis of privacy benefits of causal models.\\nThe main concern of this paper is the results are only confirmed on synthetic data, where all the 4 datasets are generated from known Bayesian networks (i.e., causal graphs). It doesn\\u2019t matter if these Bayesian nets are complex or not, because most of the experiments are done with the known true causal models except the last experiment in Figure 3c. Even with learnt causal models, they were learning a Bayesian net from overly optimistic data that were indeed generated from Bayesian nets, but this is usually not true for real world data. 
So evaluations on real datasets, or other synthetic data that are not generated from Bayesian nets, are necessary for validating the methods.\\nAnother question is about the \\u2018causal models are known to be invariant to the training distribution and hence generalize well to shifts between samples from the same distribution and across different distributions.\\u2019 More explanations about \\u2018invariance\\u2019 is needed. For example, in Figure 2a and Figure 3a, causal models have similar performance (except Alarm data) with DNN models on test 2, where test samples are generated from different distributions than training samples. Also in Figure 3b, the attack accuracy are no different between causal models and DNN on test 1.\\nThe last minor question is why only parents of Y are included in causal models in the experiments, but not the Markov blanket as stated earlier in Figure 1.\"}", "{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"Overview: This paper discusses the risk of membership inference attacks that deep neural networks might face when used in a practical manner on real world datasets. Membership inference attacks can result in privacy breaches, a significant concern for many fields that might stand to benefit from using deep learning in applications. The authors demonstrate how attack accuracy goes up when one dataset is used for training while another altogether is used for testing. They propose the use of causal learning approaches in order to negate the risk of membership inference attacks. 
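The attack discussed here follows the loss-threshold recipe of Yeom et al. 2018: guess "member" when an example's loss is below a threshold. A synthetic illustration (the loss distributions are made up purely to show the effect of overfitting vs. invariance):

```python
import numpy as np

def attack_accuracy(member_losses, nonmember_losses, threshold):
    """Balanced accuracy of the loss-threshold membership inference
    attack: predict 'member' iff the per-example loss < threshold."""
    tp = np.mean(member_losses < threshold)       # members caught
    tn = np.mean(nonmember_losses >= threshold)   # non-members rejected
    return 0.5 * (tp + tn)

rng = np.random.default_rng(0)
# Overfitted model: training losses are much smaller than test losses,
# so the attacker can separate members from non-members.
acc_overfit = attack_accuracy(rng.exponential(0.1, 5000),
                              rng.exponential(1.0, 5000), threshold=0.3)
# Invariant (e.g., causal) model: train and test losses look alike,
# so the attack degrades to a near-random guess (~0.5).
acc_invariant = attack_accuracy(rng.exponential(0.5, 5000),
                                rng.exponential(0.5, 5000), threshold=0.3)
```

The train/test loss gap is exactly what grows when training and deployment distributions differ, which is why attack accuracy goes up in that setting.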
Causal models can handle distribution shifts across datasets because they learn using a causal structure.\", \"contributions\": \"In the theory part of the paper, the authors provide several proofs demonstrating that causal models have stronger differential privacy guarantees than association models, that causal models trained on large samples are able to protect the dataset against attacks, and that causal models trained on smaller samples still have higher protection than association models trained on similarly sized samples. In addition to theoretical contributions, the authors also provide an experimental evaluation using 4 accepted experimental datasets.\", \"questions_and_comments\": \"\", \"page_2\": \"\\u201c...while association modes exhibit upto 80%...\\u201d -> \\u201c...up to...\\u201d\\nMy expertise is not in causal learning or structures, so I have a few questions about using it in practice. You mentioned that the datasets used in the experimental section were used in order to avoid errors in learning causal structure. \\nHow likely is it to have these errors using a different dataset? \\nHow long/how much effort does it take to figure out the conditional probability table? Is this a significant amount of time compared to training? Is it automatic or manually done by humans?\\nIf it is done by humans, is it plausible to assume that every dataset implicitly contains a causal structure (not including random walks)?\\nYour experimental results suggest that the causal model can learn on smaller amounts of data than the DNN. Does this scale for even larger input parameter datasets as well, such as Water?\\nYou mention this potentially being used to prevent attacks on real-world applications, such as HIV patient prediction/classification systems. Do you believe that your results prove causal models will scale to datasets that contain these kinds of complex causal structures? 
\\nCould you provide the layer architectures of all three models used for your experiments? Are these out-of-the-box solutions from libraries, or something more custom-built?\\n\\nHow would the causal model perform compared to state of the art techniques for these datasets, in both accuracy and attack protection? I understand that isn't the main point of this paper, this is me being curious.\\n\\nI give this paper a borderline acceptance, based upon the fact that the above questions need to be addressed. I'm not sure it's clear how the experimental results demonstrate that the causal model definitely outperforms DNNs in all cases. I would like to hear the authors' defense of the method when it comes to datasets with higher numbers of features, specifically the water dataset.\"}" ] }
H1gZsJBYwH
Hybrid Weight Representation: A Quantization Method Represented with Ternary and Sparse-Large Weights
[ "Jinbae Park", "Sung-Ho Bae" ]
Previous ternarizations such as the trained ternary quantization (TTQ), which quantized weights to three values (e.g., {−Wn, 0, +Wp}), achieved small model sizes and an efficient inference process. However, the extreme limit on the number of quantization steps causes some degradation in accuracy. To solve this problem, we propose a hybrid weight representation (HWR) method which produces a network consisting of two types of weights, i.e., ternary weights (TW) and sparse-large weights (SLW). The TW is similar to the TTQ’s and requires three states to be stored in memory with 2 bits. We utilize the one remaining state to indicate the SLW, which is very rare and greater than TW. In HWR, we represent TW with values while SLW with indices of values. By encoding SLW, the networks can preserve their model size while improving their accuracy. To fully utilize HWR, we also introduce a centralized quantization (CQ) process with a weighted ridge (WR) regularizer. They aim to reduce the entropy of weight distributions by centralizing weights toward ternary values. Our comprehensive experiments show that HWR outperforms the state-of-the-art compressed models in terms of the trade-off between model size and accuracy. Our proposed representation increased the AlexNet performance on CIFAR-100 by 4.15% with only a 1.13% increase in model size.
[ "quantized neural networks", "centralized quantization", "hybrid weight representation", "weighted ridge", "ternary weight" ]
Reject
https://openreview.net/pdf?id=H1gZsJBYwH
https://openreview.net/forum?id=H1gZsJBYwH
ICLR.cc/2020/Conference
2020
{ "note_id": [ "t-pXqA-qMI", "Byei_Y9liB", "B1lF3WYliS", "SJe_ApUejr", "SJlv2o_Jqr", "Byxk-x1AFr", "B1g3htmatr" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798735530, 1573067122734, 1573061041214, 1573051855863, 1571945391453, 1571839991194, 1571793331678 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1905/Authors" ], [ "ICLR.cc/2020/Conference/Paper1905/Authors" ], [ "ICLR.cc/2020/Conference/Paper1905/Authors" ], [ "ICLR.cc/2020/Conference/Paper1905/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper1905/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper1905/AnonReviewer3" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"The paper proposes a hybrid weight representation method in deep networks. The authors propose to utilize the extra state in 2-bit ternary representation to encode large weight values. The idea is simple and straightforward. The main concern is on the experimental results. The use of mixed bit width for neural network quantization is not new, but the authors only compare with the basic quantization method in the original submission. In the revised version of the paper, the proposed method performs significantly worse than recent quantization methods such as PACT and QIL. Moreover, the writing can be improved, and parts of the paper need to be clarified.\", \"title\": \"Paper Decision\"}", "{\"title\": \"First Replies\", \"comment\": \"Thank you for giving us a chance to improve our paper. We also agree with your concerns and hope our replies would work for you.\\n\\n1) The experiments in the paper only compare with the basic quantization method which makes the comparison not fair enough.\\n>> This is a really great question. Actually, we concentrated on the effect of centralized quantization using sparse-large weights. 
As you mentioned, there are many state-of-the-art quantization methods. To minimize some unexpected or different effects of using state-of-the-art quantization methods such as PACT[1] or QIL[2] when evaluating the centralized quantization, we selected a simple quantization method, which is the Basic Quantization (Sec 3.1). Our future work is to apply the centralized quantization to other state-of-the-art quantization methods. As you mentioned, the quantization methods of PACT[1] and QIL[2] are better than Basic Quantization (Sec 3.1). Reflecting your advice, we will add other state-of-the-art results. Can you recommend a proper section to add the contents of state-of-the-art quantization methods? i) related work, ii) Sec 3.1 (Basic quantization), iii) experimental results (like ABC-net[3]), or iv) multiple choice.\\n\\n2) There are no experiments or any theoretical support for the proposed weight ridge method.\\n>> We think that our presentation was unclear. Table 1 and Sec 4.1 show the ablation study investigating the effect of the weighted ridge in full-precision models.\\nFollowing your suggestion, we will separate the main experimental results (table 3, 4) and ablation studies (table 1, 2).\\n\\n3) The comparison with a quantized neural network of 2 bits should be given in the experiments also.\\n>> This is a considerable point. In my understanding, you suggest that experiments with a quantized neural network of 2 bits should be given. Actually, we do not consider the quantized neural network of 2 bits because we cannot keep the benefit of ternary weights. Ternary weights do not require multiply operations at inference time, while 2-bit weights require multiply operations. And there is a potential to keep this benefit when using our hybrid representation method by applying a variant of sparse convolution operations[4]. We will add more details about the inference process to the appendices.\\n\\nWe'll update your addressed points. 
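The TW/SLW scheme discussed in the answers above can be sketched roughly as follows (the ternarization threshold and the "very large" SLW cutoff are illustrative assumptions, not the paper's exact procedure):

```python
import numpy as np

def hybrid_quantize(w, delta, wn, wp):
    """Weights inside the ternary range become {-wn, 0, +wp} (three of the
    four 2-bit states); the rare weights far outside it are kept aside as
    sparse-large weights (SLW), stored as (index, value) pairs and flagged
    by the fourth 2-bit state."""
    q = np.zeros_like(w)
    q[w > delta] = wp
    q[w < -delta] = -wn
    slw_mask = np.abs(w) > 2.0 * max(wn, wp)   # assumed SLW cutoff
    slw = [(int(i), float(w[i])) for i in np.flatnonzero(slw_mask)]
    q[slw_mask] = w[slw_mask]                  # decoded at inference time
    return q, slw

w = np.array([0.03, -0.4, 0.5, 3.2, -0.01])
q, slw = hybrid_quantize(w, delta=0.05, wn=0.4, wp=0.5)
# q -> [0.0, -0.4, 0.5, 3.2, 0.0]; slw -> [(3, 3.2)]
```

Because the SLW list is very sparse, the vast majority of positions stay multiply-free ternary values, which is the benefit the answer above argues a plain 2-bit quantizer would lose.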
If you want, we can remind you after revising by adding a comment.\\n\\nIf you have more ambiguous terms or any questions, please point those out so we can revise our paper.\\n\\nBest Regards.\\n\\n[1] Choi, Jungwook, et al. \\\"Pact: Parameterized clipping activation for quantized neural networks.\\\" arXiv preprint arXiv:1805.06085 (2018).\\n[2] Jung, Sangil, et al. \\\"Learning to quantize deep networks by optimizing quantization intervals with task loss.\\\" Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2019.\\n[3] Lin, Xiaofan, Cong Zhao, and Wei Pan. \\\"Towards accurate binary convolutional neural network.\\\" Advances in Neural Information Processing Systems. 2017.\\n[4] Park, Jongsoo, et al. \\\"Faster cnns with direct sparse convolutions and guided pruning.\\\" arXiv preprint arXiv:1608.01409 (2016).\"}", "{\"title\": \"First Replies\", \"comment\": \"Thanks to your advice, we can improve the unclear terms. We also agree with your concerns and hope our replies would work for you.\\n\\n1) What does RELU1 in the first paragraph mean?\\n>> As you point out, there is not enough explanation of ReLU1 in Sec 3.1. ReLU1 is an activation function used instead of the ReLU activation for quantizing activated values in Basic Quantization (Sec 3.1).\\n\\n2) How are the activations for TTQ in section 4.2 quantized?\\n>> Actually, the original TTQ paper only quantized weights, not activated values. To compare our method and TTQ at a similar bit-precision, we applied the same activation quantization method as written in the Basic Quantization (Sec 3.1). Taking your advice, we will add more explanations about our TTQ implementation and the results reported in the original TTQ paper.\\n\\n3) How does the proposed method perform when compared with TTQ on ImageNet?\\n>> As above, we will add more TTQ results on ImageNet.\\n\\n4) More comparison with other state-of-the-art methods\\n>> This is a really great question. 
Actually, we concentrated on the effect of centralized quantization using sparse-large weights. As you mentioned, there are many state-of-the-art quantization methods. To minimize some unexpected or different effects of using state-of-the-art quantization methods such as PACT[1] or QIL[2] when evaluating the centralized quantization, we selected a simple quantization method, which is the Basic Quantization (Sec 3.1). Our future work is to apply the centralized quantization to other state-of-the-art quantization methods.\\n\\nActually, we tried to put all the contents in 10 pages. This left not enough room for explanations. Reflecting your advice, we will move some sections to appendices and add more detailed comments. And we will add some citations of state-of-the-art quantization methods. Can you recommend a proper section to add the contents of state-of-the-art quantization methods? i) related work, ii) Sec 3.1 (Basic quantization), iii) experimental results (like ABC-net[3]), or iv) multiple choice.\\n\\n5) Does the proposed quantization method cause an extra burden on memory access and inference time?\\n>> That is correct. Using more states in quantization causes an extra burden. Specifically, SLW needs a decoding process at inference time. To minimize the computational cost of additional bits, we sparsified large weights. In sparse matrix multiplication, multiply operations of convolution layers are skipped when the value of weights is zero. Likewise in our method, almost all multiply operations of convolution layers can retain the advantages of ternary weights. We will also add more details about inference to the appendices.\\n\\nWe'll update your addressed points. If you want, we can remind you after revising by adding a comment.\\n\\nIf you have more ambiguous terms or any questions, please point those out so we can revise our paper.\\n\\nBest Regards.\\n\\n[1] Choi, Jungwook, et al. 
\\\"Pact: Parameterized clipping activation for quantized neural networks.\\\" arXiv preprint arXiv:1805.06085 (2018).\\n[2] Jung, Sangil, et al. \\\"Learning to quantize deep networks by optimizing quantization intervals with task loss.\\\" Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2019.\\n[3] Lin, Xiaofan, Cong Zhao, and Wei Pan. \\\"Towards accurate binary convolutional neural network.\\\" Advances in Neural Information Processing Systems. 2017.\"}", "{\"title\": \"First Replies\", \"comment\": \"Thanks for your advice!\\n\\nWe also agree that our writing is more focused on audiences who have some experience in quantization. Actually, we tried to put all the contents in 10 pages. This left not enough room for explaining the background of quantization. We are going to move some sections (about Sec 3.5 and table 2) to appendices and give a fuller account of the background. After revising, we can remind you by adding a comment if you want.\\n\\nIf you have any questions about other details, please write comments.\\n\\nBest Regards.\"}", "{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": [\"This paper is about quantization, and how to represent values as a finite number of states in a low bit width, using discretization. Particularly, they propose an approach to tackle the problems associated with previous ternarizations, which quantize weights to three values. Their approach is a hybrid weight representation method, which uses a network to output two weight types: ternary weights and sparse-large weights. For the ternary weight, they need 3 states to be stored with 2 bits. The one remaining state is used to indicate the sparse-large weight. 
They also propose an approach to centralize the weights towards ternary values. Their experiments show that their approach outperforms other compressed modeling approaches, and shows an increase in AlexNet performance on CIFAR-100 while increasing model size by only 1.13%.\", \"Overall, this is an interesting paper, offering a novel solution to tackle the degradation in accuracy occurring in ternary quantization techniques because of the number of quantization steps. Their method seems technically sound; however, I am not familiar with this area, so I would put more trust in the opinions of other reviewers who are subject experts in the matter.\", \"Their results on AlexNet and ResNet do show an improvement in terms of model accuracy, with only a slight increase in model size. They have also provided extensive experiments studying the benefits of their quantization method and the tradeoff between accuracy and model size.\", \"I find the abbreviations, which are used very often, confusing\", \"The paper is targeted at a very focused audience and does not give enough background for readers not familiar with \\\"ternarizations\\\". Even the abstract could benefit from a motivation/problem-statement sentence, as well as from fewer abbreviations.\", \"I would vote for acceptance of this paper, although it does seem too narrowly targeted at a specific audience. I would urge the authors to revise the writing to make it more broadly accessible.\"]}
The authors also propose to use a weighted ridge regularizer, which contains a \\\"part of L1\\\" term, to make the weights with large values sparse.\\n\\nThe idea is simple and straightforward. However, the paper is not written very well, with some typos and some unclearly defined terms. For instance, in the basic quantization method in Section 3.1: what does RELU1 in the first paragraph mean? \\n\\nThe clarity of the experiment section can also be improved. How are the activations for TTQ in Section 4.2 quantized? The original TTQ paper also has results on ImageNet; how does the proposed method perform when compared with TTQ on ImageNet?\\n\\nOne major concern is that some popular recent quantization methods are not compared. For instance, [1] also quantized both weights and activations. Can the proposed method outperform it? More comparisons with these methods would better illustrate the efficacy of the proposed method. \\n\\nAnother concern is that, though the proposed method has an accuracy gain compared with the full-precision baseline and TTQ, the quantization becomes much more complex due to the usage of SLW. Does the proposed quantization method cause an extra burden on memory access and inference time?
Thus I keep my rating unchanged.\\n--------------------------------------------------------\"}", "{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"Summary:\", \"the_paper_proposes_a_hybrid_weights_representation_method_where_the_weights_of_the_neural_network_is_split_into_two_portions\": \"a major portion of ternary weights and a minor portion of weights that are represented with a different number of bits. The two portions of weights are differentiated by using the previously unused state of a typical ternary neural network, since only three of the four states given by a 2-bit representation are used. The experiments are solid, based on the selected baseline models on the CIFAR-100 and ImageNet datasets.\", \"pros\": \"\\u2022\\tThe idea of using the previously unused state in a ternary neural network is interesting\\n\\u2022\\tOverall, the paper is well written. The proposed method is presented clearly with proper graph illustrations.\", \"cons\": \"\\u2022\\tThe idea of using mixed bit widths for neural network quantization is not new. However, the experiments in the paper only compare with a basic quantization method, which makes the comparison not entirely fair. For example, in ABC-net[1], a few full-precision coefficients are used to binarize the network. With 3 bits for both weights and activations, it achieves 61% top-1 classification accuracy on the ImageNet dataset with ResNet-18 as the backbone model. This is around 3% higher than the paper\\u2019s proposed method with 2/4 bits for weights and 4 bits for activations. \\n\\u2022\\tThe paper claims that the proposed weight ridge method \\u201ccan obtain better accuracy than L2 weights decay\\u201d. 
However, there are no experiments or any theoretical support for it.\\n\\u2022\\tUtilizing the fourth state of a ternary neural network implies that all four states provided by the 2-bit representation are used. Hence, a comparison with a 2-bit quantized neural network should also be given in the experiments.\\n\\n[1] Lin, Xiaofan, Cong Zhao, and Wei Pan. \\\"Towards accurate binary convolutional neural network.\\\" Advances in Neural Information Processing Systems. 2017.\"}" ] }
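The basic ternary quantization debated in the record above can be illustrated with a minimal sketch. Note this is not the paper's exact procedure: the `thresh_ratio` hyperparameter and the mean-magnitude scale are illustrative assumptions.

```python
import numpy as np

def ternarize(w, thresh_ratio=0.05):
    """Quantize a weight tensor to the three states {-s, 0, +s}.

    Weights whose magnitude falls below thresh_ratio * max|w| are zeroed;
    the survivors are snapped to +/- their mean magnitude s. Both choices
    are assumptions for illustration, not taken from the paper.
    """
    t = thresh_ratio * np.abs(w).max()
    mask = np.abs(w) > t          # weights that stay non-zero
    s = np.abs(w[mask]).mean() if mask.any() else 0.0
    return np.sign(w) * mask * s  # exactly -s, 0, or +s per entry
```

The sparse-large-weight idea discussed in the rebuttal would then reserve the fourth 2-bit state as an escape code for the few weights stored at higher precision, so that the skipped-zero sparse multiplication still covers almost all convolution operations.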
Hyx-jyBFPr
Self-labelling via simultaneous clustering and representation learning
[ "Asano YM.", "Rupprecht C.", "Vedaldi A." ]
Combining clustering and representation learning is one of the most promising approaches for unsupervised learning of deep neural networks. However, doing so naively leads to ill posed learning problems with degenerate solutions. In this paper, we propose a novel and principled learning formulation that addresses these issues. The method is obtained by maximizing the information between labels and input data indices. We show that this criterion extends standard cross-entropy minimization to an optimal transport problem, which we solve efficiently for millions of input images and thousands of labels using a fast variant of the Sinkhorn-Knopp algorithm. The resulting method is able to self-label visual data so as to train highly competitive image representations without manual labels. Our method achieves state of the art representation learning performance for AlexNet and ResNet-50 on SVHN, CIFAR-10, CIFAR-100 and ImageNet and yields the first self-supervised AlexNet that outperforms the supervised Pascal VOC detection baseline.
[ "self-supervision", "feature representation learning", "clustering" ]
Accept (Spotlight)
https://openreview.net/pdf?id=Hyx-jyBFPr
https://openreview.net/forum?id=Hyx-jyBFPr
ICLR.cc/2020/Conference
2020
{ "note_id": [ "hFo5slIPXK", "m-VzsYePwhL", "Qpf8iek1jUq", "o9ZUt2i6sn", "_pBGmh67F9", "lRaciCLxks", "IYdzVaVzvL", "_PVfSvyYiB", "6TCjfPPnak", "Q0qpdigPU", "Hyxdbiu3iH", "BJxkFWAesH", "rJeZ_eAeir", "rJgW41CxsH", "ByeCQC6xoH", "SylFxBcV5S", "B1gMQL-2tB", "H1g4TD85Fr" ], "note_type": [ "official_comment", "official_comment", "comment", "comment", "official_comment", "official_comment", "comment", "comment", "comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1594118385610, 1594118284576, 1593955387131, 1593952313440, 1582735457441, 1582735142767, 1580671790183, 1579727167137, 1578444498364, 1576798735499, 1573845760431, 1573081462743, 1573081192914, 1573080872888, 1573080613950, 1572279537143, 1571718682472, 1571608508020 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Paper1904/Authors" ], [ "ICLR.cc/2020/Conference/Paper1904/Authors" ], [ "~Haohang_Xu1" ], [ "~Haohang_Xu1" ], [ "ICLR.cc/2020/Conference/Paper1904/Authors" ], [ "ICLR.cc/2020/Conference/Paper1904/Authors" ], [ "~Paul_Greene1" ], [ "~Amjad_Almahairi1" ], [ "~John_Richard_Corring1" ], [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1904/Authors" ], [ "ICLR.cc/2020/Conference/Paper1904/Authors" ], [ "ICLR.cc/2020/Conference/Paper1904/Authors" ], [ "ICLR.cc/2020/Conference/Paper1904/Authors" ], [ "ICLR.cc/2020/Conference/Paper1904/Authors" ], [ "ICLR.cc/2020/Conference/Paper1904/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper1904/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper1904/AnonReviewer3" ] ], "structured_content_str": [ "{\"title\": \"Answer\", \"comment\": \"Re the questions: please see here, where we have discussed this: https://github.com/yukimasano/self-label/issues/7\"}", "{\"title\": \"Answer\", \"comment\": \"For those wondering: We've discussed it here: 
https://github.com/yukimasano/self-label/issues/7\"}", "{\"title\": \"Some questions about transition from the probability matrix Q to the labels.\", \"comment\": \"From your answer to question 1 : 1) How do you transition from the probability matrix Q to the labels? Do you just use argmax to assign labels to data points?\\n\\nDo you mean that in step1(representation learning), the Q of eq.6 is actually a one-hot matrix by applying argmax on probability matrix $Q^*$ (the $Q^*$ here means the direct solution of eq.7 in step2)?\\n\\nIf I do not understand correctly, what do you mean about transition from the probability matrix Q to the labels?\\n\\nIf I understand correctly, how about use $Q^*$ directly in step1 to compute the cross-entropy loss, but not use $argmax(Q^*)$ ? If a soft label will be better?\"}", "{\"title\": \"Question about Eq. 5\", \"comment\": \"I think Eq.5 should be writen as : log E(p,q) + log N = log<Q, -log P>. Can you give some points about how Eq.5 holds on?\\n\\nThank you very much.\"}", "{\"title\": \"answer\", \"comment\": \"Hi,\\n\\nThank you for the comment. We have updated the notation in the final (camera-version) of the paper where everything should match up properly.\\nThank you for pointing this out. \\nRegarding the algorithm, you can find our implementation here: https://github.com/yukimasano/self-label/blob/master/sinkhornknopp.py#L101 \\n\\nBest\"}", "{\"title\": \"answer\", \"comment\": \"Hi,\\nThank you for your kind comments. I've also replied to your email but got a bounce back so here I post the reply:\\n1) yes.\\n2) please find the SK algorithm in our github repo: https://github.com/yukimasano/self-label/blob/master/sinkhornknopp.py#L101 \\n3) we initialize them by np.ones(N) * 1/N and np.ones(K)*1/K\\n4) they are accuracies using the same architecture implementation and evaluation protocol as the rest in the table. This is to ensure comparability. 
I\\u2019m not sure how the current SOTA in CIFAR is obtained, but probably with ten-crop evaluation, more extensive augmentation and maybe a ResNet.\\n5) https://github.com/yukimasano/self-label\\n\\nThanks for the wait!\"}", "{\"title\": \"Question about the Q matrix\", \"comment\": \"I would like to first thank you for this paper. I was wondering if you could answer a couple of questions that I have related to the paper.\\n\\n1) How do you transition from the probability matrix Q to the labels? Do you just use argmax to assign labels to data points?\\n2) This might be a silly question, but: where exactly do you use the Sinkhorn-Knopp algorithm in the paper? Is it used for initializing the Q matrix (meaning that we initialize Q randomly and then apply the algorithm)? I searched the whole internet but couldn't find an implementation of the Sinkhorn-Knopp algorithm for non-square matrices like the Q matrix (the only condition given in the paper is that \\\"K divides N exactly\\\"). The frequently cited (Cuturi, 2013) paper only talks about square matrices.\\n3) How do you initialize the \\u03b1 and \\u03b2 scaling vectors? Just randomly? In that case, how do you make sure that the Q matrix remains a probability matrix after \\\"Step 2: self-labelling\\\" is applied?\\n4) What are the scores in Table 6? Are they accuracies? The current CIFAR-10 supervised SOTA is 99.00% and the CIFAR-100 supervised SOTA is 91.70%, but the table says they are 91.8% and 71.0%?\\n5) Are you planning on releasing the code for the paper?\\n\\nThank you very much.\"}", "{\"title\": \"Question about solution to Eq. 6\", \"comment\": \"I would like first to congratulate you on this nice paper. One thing that would be nice to clarify is why we get an integral solution to the linear program in Eq. 6. It doesn't seem that obvious to me, so it would be nice to make a comment on that (or even a proof or reference) in the paper. 
Also, would you get an integral solution with the regularized version, and in practice?\"}", "{\"title\": \"Edits\", \"comment\": \"Isn't matrix-vector multiplication O(NK)?\\n\\nI can't find a resource for the non-square Sinkhorn-Knopp algorithm; do you have one?\\n\\nIt seems like the assignments of P to the transport polytope (after Equation 3) may be missing a minus sign... or am I missing something? Correct me if I'm wrong, but you should need this for positivity (for Sinkhorn-Knopp to apply) and for the definition of cross-entropy to coincide with the transport cost.\\n\\nThanks for a nice paper.\"}", "{\"decision\": \"Accept (Spotlight)\", \"comment\": \"The paper focuses on supervised and self-supervised learning. The originality is to formulate the self-supervised criterion in terms of optimal transport, where the trained representation is required to induce $K$ equidistributed clusters. The formulation is well founded; in practice, the approach proceeds by alternately optimizing the cross-entropy loss (SGD) and the pseudo-loss, through a fast version of the Sinkhorn-Knopp algorithm, and scales up to millions of samples and thousands of classes.\\n\\nSome concerns about the robustness w.r.t. imbalanced classes, the ability to deliver SOTA supervised performance, and the computational complexity have been answered by the rebuttal and handled through new experiments. The convergence toward a local minimum is shown; however, increasing the number of pseudo-label optimization rounds might degrade the results. \\n\\nOverall, I recommend accepting the paper as an oral presentation. 
A fancier title would do better justice to this very nice paper (\\\"Self-labelling learning via optimal transport\\\"?).\", \"title\": \"Paper Decision\"}", "{\"title\": \"Final paper update\", \"comment\": [\"In addition to the previous update, we have revised our paper again with the following changes based on the reviewers\\u2019 feedback:\", \"New ResNet results that use 10 heads (Tab. 10), beating the SOTA on a standard ResNet-50 at the time of submission, with a Top-1 accuracy of 59.2 as opposed to BigBiGAN\\u2019s 55.4.\", \"New AlexNet results (Tab. 9), where we followed R3\\u2019s suggestion of combining methods and were able to achieve SOTA on AlexNet, with a Top-1 accuracy of 49.6 for ImageNet linear probing, and on Pascal VOC detection with an mAP of 59.2.\", \"Extended the Pascal VOC table (Tab. 7) by one column (classification, fine-tuning only fc6-8), where our method also achieves SOTA.\", \"Additional imbalance ablations (Sec. 4.4) on CIFAR as requested by reviewers R1 and R3. The results show that our method works well even in heavy imbalance scenarios and that the proposed label optimization via Sinkhorn-Knopp significantly outperforms k-means (see below for details).\", \"Incorporated some minor changes in the text reflecting the new findings.\", \"Due to the short rebuttal period, for the newly trained models, minor auxiliary evaluations (Places linear probing) have not yet finished and are marked as \\u201cevaluating\\u201d in the paper revision. However, all main benchmarks have completed and are reported.\", \"Summarizing, our method achieves state-of-the-art representation learning performance for AlexNet and ResNet-50 on SVHN, CIFAR-10, CIFAR-100 and ImageNet, and when transferred to Pascal VOC.\", \"Imbalance experiments\", \"As promised, we have added an analysis on imbalanced data (Sec. 4.4). We compare a light and a heavy imbalance scenario against using the full (balanced) CIFAR-10 dataset. 
The main results are:\", \"We find that in both scenarios our method works well and only suffers small performance losses, in line with the decreased amount of training data. We find that our method consistently outperforms experiments which use k-means (with k-means++ initialization) instead of our Sinkhorn-Knopp-based clustering. Furthermore, we find that even in the worst imbalance setting (10% of class 1, 20% of class 2, etc.), our proposed method not only outperforms the experiment which uses k-means on the same data, but also that of k-means with the full data set. This confirms the mathematical clarification provided in Sec. 3.2 that the equipartition is equivalent to maximizing information, regardless of the true label distribution.\"]}", "{\"title\": \"Response to Reviewer 3\", \"comment\": \"We thank the reviewer for their time and their detailed comments. We provide comments in the same order.\\n\\n1a. In our updated paper we provide the suggested plot of the NMI against the previous iteration, where we observe that our method indeed reaches higher values on this measure (NMI against the previous iteration of up to 90%) whilst using significantly fewer cluster optimizations (80 vs 400). Because DeepCluster uses k-means, it is forced to discard and re-initialize the last layer every epoch, as the cluster IDs change. We keep stable clusters, and thus do not need reinitializations that slow learning. In Figure A.1 we also see that, using a comparable setting (10k clusters), our method reaches an NMI with the ImageNet validation set of >60% in the first ten epochs, which DeepCluster does not even reach after full training. We further confirm this fast training with one experiment where we linearly probe our \\u201cdefault\\u201d AlexNet [3k x 1] at the 200-epoch mark (i.e. half-way) and find that the Top-1 convolutional-layer accuracy is 43.7%, i.e. only 1 percentage point short of its final performance, indicating fast convergence. \\n\\n1b. 
The reviewer raises an important point about the equipartitioning assumption. However, as we show in Section 3.2, this is less of a constraint and more of a regularizing factor, pushing the network to maximize information between image index and pseudo-label. Furthermore, we use a relatively large number of clusters --- this \\u201coverclustering\\u201d likely allows the method to decompose large clusters until the individual subclusters have a similar mass overall. Still, we would like to do our best to fully answer this question, so we are currently running class-imbalance experiments; we hope to be able to update the paper with these additional experiments in the following days.\\n\\n2. Please note that, compared to the AlexNet SOTA (RotNet+Retrieval, Feng 2019) in the updated paper's Table 8, conv4 is the best layer for both methods and the performance difference is less than 1%. We think it is fair to say that we are relatively close (while still training on a single task).\\n\\nHowever, the reviewer\\u2019s suggestion is definitely valid and we would like to experiment with a hybrid method too. We are now running this experiment and hope to be able to update the table in the next few days.\\n\\nFurthermore, in the current version of the paper, as R1 suggested, we include a new set of experiments on CIFAR-10/100 and SVHN, where we significantly outperform the SOTA.\\n\\n3. We thank the reviewer for raising an interesting point. We can respond to this theoretically and empirically.\\n\\nTheoretically, a main motivation for our work is to base the alternate optimization on a single energy function. This, at least compared to the original DeepCluster, guarantees that the method monotonically optimizes an energy, iteration after iteration, and thus the clustering should gradually improve (at least as measured by the energy).\\n\\nThe relative frequency of the two optimisation steps does not affect the argument above. 
However, it might affect the quality of the solution, since the optimization is still not globally optimal. This can only be answered empirically.\\n\\nIn order to do so, we have trained a model without label optimizations (i.e. optimizing for fixed, random labels), and it achieves only a Top-1 performance of 21.0% for ImageNet linear probing, which is significantly worse and on par with a random network (see the updated ablation table in the paper --- Table 1). Hence, label optimization is definitely important. Empirically, we found that our method yields good results for 40-160 optimizations spread out over the 400 epochs (also Table 1).\\n\\nOne key insight is that augmentations make learning a random assignment much harder. Memorizing random labels for ImageNet without augmentations is easily doable [2]. However, once we add augmentations, the training loss never reaches 0. This is an indication that the task is hard enough to provide a meaningful signal in each iteration. \\n\\n[1] A. Kolesnikov et al. \\\"Revisiting Self-Supervised Visual Representation Learning.\\\" In CVPR. 2019.\\n[2] C. Zhang, et al. \\\"Understanding deep learning requires rethinking generalization.\\\" In ICLR 2017\"}", "{\"title\": \"Response to Reviewer 1\", \"comment\": \"We thank the reviewer for their useful comments. Before addressing the specific comments, we would like to emphasize that our goal is not clustering per se, but using clustering as a pretext for self-supervising a deep network. Hence our evaluation primarily assesses this aspect of the method.\", \"we_address_the_comments_in_the_same_order\": \"1. Self-supervised learning does not need to be done via clustering: e.g. [1] trains a CNN by having it predict a relative location of a patch of an image, and [2] trains a CNN self-supervisedly by trying to retrieve the image instance. Vice versa, clustering can be used outside the setting of self-supervised learning for all kinds of exploratory data analysis.\\n\\n2. 
First, note that our primary objective is to use clustering as a way to self-supervise a CNN feature extractor. For this to work well, our clusters do not need to correspond perfectly to \\u201cnatural\\u201d data clusters. Equipartition is better thought of as a form of regularization which maximises information (see also R2\\u2019s review), and empirically this outperforms the k-means algorithm that DeepCluster uses for self-supervision.\\nSecond, we show via ablation that using 3000 clusters for a dataset that \\u201cnaturally\\u201d has 1000 (in the sense that there are 1000 ImageNet classes) is better. This can be seen as overclustering and thus as a way of sidestepping the uniform cluster-size regularization. As suggested by R3, we will provide CIFAR-10 experiments on unbalanced datasets to further explore this empirically.\\n\\n3. Please note that our main goal is not clustering per se, but how it can be used to self-supervise image features. In the updated version of the paper, we now provide results for further datasets (CIFAR-10, CIFAR-100 and SVHN, see Table 6) and show that we also achieve state-of-the-art results in self-supervision on these smaller datasets. \\n\\n4. We do not stress clustering metrics because our goal is to use clustering as a means (pretext) for self-supervision of image features. However, note that we did provide a plot of the Normalized Mutual Information (NMI) with the ImageNet validation set against training time in the Appendix (Fig. A.1). We have now included a further table with the NMI, Adjusted NMI and Adjusted Rand Index of our models in the updated Appendix (see Table A.1). \\n\\n5. In our paper we show that our method scales to more than 1M images (ImageNet contains 1.2M training images). The core of our method relies on matrix-vector multiplies of size $NK^2$, where $N$=number of images and $K$=number of clusters, and so it scales linearly with the number of images. 
We have added this analysis to the paper. \\n\\n[1] D. Pathak, et al. \\u201cContext Encoders: Feature Learning\\u201d by Inpainting. In CVPR 2016.\\n[2] Z. Wu, et al. \\\"Unsupervised feature learning via non-parametric instance discrimination.\\\" In CVPR 2018.\"}", "{\"title\": \"Response to Reviewer 2\", \"comment\": \"We thank the reviewer for their thorough reading and lucid understanding of the key ideas in the paper.\", \"regarding_k_means\": \"We have seen k-means described both as generative (e.g. see the referenced Bishop\\u2019s book) and discriminative and k-means can be derived as a special case of GMM (where all variances are isotropic and equal). However, we have revised the manuscript to address this and to avoid confusion with GAN-based generative approaches in self-supervised learning.\"}", "{\"title\": \"Updated version\", \"comment\": [\"We thank the reviewers for their time and their careful analysis of the work presented.\", \"Based on the feedback, we have uploaded an updated version of the paper with the following main changes:\", \"Experiments on CIFAR-10, CIFAR-100 and SVHN, where we achieve SOTA by a large margin.\", \"Additional ablations such as [5k x 1] and 0 number of label optimizations.\", \"Reformatted the ablations into multiple tables for additional ease of understanding.\", \"Additional plot in the Appendix of NMI vs the previous iteration for the [10k x1] AlexNet.\", \"Additional table in the Appendix showing NMI, adjusted NMI and adjusted Rand-Index for several models.\", \"Incorporated several clarifications requested by the reviewers.\"]}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"Summary:\\nThe paper proposes a self-supervised learning procedure to train deep neural networks 
within an unsupervised learning setting. The authors build their work on a pretext task that consists in maximizing the information between the input data samples and labels that are basically obtained by a self-labeling procedure that clusters them in K distinct classes. This is similar to what was done in DeepCluster, a previous self-supervised algorithm that self-labels samples through clustering. However, differently from that approach, the current method does not introduce any additional clustering cost functions. Instead it implicitly achieves self-labeling by simply adding the constraint that label assignments equally partition the dataset. This constraint acts as a \\\"regularizer\\\" that allows the authors to minimize the cross-entropy loss between inputs and pseudo-labels while avoiding the degenerate trivial solution where all samples are assigned to the same pseudo-label. As a result, the authors are able to derive a self-labeling method that optimizes the same cross-entropy loss as the classification task. That is done by remarking that minimizing the loss function over the pseudo-label assignments (under the equal partition constraint) can be formulated as an optimal transport problem that can be solved efficiently with a fast version of the Sinkhorn-Knopp algorithm. In practice, what the authors do at training time is to alternate between 1) minimizing the average cross-entropy loss by training the neural network (feature extractor + linear classification head) given a fixed pseudo-labels assignment, and 2) optimizing the pseudo-label assignments implemented as a matrix Q of posterior distribution of labels given the sample index, which, as said, can be done efficiently with a KL-regularized version of Sinkhorn. Moreover, this last step can be carried out simultaneously for multiple distinct classification heads (with possibly different number of labels), each sharing the same feature extractor but inducing a different matrix Q. 
At this point, the number of classification heads can be treated as a hyperparameter of the algorithm.\\nThe authors then go on to show that this new algorithm is competitive with current state of the art method with several architectures in terms of providing a good feature extractor for downstream image classification, detection and segmentation tasks. They for instance consistently beat DeepCluster, the main direct competitor, on classification and detection tasks.\\nThey also conduct ablation studies that provide interesting insights on the functioning of their algorithms and the effects of the multiple classification heads, the number of clusters, and the quality of the learned assignment. Intriguingly, on ImageNet they find for example that AlexNet obtains better performance at validation when it's trained from scratch on labels obtained with their self-labeling procedure, as opposed to the original labels.\", \"decision\": \"In my opinion this paper should be a clear accept. The paper is well written, presents an elegant idea in a clear and straight-forward manner, and is solidly built on top of the current literature on self-supervised learning for image processing, which is also very well summarized.\\nThe feature extractors obtained with the proposed algorithm are convincingly tested and validated on several downstream tasks (like classification on ImageNet, PascalVOC classification, detection and segmentation), and that is done for several base architectures, obtaining performances that are competitive with state-of-the-art. In addition, a series of careful ablation studies help in gleaning some scientific understanding on the method.\", \"minor_comments\": [\"The authors refer to traditional clustering like k-means as being \\\"generative\\\", which a little confusing. 
Clustering algorithms can be derived within a probabilistic framework by positing a generative model of the data; however, strictly speaking, k-means by itself is not a generative approach. It's a minor point, but it could be helpful to be more precise about this, in order to avoid possible confusion. The main point that the authors want to make in this regard is that their framework eschews having to posit an additional clustering cost function.\"]}", "{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper develops a novel self-supervised learning method by combining clustering and representation learning.\\nDifferent from other methods, the two tasks are optimized within the same objective function. Under the weak assumption that the number of samples should be similar across different clusters, the authors further develop a modified Sinkhorn-Knopp algorithm to solve the problem. Experiments on real-world image data demonstrate the effectiveness of the developed solution. In general, the whole paper is well written and the developed solution is interesting. However, I have the following comments:\\n\\n1. I am still confused about the difference between self-supervised learning and clustering when we do not have labeled data. From my point of view, they are actually the same thing. The authors are suggested to provide more explanations about the differences.\\n2. The assumption that samples are uniformly distributed across different clusters is too strong in practice. In many real-world scenarios, the sizes of the clusters often vary a lot, and I was wondering how the proposed method can tackle this issue.\\n3. 
Experiments are only conducted on image datasets, which is not quite convincing. The authors are suggested to use the datasets that are normally used in clustering research to further demonstrate the effectiveness of the method.\\n4. It is quite surprising that conventional clustering evaluation metrics such as Normalized Mutual Information and Adjusted Rand Index are not used in the experiments.\\n5. How about the time complexity of the developed algorithm? Can it be scaled to large datasets? Further complexity analyses are suggested.\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": [\"Summary & Pros\", \"This paper proposes a representation learning method based on clustering. The proposed method performs clustering and representation learning alternately and simultaneously. This approach requires only a few domain-specific priors (precisely, a CNN prior) while self-supervised learning requires more prior domain knowledge.\", \"Compared to the previous work, DeepCluster, this paper uses the same objective for clustering and representation learning. For clustering, the objective can be formulated as an optimal transport problem and it can be efficiently solved. This approach provides desired properties such as convergence.\", \"This paper shows the proposed method outperforms DeepCluster and it achieves comparable performance with SOTA methods in the representation learning literature.\", \"Concerns #1: More analysis should be provided.\", \"The authors claimed that the proposed method has better convergence properties than DeepCluster. To verify that, more experimental or theoretical support should be provided. 
For example, the convergence rate might be checked as in Figure 2(b) of the DeepCluster paper, using NMI against the previous iteration.\", \"If the number of samples per class is imbalanced, the equipartition constraint might degrade the quality of the label assignment. Thus, ablation studies about the imbalance setting on small-sized datasets such as CIFAR should be provided. I think K-means could prevent such imbalance issues, so in this case, DeepCluster might perform well.\", \"Concerns #2: Performance is still far from SOTA.\", \"As reported in Section 4.3, the proposed method still underperforms SOTA methods significantly. The SOTA method can be considered as a combination of (instance-wise) clustering and self-supervision. Thus, such a combination should be tried for improving performance.\", \"Concerns #3: How to guarantee this approach finds good semantic representations?\", \"In this approach, the model generates a task via clustering, so it might suffer from unsuitable solutions even under the equipartition constraint. If we use a much deeper architecture and a larger embedding size, then the main optimization problem (3) might be solved before correct label assignments are found. Moreover, at the first iteration, the labels might be totally random, and then the clustering quality is also zero. How can we guarantee that the clustering quality gradually improves during training?\", \"The proposed method provides a meaningful gain compared to the previous work, DeepCluster. I think this direction, as opposed to self-supervised learning, is important because it requires relatively less domain knowledge. However, I'm not sure how the proposed method can converge stably and efficiently. So I think it would be better if more analysis about the convergence is given in a rebuttal.\"]}" ] }
HklliySFDS
Continual Learning with Gated Incremental Memories for Sequential Data Processing
[ "Andrea Cossu", "Antonio Carta", "Davide Bacciu" ]
The ability to learn over changing task distributions without forgetting previous knowledge, also known as continual learning, is a key enabler for scalable and trustworthy deployments of adaptive solutions. While the importance of continual learning is largely acknowledged in machine vision and reinforcement learning problems, this is mostly under-documented for sequence processing tasks. This work focuses on characterizing and quantitatively assessing the impact of catastrophic forgetting and task interference when dealing with sequential data in recurrent neural networks. We also introduce a general architecture, named Gated Incremental Memory, for augmenting recurrent models with continual learning skills, whose effectiveness is demonstrated through the benchmarks introduced in this paper.
[ "continual learning", "recurrent neural networks", "progressive networks", "gating autoencoders", "sequential data processing" ]
Reject
https://openreview.net/pdf?id=HklliySFDS
https://openreview.net/forum?id=HklliySFDS
ICLR.cc/2020/Conference
2020
{ "note_id": [ "y-f3SvPFfm", "rJlJckeOjB", "H1ebIkl_jH", "rylS_A1OoS", "r1xv8YTatr", "r1esT_GTKr", "r1lsUfWpKr" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798735467, 1573547910911, 1573547848569, 1573547628947, 1571834190919, 1571788995269, 1571783250659 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1902/Authors" ], [ "ICLR.cc/2020/Conference/Paper1902/Authors" ], [ "ICLR.cc/2020/Conference/Paper1902/Authors" ], [ "ICLR.cc/2020/Conference/Paper1902/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper1902/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper1902/AnonReviewer3" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"This manuscript describes a continual learning approach where individual instances consist of sequences, such as language modeling. The paper consists of a definition of a problem setting, tasks in that problem setting, baselines (not based on existing continual learning approaches, which the authors argue is to highlight the need for such techniques, but with which the reviewers took issue), and a novel architecture.\\n\\nReviews focused on the gravity of the contribution. R1 and R2, in particular, argued that the paper is written as though the problem/benchmark definition is the main contribution. R2 mentions that in spite of this, the methods section jumps directly into the candidate architecture. As mentioned above, several reviewers also took issue with the fact that existing CL techniques are not employed as baselines. 
The authors engaged with reviewers and promised updates, but did not take the opportunity to update their paper.\\n\\nAs many of the reviewers' comments remain unaddressed and the authors' updates did not materialize, I recommend rejection, and encourage the authors to incorporate the feedback they have received in a future submission.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Response to R3\", \"comment\": \"We agree that the model size is the main limitation of GIM. However, as you say, it is reasonable to assume that the number of hidden units of each module could decrease as the number of modules increases. Inter-module connections can foster reuse of previous features, thus reducing the need to learn them from scratch. As far as we know, our work is the first example of a progressive model + gating AE applied to sequential data. Therefore, we decided to leave this extension out of the proposed paper, since we prefer to focus more on the benchmark design and the sequential nature of the data rather than the model architecture. We would like to propose GIM as a simple baseline that can be improved upon in several different directions.\\n\\nWe agree that it would be valuable to compare current CL techniques (e.g. EWC) with GIM in order to further assess our approach. We will integrate some of the main techniques into our analysis and we will highlight the main differences between their application in a computer vision scenario and in a sequential data processing scenario.\\n\\nWe struggle to understand what the use of a larger dataset (e.g. NLP / sentiment analysis) would add to our analysis. We recognize that it will be an important step in the development of full-fledged CL systems for sequential data processing, but we believe that in this first phase it would be better to focus on task-agnostic benchmarks before moving on to more complex scenarios. 
This choice allows us to highlight the CL properties of the model without the need to tailor it to a specific field of application, which often requires complex preprocessing or the use of intermediate embeddings. By using simpler benchmarks, we reduce the probability of misinterpreting the results. This approach also makes it easier for other researchers to compare their results against our benchmark, since we do not require a large computational infrastructure to manage the experiments.\"}", "{\"title\": \"Response to R2\", \"comment\": \"The main difference between the sequential data processing scenario and the vision scenario is related to the fact that sequential processing requires the use of a memory that embeds the history of past inputs. Such memories have to be appropriately learned and preserved, making the sequential processing tasks clearly different from the vision tasks. When it comes to CL, drifts in the input distribution could affect the hidden memory of RNNs. Additional work will be needed in order to clarify this phenomenon. We will clarify this point in the Introduction.\\n\\nThe main concern of this work was to provide a set of common benchmarks for CL in sequential domains that are independent of domain-specific applications (e.g. NLP) against which existing and future models can compare their performances. We will better describe the experimental settings, reserving a specific section for the description of the tasks and datasets. Since they are the main contribution of this work, we agree that they should be better highlighted.\\nIn addition, we extend the progressive approach and the gating autoencoder to the recurrent domain. To the best of our knowledge, no previous work proposed these two extensions for recurrent neural networks, nor combined both into one end-to-end model.\\n\\nThe reason why the Augmented models need the autoencoders (e.g. 
from A-LSTM to GIM-LSTM) is that without the autoencoders it is not possible to avoid the use of task labels at inference time. GIM architectures can detect the correct module for inference, while Augmented modules alone only allow the transfer of useful, learned features from one module to the others.\\n\\nThe experimental protocol is task-agnostic in the sense that we do not restrict the choice of datasets to a particular application (e.g. NLP); instead, we proposed a set of general datasets (Copy and SSMNIST) that do not require any domain-specific technique. Using these benchmarks, we can evaluate CL models while eliminating the idiosyncrasies of specific application domains. \\n\\nIn future versions, we will extend standard CL techniques to the RNN scenario and we will compare their performances against both naive RNNs and GIM.\\nWe will also provide ablation studies highlighting the effects that autoencoders and inter-module connections have on the overall performance of GIM.\"}", "{\"title\": \"Response to R1\", \"comment\": \"The baselines do not employ CL techniques since the main motivation behind the baseline experiments was to assess the impact and extent of catastrophic forgetting in RNNs, which we believed to be a necessary first step to highlight continual learning issues in the context of sequential data processing (the literature on this topic is still in its infancy, differently from what occurs with feedforward networks and machine vision applications). Results show that, unsurprisingly, the standard models are severely affected by catastrophic forgetting, supporting our claim for novel architectures and approaches addressing the issue for sequential models. It is within this context that we introduce GIM as a possible solution to the issue: of course, there can be others based on different approaches, e.g. an adaptation of elastic weight consolidation. 
Nevertheless, to the best of our knowledge, there is as yet no work in the literature tackling catastrophic forgetting in continual learning for sequential problems.\\nStill, we are taking up the reviewer's suggestion and we will expand the analysis by adapting standard CL techniques to the recurrent scenario, assessing their performance in this novel context.\\nWe will provide ablation studies with respect to Augmented architectures and GIM architectures in future versions.\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper proposes an interesting continual learning approach for sequential data processing with a recurrent neural network architecture.\\nThe authors provide a general application of continual learning to sequential data, and show their proposed model outperforms the baselines.\\n\\nIt is natural that their naive baseline shows poor performance since they do not consider any continual learning issues like the catastrophic forgetting problem. Hence, I hesitate to evaluate the model in terms of performance. In that sense, it would be crucial to show more meaningful ablation studies and analysis for the proposed model. However, the paper provides only a few of these. \\n\\nThus, I decided to give a lower score even though the authors suggest that the main contribution is the definition of a problem setting. 
It requires more detailed and sophisticated analysis.\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"The goal of this work is to better understand the performance and benchmarking of continual learning algorithms when applied to sequential data processing problems like language or sequence data sets. The contributions of the paper are threefold: new benchmarks for CL with sequential data for RNN processing, a new architecture introduced for more effective processing, and a thorough empirical evaluation.\", \"introduction\": \"I think a little more insight into why the sequential data processing CL scenario is any different from the vision scenario would be quite helpful. Specifically, it would be quite impactful to tell us more about what the additional challenges with RNNs for CL vs feedforward for CL are in the intro. \\n\\nThe paper is written as if the benchmark is the main contribution and the architecture improvement is just a delta on top of this, but it gets confusing when the methods section starts off with just directly stating the new architecture. \\n\\nThe algorithm seems like a straightforward combination of recurrent progressive nets and gated autoencoders for CL. Can the authors clarify whether that is the contribution or whether there is more to the insight than has been previously suggested in prior work?\\n\\nFigure 1 has a very uninformative caption. It also doesn\\u2019t show how modules feed into one another properly. 
\\n\\nThe motivation for why one needs GIM after one already has A-LSTM or A-LMN is not very clear.\\n\\nOverall the contribution does seem a bit incremental based on prior work, and the description lacks enough detail to properly indicate why this is a very important contribution.\", \"experiments\": \"What does it mean to be application agnostic but restricted to particular datasets and losses? This doesn\\u2019t quite parse to me. \\n\\nThe description of the tasks is very informal and hard to follow. It\\u2019s not clear what exactly the tasks and datasets look like. \\n\\n\\u201cusing more hidden units can bridge this gap\\u201d -> why not just do it? It\\u2019s a benchmark after all. \\n\\nOverall the task descriptions should be in a separate section where the setup is described in a lot of detail and motivated properly. \\n\\nThe results in the experiments section are very hard to parse. The captions need much more detail, e.g. for Table 2. \\n\\nCould we also possibly have more baselines from continual learning? For instance EWC (Kirkpatrick) or generative replay might be competitive baselines. \\n\\nOverall I think that the GIM and A-LMN and A-LSTM methods are reasonable although somewhat incremental. But the proposed benchmarks are pretty unclear and the results are a bit hard to really interpret well. It would also be important to run comparisons with more baselines and to provide more ablation/analysis experiments to really see the benefit of GIM/A-LMN or A-LSTM. I also think that the task descriptions should be much earlier in the paper and described in much more rigorous detail.\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #3\", \"review\": \"Summary:\\n\\nIn this paper, the authors propose a new method to apply continual learning to sequential data. 
The model is constructed by combining an Autoencoder and an LSTM/LMN for each task. The experiments on several datasets show the proposed model outperforms basic LSTM/LMN.\", \"strength\": [\"Sequential data widely exist in the real world, e.g., text, health records. Thus, it is interesting to see continual learning applied to sequential data.\", \"The motivation of the proposed model is clear. The authors save the learned knowledge in the hidden representation of the LSTM/LMN.\"], \"weakness\": [\"In this paper, the model size increases linearly since the number of LSTM/LMN and AE modules increases when a new task comes in. Thus, if the number of tasks is too large, the model size is quite big. In traditional continual learning settings, researchers may not always increase the model size for overcoming catastrophic forgetting. For example, if task 1 and task 2 are sampled from the same distribution, they can share the same LSTM/LMN and AE. Thus, it would be better if the authors could consider how to reduce the model size in a future version.\", \"In the experiments, the authors only compare the proposed model with a simple LSTM or LMN. However, most continual learning methods can still be applied in this scenario; at least regularization-based methods [1,2] can be simply applied in this scenario. The authors may need to compare the proposed method with them in a future version.\", \"It is better to compare it on a larger dataset. For example, in the natural language processing field, we can regard sentiment analysis on one language as one task. Then, we can construct a continual learning dataset for sentiment analysis.\"], \"minor_comments\": \"It is better to improve Figure 3 by adding the x-axis label and y-axis label.\\n\\n\\n[1] Kirkpatrick, James, et al. \\\"Overcoming catastrophic forgetting in neural networks.\\\" Proceedings of the National Academy of Sciences 114.13 (2017): 3521-3526.\\n[2] Zenke, Friedemann, Ben Poole, and Surya Ganguli. 
\\\"Continual learning through synaptic intelligence.\\\" Proceedings of the 34th International Conference on Machine Learning-Volume 70. JMLR. org, 2017.\"}" ] }
HyxgoyHtDB
Policy Optimization by Local Improvement through Search
[ "Jialin Song", "Joe Wenjie Jiang", "Amir Yazdanbakhsh", "Ebrahim Songhori", "Anna Goldie", "Navdeep Jaitly", "Azalia Mirhoseini" ]
Imitation learning has emerged as a powerful strategy for learning initial policies that can be refined with reinforcement learning techniques. Most strategies in imitation learning, however, rely on per-step supervision either from expert demonstrations, referred to as behavioral cloning, or from interactive expert policy queries such as DAgger. These strategies differ in the state distribution at which the expert actions are collected -- the former using the state distribution of the expert, the latter using the state distribution of the policy being trained. However, the learning signal in both cases arises from the expert actions. On the other end of the spectrum, approaches rooted in Policy Iteration, such as Dual Policy Iteration, do not choose next step actions based on an expert, but instead use planning or search over the policy to choose an action distribution to train towards. However, this can be computationally expensive, and can also end up training the policy on a state distribution that is far from the current policy's induced distribution. In this paper, we propose an algorithm that finds a middle ground by using Monte Carlo Tree Search (MCTS) to perform local trajectory improvement over rollouts from the policy. We provide theoretical justification for both the proposed local trajectory search algorithm and for our use of MCTS as a local policy improvement operator. We also show empirically that our method (Policy Optimization by Local Improvement through Search or POLISH) is much faster than methods that plan globally, speeding up training by a factor of up to 14 in wall clock time. Furthermore, the resulting policy outperforms strong baselines in both reinforcement learning and imitation learning.
[ "policy learning", "imitation learning" ]
Reject
https://openreview.net/pdf?id=HyxgoyHtDB
https://openreview.net/forum?id=HyxgoyHtDB
ICLR.cc/2020/Conference
2020
{ "note_id": [ "YerBdxWIm", "Bygc2W9njS", "HylNqb53iS", "S1eqLZ92or", "H1gDz3KpYS", "S1g0TOI8KS", "rkgsiAyOOS" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798735436, 1573851569844, 1573851531621, 1573851474184, 1571818511166, 1571346629700, 1570401954822 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1901/Authors" ], [ "ICLR.cc/2020/Conference/Paper1901/Authors" ], [ "ICLR.cc/2020/Conference/Paper1901/Authors" ], [ "ICLR.cc/2020/Conference/Paper1901/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper1901/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper1901/AnonReviewer1" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"Thanks for your detailed responses to the reviewers, which helped us a lot to better understand your paper.\\nHowever, given that the current manuscript still contains many unclear parts, we decided not to accept the paper. We hope that the reviewers' comments help you improve your paper for potential future submission.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Response to Review #1\", \"comment\": \"We thank you for your review and suggestions for improvement.\\n\\nRegarding your concern on \\u201cboth access to an expert and a reward function\\u201d:\\nWe want to emphasize that the only prerequisite we need to apply POLISH is access to the environment. The expert, in our case, is not a predefined policy. Rather, it is built via MCTS dynamically. This is applicable in cases where a simulator is available, for example games and robotics.\\n\\nRegarding \\u201cmixes two seemingly distinct problem settings (imitation learning and reinforcement learning)\\u201d:\\nWe do not require a predefined expert policy, rather we build our own expert through interactions with the environment by performing MCTS. 
It is a standard approach in the absence of an expert [4, 5, 6].\\n\\nWe wish to point out that combining both learning settings is a popular approach to policy optimization [1, 2, 3, 4]. In this paper, we provide a novel approach to combining IL and RL through the application of local policy improvement. Given a current policy, we showed that using MCTS to plan with the ***current policy*** can result in a better policy. This better policy plays the role of an \\u201cexpert\\u201d. Your understanding is correct: the primary use case of POLISH is when the current policy is sub-optimal, which is why we are performing policy optimization in the first place.\\n\\nRegarding \\u201cnot sure that PPO is a directly comparable baseline\\u201d:\\nWe agree, and we could improve the presentation of the experiment section. The main comparison we are interested in is between different values of t (the rollout horizon for MCTS). PPO is a reference for a pure reinforcement learning algorithm, included to show that by imitating an MCTS policy, we can improve over a pure reinforcement learning approach.\\n\\nWe apologize for the confusion. Both POLISH and PPO are initialized from the same pre-trained policy.\"}", "{\"title\": \"Response to Review #2\", \"comment\": \"We thank you for taking the time to review our paper. We will improve our writing to make it more accessible to a general audience. We answer a few clarification questions here.\", \"the_main_research_question_is\": \"given an expert policy (e.g., MCTS), how can we best collect demonstrations from expert rollouts for the most efficient imitation learning? Our main contribution is the POLISH algorithm, where we collect local trajectory improvements by rolling out the expert policy for short time horizons.\\n\\nRegarding \\u201chow MCTS with UCT is used\\u201d:\\nWe use MCTS with the UCT rule [1] to generate local improvements of an existing trajectory. 
This is similar to how AlphaZero [2] works.\", \"regarding_the_0_1_loss_and_the_empirical_loss_used_in_the_experiment\": \"The reason we switch is mentioned in the paragraph above Proposition 2. The loss in our experiment matches a distribution instead of a single action, as is the case for the 0-1 loss. The third term is a value loss term defined in the last sentence of Section 6.1. This term is similar to the value loss term in the PPO algorithm [3].\", \"regarding_state_distribution_divergence\": \"This is a common problem with imitation learning [4]. Since the expert demonstration data are collected from the state distribution induced by the expert policy, once the learned policy deviates from this trained state distribution, it is essentially encountering states not seen in training. While, with enough demonstrations, we could minimize the number of unseen states, it is practically impossible to collect that many demonstrations. Thus, this is a central challenge in imitation learning.\\n\\n[1]: Browne, Cameron B., et al. \\\"A survey of Monte Carlo tree search methods.\\\" IEEE Transactions on Computational Intelligence and AI in Games 4.1 (2012): 1-43.\\n[2]: Silver, David, et al. \\\"Mastering chess and shogi by self-play with a general reinforcement learning algorithm.\\\" arXiv preprint arXiv:1712.01815 (2017).\\n[3]: Schulman, John, et al. \\\"Proximal policy optimization algorithms.\\\" arXiv preprint arXiv:1707.06347 (2017).\\n[4]: Ross, St\\u00e9phane, Geoffrey Gordon, and Drew Bagnell. \\\"A reduction of imitation learning and structured prediction to no-regret online learning.\\\" Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics. 2011.\"}", "{\"title\": \"Response to Review #3\", \"comment\": \"Thank you for your detailed review and your suggestions for improvement. 
Thank you for recognizing the contribution of our submission.\\n\\nRegarding \\u201ca value of t other than the extremes will maximize the bound\\u201d:\\nFirstly, Theorem 1 does not concern the effect of t; it is an existing result we used to derive the consideration for t in equation (3). \\n\\nIt certainly is desirable to have an analytic form for what value of t maximizes the bound. However, without making additional assumptions on how J(\\\\pi^*) grows, it is not possible to derive an expression for an optimal t. Thus we phrase it as \\u201cthere can be a balance point\\u201d and treat it as the motivation for experimenting with different t\\u2019s in our empirical studies.\\n\\nRegarding \\u201cthese bounds will have different values of \\\\epsilon\\u201d:\\nYour understanding is correct. We assumed the \\\\epsilon\\u2019s remained constant across different values of t to focus on the main point, the relationship of the bound with respect to different t. Assuming constant \\\\epsilon is an approximation to what happens in practice. \\n\\nIn practice, we found that it became harder to imitate long-time-horizon MCTS rollouts. If we refer back to Proposition 2, the KL-divergence plus the entropy is an upper bound on the \\\\epsilon. In Figure 3 (b), we showed that the KL-divergence term grows as t grows, and in practice the entropy term remains comparable across different values of t. Thus the upper bound grows as t grows.\\n\\nRegarding discounting in Equation (2):\\nThanks for catching this! We have updated the analysis with the correct discounting.\\n\\nRegarding \\u201cdoes changing t also affect the MCTS step and the expert policy obtained\\u201d:\\nChanging t does not change the MCTS step. It changes how long we obtain the MCTS demonstrations for, while maintaining the same MCTS policy.\\n\\nRegarding \\u201cdoes the MCTS step use the perfect simulator\\u201d:\\nWe use the environment interactions during MCTS. 
The main contribution is to find a sweet spot for the value of t when MCTS is used in the policy training. For the comparison across different values of t, the number of environment interactions is the same, so the comparison is fair across values of t.\\n\\nThe comparison with PPO is to provide a pure reinforcement learning baseline and shows the advantage of the proposed approach.\", \"regarding_curves_are_overlapping\": \"We wish to point out that in the Ant and Walker environments the advantage of t=32 over the next best value of t in mean return is around 300, which is significant. The curves look largely overlapping because the advantage, when compared with the advantage over PPO, looks small. We will improve the presentation in the figures to make the differences more obvious. \\n\\nRegarding \\u201cin 6.3, what does the reward improvement \\u2026 mean\\u201d:\\nIt is the first term in the bound minus J(\\\\pi), where \\\\pi is the current policy. Figure 6.3 shows how much better the MCTS expert is than the current policy. Since the MCTS policy does not change when t varies, this shows the effect that changing initial state distributions has on the first term in the bound.\", \"regarding_minor_comments\": \"We will add more details on obtaining an expert policy through MCTS.\\nt = 32 is chosen because it is approximately halfway between 1 and 1000 in terms of multiplicative factors.\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"[Summary]\\nThis paper proposes POLISH, an imitation learning algorithm that provides a balance between Behavioral Cloning (BC) and DAgger. 
The algorithm reduces the mismatch between the target policy and an expert policy on states obtained by starting at the target policy's state distribution and following the expert policy for a time segment of t steps. The claim is that a suitable t will keep the training states close to the target policy's state distribution and avoid the compounding errors that arise when the agent drifts away from its training distribution. The paper also explores the possibility of policy optimization by replacing the pre-defined expert policy in POLISH with a policy derived from Monte Carlo Tree Search. Theoretical and empirical analyses in the paper study the effect of t and MCTS planning in POLISH on policy improvement.\\n\\n[Decision]\\nA clear study of the spectrum between BC and DAgger is a useful contribution to the literature, and an algorithm that effectively solves the distributional shift problem in BC will be of high practical value. However, the results in this paper do not support the claim that a reasonable time segment length in POLISH alleviates this problem. The theory shows a bound on the performance of the target policy that varies with t, but it is not clear if a suitable t is better than the two extremes, i.e., BC and DAgger. The experiments section is limited and, on two out of the three tasks, there is no considerable difference between the performance of POLISH, BC, and DAgger. I am leaning towards rejecting this paper.\\n\\n[Explanation]\\nIn Section 5, Theorem 1 shows the effect of t, the length of time segments, on the performance of the target policy by providing a bound. If this theorem is motivating a middle ground between BC and DAgger, it needs to show that a value of t other than the extremes will maximize the bound. The bound in the paper consists of a positive term and a negative term, both of which grow with t. It is then concluded that a balance point will maximize the overall bound. 
I do not see how it follows that this balance point is a middle ground and not an extreme value. If, for example, the negative term grows faster than the positive term, then DAgger (t=1) will have the best performance according to this theorem.\\n\\nIt is not clear how the bound in Theorem 1 is comparing the performance of algorithms with different values of t. For a fixed policy, one can obtain different bounds by choosing different values of t. These bounds will have different values of \\\\epsilon. The state distribution for \\\\epsilon is the distribution of states visited in a limited time segment (which depends on the target policy, the expert policy, and the segment length). Using \\\\epsilon (or \\\\epsilon_i) in the bound drops the relationship between \\\\epsilon and the length of time segments, and hides the fact that different algorithms are minimizing different errors. \\n\\nThe equality in Eq 2 is not obvious to me. J(\\\\pi) is an expectation under the discounted visit distribution. For example, if gamma is small, then the states in the start state distribution will have a higher weight in J(\\\\pi) while the right-hand side sums over the expected performance in all time segments equally. I believe the later terms in the sum should also be discounted.\\n\\nDoes changing t also affect the MCTS step and the expert policy obtained from it? If so, it is possible that a suitable t will result in better performance because the MCTS step finds a stronger expert policy, and not because the imitation learning step better reduces the error.\\n\\nDoes the MCTS step use the perfect simulator or a learned model in the experiments? If POLISH, unlike PPO, has access to the MDP, the comparison of these two methods is not fair.\\n\\nIn Fig 2 (a) and (c), the curves for t=1, t=32, and t=1000 overlap through most of the training process. This is not conclusive evidence that a sweet spot for t results in better performance. 
An experiment on a simpler setting with more runs may elucidate the effect of t on the performance.\\n\\nIn 6.3, what does the reward improvement after running MCTS with the current policy precisely mean, and how does this correspond to the first term in the bound, i.e., the sum of the expected performance of \\\\pi_* over time segments?\\n\\n\\n[Minor comments]\\n- I suggest adding the process of obtaining an expert policy through MCTS to Algorithm 1. It is hard to understand the process without a clear step-by-step description.\\n- How is t=32 chosen for the experiments in Fig 2?\\n-------------------\", \"after_rebuttal\": \"I have read the authors' response and the other reviews. The rebuttal addresses my questions and concerns about clarity of presentation. However, I am still not convinced by the evidence in this paper. On two out of the three environments, the performance of the three values of t is not much different through the training and this is not because of the advantage over PPO. On Ant, for example, the curves for t=32 and t=1000 are not even one std apart through most of the training.\"}", "{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper was confusing and difficult to read. I think it is trying to devise a methodology that improves upon a given expert policy, but I am not confident whether this is the objective. The main contribution of the paper is the algorithm POLISH on page 5. However, it was unclear how MCTS with UCT is used with the expert policy in the algorithm. The first mention of the 0-1 loss objective is in section 4.1, and Algorithm 1 on page 5 claims to minimize this loss on line 15. 
However, in the experiment, the loss function is then switched to another L(D, \\\\pi) = D_{KL}(\\\\pi || \\\\pi*) + H(\\\\pi) + Lv(D, \\\\pi). Why the switch? And what is the definition of the third term? By examining the experimental results, Algorithm 1 seems to outperform PPO. Since the expert policy is known and given to Algorithm 1, I speculate Algorithm 1 is only replicating the expert policy, which would have outperformed PPO that has to learn from scratch. Thus, the comparison does not seem fair. It would be interesting to see how POLISH compares with the expert policy.\", \"other_comments\": [\"Page 1 in Introduction: \\u201cHowever, these models suffer from the problem that even small difference between the learned policy and the expert behavior can lead to a snow-balling effect, where the state distribution diverges to a place where the behaviour of the policy is now meaningless since it was not trained that part of space\\u201d. Do you mean that the algorithm diverges instead of the state distribution diverges? The agent may incur errors in a space that has not been observed, but it should be able to learn eventually. So, what is causing divergence in states that have not yet encountered?\", \"Page 4 in The POLISH Algorithm Main Algorithm: the 0-1 loss function is defined to be \\u201cL(D, \\\\pi) = 1/|D| \\\\sum_{s,a*}\\\\in D (I(\\\\pi(s) \\\\neq a*)), where a* is the expert policy\\u2019s selected action. I think it makes more sense to write \\u201cL(D, \\\\pi)\\\" as 1/|D| \\\\sum_{s,a*}\\\\in D I ( a \\\\neq a* : a ~ \\\\pi(s)). 
Also, in RL, we don't say that the policy receives a reward, but rather the agent receives a reward.\"]}", "{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper proposes POLISH, a reinforcement learning algorithm based on imitating partial trajectories produced by an MCTS procedure. The intuition behind this idea is that behavioral cloning suffers from distribution shift over time, and using MCTS allows imitation learning to be done on states closer to the policy's state distribution, which the authors justify using techniques similar to DAgger. The authors evaluate this method on continuous OpenAI Gym tasks, and show that it consistently beats a PPO baseline.\\n\\nOverall, my decision for this paper errs on the side of reject. This primarily comes from the fact that the writing is unclear to me, and this algorithm needs both access to an expert and a reward function, which is a setting that I'm not sure is very applicable in practice. Additionally, the experimental results seem fairly weak. In the imitation learning setting, there appears to be little difference between behavioral cloning, DAgger, and an intermediate segment length. In the reinforcement learning setting, the PPO baseline seems to be unfair, which I detail below. \\n\\nThe writing is confusing to me as it mixes two seemingly distinct problem settings (imitation learning and reinforcement learning) and interferes with my full understanding of the motivation of the paper. My current guess is that this paper is primarily aimed towards policy optimization in a reinforcement learning setting. The algorithm can start from scratch with a random initial policy, and optimize the policy to maximize total returns. 
However, much of the paper is written as if the setting were imitation learning, where expert advice is available. If this paper is primarily an imitation learning paper, I am unsure of the advantage of using this method over DAgger. My hypothesis is that the primary use-case of POLISH over DAgger is when the provided expert is suboptimal, and using MCTS allows the policy to improve beyond the expert. However, this point is not made in the paper.\\n\\nFor the experiments, I'm not sure that PPO is a directly comparable baseline. \\n1) Were queries to the simulator used during MCTS accounted for when measuring sample complexity? (Figure 2). Being able to query the environment without cost is a significant advantage to POLISH in terms of sample complexity. \\n2) Additionally, it was stated that POLISH had access to a pre-trained policy, which is additional information that PPO cannot exploit. A reasonable comparison could be to initialize the PPO agent from that pre-trained policy, or to not give POLISH access to the pre-trained policy.\\n\\nFor related work, I would argue that an important class of algorithms to mention are RL methods based on imitating some sort of policy improvement procedure. This includes work such as (not exhaustive) self-imitation learning (Oh 2019), the cross-entropy method, guided policy search (Levine 14), reward-weighted regression (Peters 07) and UREX (Nachum 17).\"}" ] }
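The POLISH data-collection scheme debated in the reviews above — start from the target policy's own state distribution, then follow the expert for a segment of t steps and imitate its labels — can be summarized with a small sketch. This is an illustrative reconstruction, not the paper's code: the toy environment, the function names, and the rollout length are all assumptions, and t=1 vs. a large t only loosely corresponds to the DAgger/BC extremes the reviewers discuss.

```python
import numpy as np

def collect_segment_data(env_step, reset, target_policy, expert_policy,
                         t, n_segments, rollout_len=20, rng=None):
    """Collect (state, expert_action) pairs: reach a start state via the
    target policy, then follow the expert for t steps, labelling every
    visited state with the expert's action."""
    if rng is None:
        rng = np.random.default_rng(0)
    data = []
    for _ in range(n_segments):
        # Sample a start state from the target policy's state distribution.
        s = reset()
        for _ in range(rng.integers(0, rollout_len)):
            s = env_step(s, target_policy(s))
        # Imitate the expert along a segment of length t.
        for _ in range(t):
            a_star = expert_policy(s)
            data.append((s, a_star))
            s = env_step(s, a_star)
    return data

# Toy 1-D chain: the (hypothetical) expert steers the state back to zero.
env_step = lambda s, a: s + a
reset = lambda: 0.0
target = lambda s: 1.0                      # drifts right
expert = lambda s: -1.0 if s > 0 else 1.0   # pulls toward 0
data = collect_segment_data(env_step, reset, target, expert, t=5, n_segments=4)
```

The supervised step would then fit a policy to `data`; with t=1 every labelled state comes straight from the target policy's distribution, while a very large t lets the labelled states drift toward the expert's own distribution.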
S1xJikHtDH
Improving Model Compatibility of Generative Adversarial Networks by Boundary Calibration
[ "Si-An Chen", "Chun-Liang Li", "Hsuan-Tien Lin" ]
Generative Adversarial Networks (GANs) are a powerful family of models that learn an underlying distribution to generate synthetic data. Many existing studies of GANs focus on improving the realness of the generated image data for visual applications, and few of them are concerned with improving the quality of the generated data for training other classifiers---a task known as the model compatibility problem. As a consequence, existing GANs often prefer generating `easier' synthetic data that are far from the boundaries of the classifiers, and refrain from generating near-boundary data, which are known to play an important role in training the classifiers. To improve GANs in terms of model compatibility, we propose Boundary-Calibration GANs (BCGANs), which leverage the boundary information from a set of pre-trained classifiers using the original data. In particular, we introduce an auxiliary Boundary-Calibration loss (BC-loss) into the generator of GAN to match the statistics between the posterior distributions of original data and generated data with respect to the boundaries of the pre-trained classifiers. The BC-loss is provably unbiased and can be easily coupled with different GAN variants to improve their model compatibility. Experimental results demonstrate that BCGANs not only generate realistic images like original GANs but also achieve superior model compatibility compared to the original GANs.
[ "generative adversarial network", "GAN", "model compatibility", "machine learning efficacy" ]
Reject
https://openreview.net/pdf?id=S1xJikHtDH
https://openreview.net/forum?id=S1xJikHtDH
ICLR.cc/2020/Conference
2020
{ "note_id": [ "Mq7p_r9SG1", "HkgBnP1b2r", "r1elU4ZhsS", "BJlUb4bnjr", "HJlLQ8ERYH", "BJgUq9uoKH" ], "note_type": [ "decision", "official_review", "official_comment", "official_comment", "official_review", "official_review" ], "note_created": [ 1576798735405, 1574135724982, 1573815367709, 1573815294219, 1571862045604, 1571682957598 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1900/AnonReviewer4" ], [ "ICLR.cc/2020/Conference/Paper1900/Authors" ], [ "ICLR.cc/2020/Conference/Paper1900/Authors" ], [ "ICLR.cc/2020/Conference/Paper1900/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper1900/AnonReviewer1" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"The paper presents a method for increasing the \\\"model compatibility\\\" of Generative Adversarial Networks by adding a term to the loss function relating to classification boundaries. The reviewers recognized the importance of the problem, but several concerns were raised about the clarity of the paper, as well as the significance of the experimental results.\", \"title\": \"Paper Decision\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #4\", \"review\": \"In this paper the authors propose a method for improving \\\"model compatibility\\\" in GANs. For this reason they add to the loss of the generation procedure a term that depends on the maximum mean discrepancy between the following datasets: (1) the output of a classifier with input the real dataset, (2) the output of the same classifier with input GAN-generated samples. 
The authors show that in essentially all the datasets they tried, the model compatibility of the produced generator is increased after adding the aforementioned cost, while the visual quality of the data is not decreased.\", \"strengths\": [\"The low model compatibility of GANs is a very important disadvantage and hence improving this aspect of GANs is a relevant problem.\"], \"weaknesses___comments\": \"A. The increase in the model compatibility is very mild. Especially in CIFAR-10, the increase is very small.\\n\\nB. In MNIST the increase is larger than in CIFAR-10, but the initial model compatibility using vanilla GANs is smaller. The reason might be that for MNIST much simpler classification algorithms have been used. This perhaps suggests that the proposed method affects more the model compatibility of methods that achieve lower model compatibility before the addition of the extra cost term.\", \"minor_comments\": \"1. In equation (1) it looks strange that the summation is over A but A does not appear at all in the summand. I suggest you replace h and h' with A(D) and A(D') so that this is clear.\\n2. In Theorem 2, \\\\hat{L}_G is used but for the proof the authors have replaced \\\\hat{M} with M. There should be a comment for that. In general I believe that Theorem 2 is almost trivial and does not add value to this clearly experimental paper.\"}
However, it depends on the support size of the real dataset, which is hard to evaluate. For example, it is hard not to generate the same instance for a low-dimension discrete dataset. Besides, previous work [1] has shown that, empirically, GANs lack diversity (low support size) rather than memorizing the training set.\", \"q2\": \"It is more interesting to see the difference between the distribution of real data and the generated data, However the author only show a simple toy data distribution comparison, I would like to see more comprehensive results about the distribution differences on real dataset , e.g. the TSNE embedding?\", \"a2\": \"Thank you for the suggestion. We agree that showing the distribution differences on a real dataset will make this work more comprehensive. We\u2019ve tried to visualize the distributions by PCA and T-SNE. However, for UCI datasets, we can't even observe clear clusterings for different classes of real data, so we used a well-trained fully-connected network with a 2-unit hidden layer before the output layer to project the generated samples to a 2-dimensional embedding space. We've added the result and discussion in Section 5.3.\", \"q3\": \"Equation (3) is not correct.\", \"a3\": \"Thank you for pointing out the typo. We\u2019ve corrected it in the revision.\", \"q4\": \"The author said that image quality of MNIST and CIFAR10 are not improved, then why the classification results are improved? there should have some differences existed among different compared methods, it would be more convincing if you can show it out.\", \"a4\": \"Thank you for the comment. We agree that there should be some reasons besides image quality that make the classification accuracy improve. We assume that generating more images with correct labels can help a classifier learn better. Unfortunately, we have not found a good way to show it clearly so far.\", \"q5\": \"What kind of generator do you use for the UCI data?
How do you settle the output problem? Since some of the data are continuous and some are discrete.\", \"a5\": \"Thank you for the question. In our experiment, discrete features are processed to one-hot encoding and continuous features are scaled to [0, 1]. Therefore the features can be generated by a logistic function.\\n\\n\\n\\n\\n[1] Sanjeev Arora, Andrej Risteski, Yi Zhang: Do GANs learn the distribution? Some Theory and Empirics. ICLR (Poster) 2018\"}", "{\"title\": \"Response to Reviewer #1\", \"comment\": \"Thank you for the detailed feedback and constructive advice. We\\u2019ve revised our paper according to the suggestions. Below we detail each comment individually.\", \"q1\": \"\\u201cQuantitative scores for quality of generated samples are not provided.\\u201d\", \"a1\": \"Thank you for the comment. We\\u2019ve added the Inception score and FID for CIFAR-10 in the captions of Figure 2 in our revision.\", \"q2\": \"\\u201cThe main metric used is the ratio between accuracies of classifiers (trained on real and generated data). It is hence difficult to tell if the classifiers that were used were trained reasonably and achieved reasonable scores.\\u201d\", \"a2\": \"We use the ratio instead of using the raw accuracy because it would be more reasonable to provide a summary score for different models. All classifiers are trained until the validation loss converged. We have also provided the testing accuracies for each experiment in Appendix for reference.\", \"q3\": \"\\u201cIn the abstract, authors claim that 'GANs often prefer generating easier synthetic data that are far from boundaries of the classifiers'. Although for some GAN settings the generators might be biased to do so, in general this claim is unfounded, as GANs optimize divergences that are agnostic to classifier boundaries.\\u201d\", \"a3\": \"Thank you for pointing out this issue. We agree that the claim is too arbitrary. 
We\\u2019ve modified the abstract to better describe the fact.\", \"q4\": \"\\u201cIt is unclear what kind of classifier-output is used as an input to MMD. Are these continuous logits, discrete class numbers, or one-hot-encoded class identities?\\u201d\", \"a4\": \"Thank you for the question. In practice we use softmax to obtain the posterior of classifiers. We\\u2019ve added some explanation in Section 4.1 to make it clearer.\", \"q5\": \"\\u201cAuthors use WGAN and MMDGAN with gradient penalty. It is unclear how gradient penalty is applied to MMDGAN as what should be penalized is the witness function, which is different than in WGAN-GP [4], see e.g. [3].\\u201d\", \"a5\": \"The usage of gradient penalty in MMDGAN has been described in [1] and [2]. We follow the implementation of [2], which is available at the Github repository [3].\", \"q6\": \"\\u201cIt is unclear how embeddings of class information are concatenated to discriminator inputs (p.5).\\u201d\", \"a6\": \"Thank you for pointing out the problem. The embeddings of class information are concatenated as additional features to discriminator inputs. We\\u2019ve added the explanation in Section 5.1 to describe it explicitly.\", \"q7\": \"\\u201cIt is unclear to what extent feature selection is deterministic. Authors argue in Section 5.4 that the intersection of top-k features selected from two models should be large. It would be good to provide the same statistics for features selected twice on the same sample.\\u201d\", \"a7\": \"Thank you for the suggestion. We\\u2019ve added a column in Table 3 and Table 4 to show the statistics for feature selected on the same sample with a different random seed.\", \"typos\": \"We appreciate the reviewer for reading the paper thoroughly and pointing out the typos. 
We\\u2019ve corrected them in the revision and checked the paper again.\\n\\n\\n\\n\\n[1] Chun-Liang Li, Wei-Cheng Chang, Yu Cheng, Yiming Yang, Barnab\\u00e1s P\\u00f3czos: MMD GAN: Towards Deeper Understanding of Moment Matching Network. NIPS 2017: 2203-2213\\n[2] Mikolaj Binkowski, Dougal J. Sutherland, Michael Arbel, Arthur Gretton: Demystifying MMD GANs. ICLR (Poster) 2018\\n[3]\\u00a0https://github.com/MichaelArbel/Scaled-MMD-GAN\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper aims at training a GAN that can generate data matches the real data distribution well especially at the boundaries of the classifiers. A Boundary-Calibration loss (BC-loss) base on multi pretrained classifiers is introduced to match the statistics between the distributions of original data and generated data. The motivation is interesting. The story is clearly explained. However, the experiments part is weak.\\n\\nThere are several typo and mistakes. The experiments only show that the proposed method got a good performance, but the analysis of the reason is not shown. The reason to name the loss as Boundary-Calibration loss (BC-loss) should be explained and the experiments should show some effect on the boundary areas. 
Some concerns are listed below.\\n\\n1.\\tIf the generated dataset exhibits the same good properties as the real dataset, it means the data is to some extent perfectly foreseen and there is little to no privacy; is this contrary to the aim of not leaking the real dataset?\\n2.\\tIt is more interesting to see the difference between the distribution of real data and the generated data. However, the authors only show a simple toy data distribution comparison; I would like to see more comprehensive results about the distribution differences on real datasets, e.g. a t-SNE embedding. \\n3.\\tEquation (3) is not correct.\\n4.\\tThe authors said that the image quality of MNIST and CIFAR10 is not improved, so why are the classification results improved? There should be some differences among the compared methods; it would be more convincing if you could show them.\\n5.\\tWhat kind of generator do you use for the UCI data? How do you settle the output problem, since some of the data are continuous and some are discrete?\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"In this work the authors consider the problem of 'model compatibility' of GANs, i.e. the usefulness of the generated samples for classification tasks. The proposed 'Boundary Calibration' GAN attempts to tackle this issue by adding non-adversarial terms to the discriminator, obtained as outputs of the classifiers trained on the original data. For evaluation, it is proposed to compare accuracies obtained by classifiers trained on generated and on real data (termed 'relative accuracy'). 
Experiments show that the proposed methods improve such scores.\", \"pros\": [\"the considered problem seems to be an important GAN application which has not yet received much attention.\", \"the proposed method seems to improve the accuracy of classifiers trained on generated data.\"], \"cons\": [\"the paper is poorly written, has multiple typos and often it is unclear what the authors mean.\", \"the proposed evaluation does not exactly measure the potential improvements from training classifiers with generated data. It would make sense to provide information on whether generated data can improve a classifier's scores if added to the training data (or a small part of it).\", \"it is unclear whether or not the proposed technique affects the sample quality, as no quantitative metrics are provided.\"], \"details\": \"1. Quantitative scores for the quality of generated samples are not provided. The authors instead provide a few samples and state that it is difficult to detect the difference. Although sample quality is not the main task here, it certainly is important - otherwise we could train generators solely against the classifier 'boundary calibration' loss terms - this, however, would likely lead to adversarial examples. Combining two losses often leads to trade-offs, hence showing that we can improve 'model compatibility' without the loss of sample quality is actually crucial. Metrics such as Inception score [1], FID [2] or KID [3] are desired, especially given the relatively poor quality of CIFAR and MNIST samples from all models.\\n2. The main metric used is the ratio between accuracies of classifiers (trained on real and generated data). It is hence difficult to tell if the classifiers that were used were trained reasonably and achieved reasonable scores.\\n3. In the abstract, authors claim that 'GANs often prefer generating easier synthetic data that are far from boundaries of the classifiers'. 
Although for some GAN settings the generators might be biased to do so, in general this claim is unfounded, as GANs optimize divergences that are agnostic to classifier boundaries.\\n4. It is unclear what kind of classifier-output is used as an input to MMD. Are these continuous logits, discrete class numbers, or one-hot-encoded class identities?\\n5. Authors use WGAN and MMDGAN with gradient penalty. It is unclear how gradient penalty is applied to MMDGAN as what should be penalized is the witness function, which is different than in WGAN-GP [4], see e.g. [3].\\n6. It is unclear how embeddings of class information are concatenated to discriminator inputs (p.5).\\n7. It is unclear to what extent feature selection is deterministic. Authors argue in Section 5.4 that the intersection of top-k features selected from two models should be large. It would be good to provide the same statistics for features selected twice on the same sample.\\n\\nOverall, the paper currently does not match the quality requirements of ICLR, however it has potential for improvement if the mentioned issues are addressed.\\n\\nTypos/unclear expressions:\\n[p1] 'may not willing' >> 'may not be willing'\\n[p1]'with the property similar to the original data is demanding' >> properties, demanded/in demand\\n[p2] 'Although GANs are versatile as aforementioned' - strange wording\\n[p2] 'The pioneered work' >> 'pioneering work'\\n[p2] 'information of models' >> 'information from the models'\\n[p2] effects >> affects\\n[p3] related works >> related work\\n[p3] distribution of label >> distribution of labels\\n[p3] 'generated dataset adopt ' >> 'generated dataset will adopt '\\n[p3] 'To known about the boundary' >> ' To know the boundary'/'To include the information about the boundary'\\n[p3] 'the a distance'\\n[p3] 'the problem to distinguish whether two sets of samples' \\n[p4] 'If they are close the sets might be sampled from the same distribution' (?)\\n[p4] tries to minimized the MMD\\n[p4] would 
not leads to\\n[p6, Table 2 caption] number of estimator used\\n[p6] at Appendix\\n[p6] depresses the model compatibility (?)\\n[p6] can providing\\n[p7] to known how\\n[p8] our work open >> our work will open\\n\\n\\n------------------------------------- Revision -------------------------------------\\n\\nAlthough some issues seem to have been clarified, two of my main concerns, i.e. the proposed evaluation and the text quality, have not been resolved. Additionally, as pointed out by reviewer #4, the results seem somewhat incremental. For these reasons, I decided to keep the rating unchanged.\"}" ] }
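The BC-loss that this record's reviews discuss matches the classifier-posterior distributions of real and generated data with an unbiased MMD estimate. Below is a minimal NumPy sketch of that idea; the RBF kernel, the bandwidth `gamma`, and all function names are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def softmax(logits):
    # Classifier posteriors from raw logits, numerically stabilized.
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def mmd2_unbiased(X, Y, gamma=1.0):
    """Unbiased estimator of squared MMD with the RBF kernel
    k(x, y) = exp(-gamma * ||x - y||^2); diagonal terms are excluded
    from the within-sample averages."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    m, n = len(X), len(Y)
    Kxx, Kyy, Kxy = k(X, X), k(Y, Y), k(X, Y)
    return ((Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))
            + (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))
            - 2.0 * Kxy.mean())

def bc_loss(logits_real, logits_fake, gamma=1.0):
    # Match the posterior distributions of real vs. generated samples.
    return mmd2_unbiased(softmax(logits_real), softmax(logits_fake), gamma)
```

In a GAN, a term like `bc_loss` would be added to the generator objective with some weight against the adversarial loss; posteriors from identically distributed logits give an estimate near zero, while a shifted class logit yields a clearly positive value.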
SJx0q1rtvS
Robust anomaly detection and backdoor attack detection via differential privacy
[ "Min Du", "Ruoxi Jia", "Dawn Song" ]
Outlier detection and novelty detection are two important topics for anomaly detection. Suppose the majority of a dataset are drawn from a certain distribution, outlier detection and novelty detection both aim to detect data samples that do not fit the distribution. Outliers refer to data samples within this dataset, while novelties refer to new samples. In the meantime, backdoor poisoning attacks for machine learning models are achieved through injecting poisoning samples into the training dataset, which could be regarded as “outliers” that are intentionally added by attackers. Differential privacy has been proposed to avoid leaking any individual’s information, when aggregated analysis is performed on a given dataset. It is typically achieved by adding random noise, either directly to the input dataset, or to intermediate results of the aggregation mechanism. In this paper, we demonstrate that applying differential privacy could improve the utility of outlier detection and novelty detection, with an extension to detect poisoning samples in backdoor attacks. We first present a theoretical analysis on how differential privacy helps with the detection, and then conduct extensive experiments to validate the effectiveness of differential privacy in improving outlier detection, novelty detection, and backdoor attack detection.
[ "outlier detection", "novelty detection", "backdoor attack detection", "system log anomaly detection", "differential privacy" ]
Accept (Poster)
https://openreview.net/pdf?id=SJx0q1rtvS
https://openreview.net/forum?id=SJx0q1rtvS
ICLR.cc/2020/Conference
2020
{ "note_id": [ "Q1ImzIjMYQ", "HygQfGu2jH", "H1xu6x-Oor", "Bke8IlWuoB", "H1gknyb_or", "SJlQuoeTcB", "Syxdja9Mqr", "S1gm7kORtB" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798735364, 1573843466925, 1573552319923, 1573552206023, 1573552038527, 1572830058650, 1572150687635, 1571876635063 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1898/Authors" ], [ "ICLR.cc/2020/Conference/Paper1898/Authors" ], [ "ICLR.cc/2020/Conference/Paper1898/Authors" ], [ "ICLR.cc/2020/Conference/Paper1898/Authors" ], [ "ICLR.cc/2020/Conference/Paper1898/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper1898/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper1898/AnonReviewer3" ] ], "structured_content_str": [ "{\"decision\": \"Accept (Poster)\", \"comment\": \"Thanks for the submission. This paper leverages the stability of differential privacy for the problems of anomaly and backdoor attack detection. The reviewers agree that this application of differential privacy is novel. The theory of the paper appears to be a bit weak (with very strong assumptions on the private learner), although it reflects the basic underlying idea of the detection technique. The paper also provides some empirical evaluation of the technique.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Response to Reviewer #2 (part 2)\", \"comment\": \"Q: When the training set contains anomalies, this work can be viewed as \\u201cwhat is the impact of differential privacy\\u201d on a training sets with a majority group (training examples from a given distribution) and a minority group (training examples from a different distribution). Under this view, this paper essentially says that \\u201cdifferential privacy leads to disparate impact on model accuracy/loss\\u201d. 
This has been recently investigated in the following NeurIPS19 paper: https://arxiv.org/abs/1905.12101. Thus the contributions of the paper are not substantial.\", \"a\": \"Thank you for pointing out the reference! It is indeed very relevant and discovered the same intrinsic phenomenon that we did. In our revised paper, we have cited this paper and discussed more in the last paragraph of Section 5 Related work. In short, while the related paper explains the phenomenon in a more generic way, our work includes a theoretical justification, which, for the first time, precisely characterizes the dependence of the performance gap between the majority and the minority group on the privacy parameters. Our theory is further backed up by extensive experiments in anomaly detection, novelty detection and backdoor attack detection; notably, our proposed method has greatly improved the state-of-the-art system log anomaly detection performance, which is itself a significant contribution to the computer security area. By contrast, the reference mainly considered the implication of differential privacy for the fairness of machine learning models.\"}", "{\"title\": \"Response to Reviewer #3\", \"comment\": \"Response to R3:\\n\\nThank you for the valuable comments!\", \"q\": \"I\u2019m not an expert in differential privacy. But as far as I\u2019m concerned, a typical downside is that the false positive rate will increase and there is no theoretical guarantee that the increase of the false-positive rate will be negligible compared with the increase of the true positive rate.\\nIts effectiveness in detecting backdoor attacks seems elusive. As we know, backdoor attacks exist when users want to outsource the task of training the network to a third party, which may potentially be an attacker. Therefore, the training process is out of the detector's control. 
However, the paper proposes to use differential privacy to the model training process, which is not in the settings of a backdoor attack.\", \"a\": \"First of all, we want to note that although false positives may slightly increase with DP noise added, false negatives decrease significantly, and the overall F-measure also increases a lot. As shown in Figure 2(b), the best F-measure of DeepLog with DP (96.29%) is well above the best F-measure of DeepLog without DP (93.66%). With a reasonable noise scale (not too large), DeepLog with DP almost always outperforms the state-of-the-art real-world system log anomaly detection model in the computer security domain in terms of various metrics that take into account both false negatives and false positives. Also, as indicated in the first paragraph of page 7, reducing false negatives could be more important because false positives can be further checked by a system admin, while false negatives may never be discovered until a disastrous event occurs.\\n\\nSecondly, the backdoor attack scenario mentioned by the reviewer is indeed a common one. Another common scenario for backdoor attacks is crowdsourcing, where the model trainer gathers training data from untrusted individuals. In this case the model trainer does not have control over the data quality but does have control over the model training process. Our proposal of adding DP noise is useful for detecting backdoor attacks and training more robust models in such a scenario. We have clarified the use case of our method in the first paragraph of Section 4.3.\\n\\nWe would appreciate it if the reviewer has further comments!\"}
There is a bit disconnection between the two parts.\", \"a\": \"We agree with the reviewer that our bound is not tight; however, our theory is still *useful* as it can be used to explain various trends in our experiments. For example, as shown in Table 1, the more outliers in the training dataset, the higher the noise scale required to achieve the best anomaly detection performance. This scenario can be explained by Theorem 2. As the second paragraph on page 4 explains, our theory shows that the privacy parameters cannot be too large or too small to ensure optimal anomaly detection performance, which coincides with the experimental results in Table 1. We have revised the paper to clarify the correlation between the implications of our theory and experimental findings, which can be found in the last paragraph of Section 4.1.\"}
From Tables 2 & 3, we can observe that the best model for anomaly detection could have a similar set of parameters to the best model for image classification. However, in general, classification accuracy and robustness are two conflicting desiderata; model trainers can tune the privacy parameter in order to meet the task-specific requirements for accuracy and robustness. We have added a discussion of the results in the last paragraph of Section 4.3.\"}
While there has been some recent work on connecting differential privacy to robustness & attacks, this paper investigates the use of differentially private model training as a means to improve novelty detection at inference time.\", \"a_few_points_that_need_attention_from_the_authors\": \"1. The theory developed is insightful in general but has very little (to no) practical value. For starters, it assumes that differentially private model training is uniformly asymptotic to empirical risk minimization. This is not necessarily true for highly non-convex models trained with SGD. Further, it cannot be verified via experimentation (despite the authors\\u2019 attempt to sanity check it using Figure 1). More importantly, the theory developed in Section 3 is not used in any meaningful way in the experiments section \\u2014 the anomaly detection schemes are agnostic to it. \\n2. The authors make no attempt to co-optimize the performance of the model with its ability to be used for better anomaly detection. For instance, the authors choose an l2-clipping-norm C of 1 and do not consider trading off C with the noise variance. \\n\\nWhen the training set contains anomalies, this work can be viewed as \\u201cwhat is the impact of differential privacy\\u201d on training sets with a majority group (training examples from a given distribution) and a minority group (training examples from a different distribution). Under this view, this paper essentially says that \\u201cdifferential privacy leads to disparate impact on model accuracy/loss\\u201d. This has been recently investigated in the following NeurIPS19 paper: https://arxiv.org/abs/1905.12101. 
Thus the contributions of the paper are not substantial.\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper proposes the idea of using differential privacy (DP) to improve the performance of outlier and novelty detection. Differential privacy was proposed as a privacy metric which limits the contribution of a single data point in the training set to the output. This property naturally controls how poisoned data would affect the output of the learned model. Under the assumption that a well-trained model would incur a higher loss on the outliers, the paper gives a theoretic bound on how this loss will decrease if there are poisoned samples in the training set.\\n\\nThe paper also performs several experiments on synthetic and real-world datasets. The paper shows that add differential privacy during training can improve the performance of autoencoder-based outlier detection on MNIST data. For real-world data, the paper improves the performance of anomaly detection on the HDFS dataset over the state-of-the-art algorithm. The paper also shows empirically how DP can help improve backdoor attack detection. \\n\\nThe paper is overall nicely written with some nice results. The paper could be improved if the following confusions can be resolved.\\n\\n1. Novelty detection is generally referred to as detecting samples in the test set that are not in the distribution of the training set. In the theory part, the analysis is mostly based on data poisoning, which is not typical in the novelty detection setting. I hope this can be clarified.\\n2. In the experiment part, the paper uses Figure 1 to show how UAERM is satisfied. I find this a bit confusing. 
In definition 4, the h^* is referred to as the global minimizer while in the experiment, the empirical minimizer is used.\\n3. Theorem 2 presents some theoretical bound to show the power of DP on improving outlier detection, however, in the parameter setting used in the experiment, Theorem 2 does not provide meaningful bounds. There is a bit disconnection between the two parts.\\n\\nBased on the above comments, I think the paper can be accepted if there is room for it. But I won't push it for acceptance.\"}", "{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"Interesting topic but lacks of novelty\\n#Summary:\\nThe paper proposes that by applying differential privacy, the performance on outlier and novelty detection can be improved. It first presents a theoretical analysis, which establishes a lower bound on the prediction performance difference between normal and outlier data. By adding noise into the training process, the outliers in the dataset will be hidden by the noise, which will result in a model that utilizes the normal data. In this way, when deploying the model, the model will find the outlier by observing low confidence.\\n\\n#Strength\\nIt is good to see that the paper builds a connection between the privacy parameter and the noise level and the experiments make the theory valid.\\n\\n#Weakness\\nI\\u2019m not an expert in differential privacy. But as far as I\\u2019m concerned, a typical downside is that the false positive rate will increase and there is no theoretical guarantee that the increase of false-positive rate will be negligible compared with the increase of true positive rate.\\nIts effectiveness in detecting backdoor attacks seems elusive. 
As we know, the backdoor attacks exist when users want to outsource the task of training the network to a third-party, which may potentially be an attack. Therefore, the training process is out-of-control to the detector. However, the paper proposes to use differential privacy to the model training process, which is not in the settings of a backdoor attack.\"}" ] }
HkxCcJHtPr
CAT: Compression-Aware Training for bandwidth reduction
[ "Chaim Baskin", "Brian Chmiel", "Evgenii Zheltonozhskii", "Ron Banner", "Alex M. Bronstein", "Avi Mendelson" ]
Convolutional neural networks (CNNs) have become the dominant neural network architecture for solving visual processing tasks. One of the major obstacles hindering the ubiquitous use of CNNs for inference is their relatively high memory bandwidth requirements, which can be a main energy consumer and throughput bottleneck in hardware accelerators. Accordingly, an efficient feature map compression method can result in substantial performance gains. Inspired by quantization-aware training approaches, we propose a compression-aware training (CAT) method that involves training the model in a way that allows better compression of feature maps during inference. Our method trains the model to achieve low-entropy feature maps, which enables efficient compression at inference time using classical transform coding methods. CAT significantly improves the state-of-the-art results reported for quantization. For example, on ResNet-34 we achieve 73.1% accuracy (0.2% degradation from the baseline) with an average representation of only 1.79 bits per value. Reference implementation accompanies the paper.
[ "compression", "quantization", "efficient inference", "memory bandwidth", "entropy", "compression-aware training", "Huffman", "variable length coding" ]
Reject
https://openreview.net/pdf?id=HkxCcJHtPr
https://openreview.net/forum?id=HkxCcJHtPr
ICLR.cc/2020/Conference
2020
{ "note_id": [ "6CYya7dyCj", "BJgsBhO8oB", "B1x25mjeiS", "rJxtzQieiB", "HkgO4zslsS", "rJgdsSVysS", "BklsH3PCcr", "rklhGYNC5B", "rJghghXPtr" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798735331, 1573452866815, 1573069716237, 1573069584773, 1573069360021, 1572976031806, 1572924483439, 1572911380146, 1571400691803 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1897/AnonReviewer4" ], [ "ICLR.cc/2020/Conference/Paper1897/Authors" ], [ "ICLR.cc/2020/Conference/Paper1897/Authors" ], [ "ICLR.cc/2020/Conference/Paper1897/Authors" ], [ "ICLR.cc/2020/Conference/Paper1897/Authors" ], [ "ICLR.cc/2020/Conference/Paper1897/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper1897/AnonReviewer4" ], [ "ICLR.cc/2020/Conference/Paper1897/AnonReviewer1" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"This work propose a compression-aware training (CAT) method to allows efficient compression of feature maps during inference. I read the paper myself. The proposed method is quite straightforward and looks incremental compared with existing approaches based on entropy regularization.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Thank You for Your Response\", \"comment\": \"I appreciate the quick response, and I find them reasonable enough. I will change the score.\"}", "{\"title\": \"Answer to reviewer #1\", \"comment\": \"Thank you for your review and your comments. 
We would like to provide an answer to the issues raised in your review.\", \"q\": \"The idea of regularizing by the entropy is not novel (see for instance \\\"Entropic Regularization\\\", Grandvalet et al.), as well as the idea of further encoding the weights using entropic coders (as in \\\"Deep Compression\\\", Han et al.).\", \"a\": \"The idea of using entropy regularization, in general, is indeed not novel and we mention relevant works in the Related Work section. However, using a differentiable entropy approximation as a loss term to improve the compressibility of the activations is novel.\"}", "{\"title\": \"Answer to reviewer #4\", \"comment\": \"Once again, thank you for your review and your comments. We will answer the questions in the review.\", \"q\": \"How about adding the regularization to weights?\", \"a\": \"Our method could also be applied to the weights. In this paper, we focus on activations, since the benefit of the lossless compression activation is more significant due to the fact they are usually responsible for most of the memory accesses.\\nIn addition, the paper by Aytekin et al. proposed compressibility loss and has successfully applied it to the weights.\"}", "{\"title\": \"Answer to reviewer #3\", \"comment\": \"Thank you for your review and your comments. We fixed the wrong reference format and switched the equations as proposed. We also would like to provide an answer to the issues raised in your review.\", \"q\": \"what is the impact on accuracy if only part of the batch is considered.\", \"a\": \"Since there is only minor difference between the two methods (soft entropy based only on part of the tensor and compressibility loss which is calculated on the whole tensor), we believe there is no significant impact of using only part of the batch. Since the tensors are large, the amount of values used for approximation is still relatively big. 
To check this assumption, we ran a soft entropy evaluation on a single tensor and compared it to real values, and added the results to Appendix B.\"}", "{\"title\": \"Format fix\", \"comment\": \"Thank you for your review!\\n\\nAs proposed, we have updated the pdf with the right format. For some reason, one of TeX packages interfered with it. We are going to address other points of your review later.\"}", "{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #3\", \"review\": \"In this article, the authors propose a compression-aware method to achieve efficient compression of feature maps during the inference procedure. When in the training step, the authors introduce a regularization term to the target loss that optimizes the entropy of the activation values. At inference time, an entropy encoding method is applied to compress the activations before writing them into memory. The experimental results indicate that the proposed method achieves better compression than other methods while holding accuracy.\", \"there_are_still_some_issues_as_follows\": \"1.\\tThe authors should carefully check the format of the references in the whole article. For example, in section 2, line 5 from the top and line 8 from the bottom, \\u201cXiao et al. (2017), Xing et al. 
(2019)\\u201d and \\u201c(Chmiel et al., 2019)\\u201d are in the wrong format.\\n2.\\tIt is suggested that the authors swap the order of formulation (8) and (9) in section 3.2 so that it will be a good correlation with the formulation (3) and (4).\\n3.\\tI am interested in learning the time taken by the proposed method during the inference procedure vs other related methods.\\n4.\\tThe authors studied two differentiable entropy approximation in the paper, and they stated that they calculate soft entropy only on the part of the batch for the reduction of both memory requirements and time complexity in training. I hope the authors will clarify 1) Whether the accuracy will be affected by other differentiable entropy approximations; 2) what is the impact on accuracy if only part of the batch is considered.\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #4\", \"review\": \"The format of the paper does not meet the requirement of ICLR. Due to this, I will give a 3. I suggest the authors to change it as soon as possible.\\n\\nBesides that, the main idea of the paper is to regularize the training of a neural network to reduce the entropy of its activations. There are extensive experiments in the paper.\\n\\nThe paper introduce two kinds of method to regularize the entropy. The first method is a soft version of the original entropy, and the second is the compressibility loss. After adding the regularization, the performance drop of the compressed network is reduced. The experiment performance is promising.\", \"i_think_the_method_is_straightforward_and_reasonable_with_only_a_few_questions\": \"1. Why do you quantize the weight? Seems it's not necessary because the paper only address activation quantization.\\n2. 
What will happen if the weights are quantized to lower bits? For example, 4 bits?\\n3. How about adding the regularization to weights?\"}", "{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #1\", \"review\": \"Summary:\\nThe authors propose a method for training easy-to-quantize models that are quantized after training (post-training quantization). They do so by regularizing by the entropy, thereby forcing the weight distribution to be more compressible. They further compress the weights using entropy coding.\", \"strengths_of_the_paper\": [\"The paper presents strong experimental results on ResNet, SqueezeNet, VGG and MobileNet architectures and provides the code, which looks sensible.\"], \"weaknesses_of_the_paper\": [\"The authors could have applied CAT to other tasks such as Image Detection, while providing inference times on CPUs. Indeed, it is unclear to me what would be the influence of the entropic decoder which is claimed to be fast for \\\"efficient implementations\\\" by the authors.\", \"The idea of regularizing by the entropy is not novel (see for instance \\\"Entropic Regularization\\\", Grandvalet et al.), as well as the idea of further encoding the weights using entropic coders (as in \\\"Deep Compression\\\", Han et al.).\"], \"justification_of_rating\": \"The authors present an intuitive method (yet not novel) for quantizing the weights of a neural network. My main concern would be about the inference time but I consider that the experimental results suggest strong evidence that CAT performs well on a wide variety of architectures.\"}"] }
Ske6qJSKPH
Scheduling the Learning Rate Via Hypergradients: New Insights and a New Algorithm
[ "Michele Donini", "Luca Franceschi", "Orchid Majumder", "Massimiliano Pontil", "Paolo Frasconi" ]
We study the problem of fitting task-specific learning rate schedules from the perspective of hyperparameter optimization. This allows us to explicitly search for schedules that achieve good generalization. We describe the structure of the gradient of a validation error w.r.t. the learning rates, the hypergradient, and based on this we introduce a novel online algorithm. Our method adaptively interpolates between two recently proposed techniques (Franceschi et al., 2017; Baydin et al., 2018), featuring increased stability and faster convergence. We show empirically that the proposed technique compares favorably with baselines and related methods in terms of final test accuracy.
[ "automl", "hyperparameter optimization", "learning rate", "deep learning" ]
Reject
https://openreview.net/pdf?id=Ske6qJSKPH
https://openreview.net/forum?id=Ske6qJSKPH
ICLR.cc/2020/Conference
2020
{ "note_id": [ "rGUkXMM9-K", "rkeIq2w3oS", "rJlwitIoiS", "HyxPrb8ijS", "rklSLolsjS", "rkxda68for", "ryxSgYLMsr", "HyxaEj1zqB", "SkgnLRm0tB" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review" ], "note_created": [ 1576798735299, 1573842061993, 1573771678554, 1573769534752, 1573747533071, 1573182911915, 1573181677365, 1572105013110, 1571860052470 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1896/Authors" ], [ "ICLR.cc/2020/Conference/Paper1896/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper1896/Authors" ], [ "ICLR.cc/2020/Conference/Paper1896/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper1896/Authors" ], [ "ICLR.cc/2020/Conference/Paper1896/Authors" ], [ "ICLR.cc/2020/Conference/Paper1896/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper1896/AnonReviewer1" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"First, I'd like to apologize once again for failing to secure a third reviewer for this paper. To compensate, I checked the paper more thoroughly than standard.\\n\\nThe area of online adaptation of the learning rate is of great importance and I appreciate the authors' effort in that direction. The authors carefully abundantly cite the research on gradient-based hyperparameter optimization but I would have appreciated to also see past works on stochastic line search (for instance \\\"A stochastic line-search method with convergence rate\\\") or statistical methods (\\\"Using Statistics to Automate Stochastic Optimization\\\").\\n\\nThe issue with these methods is that, despite usually very positive claims in the paper, they are not that competitive against a carefully tuned fixed schedule and end up not being used in practice. Hence, it is critical to develop a convincing experimental section to assuage doubts. 
Unfortunately, the experimental section of this work is a bit lacking, as pointed out by both reviewers. I would like to comment on two points specifically:\n- First, no plot uses wall-clock time as the x-axis. Since the authors state that it can be up to 4 times as slow per iteration, the gains compared to a carefully tuned schedule are unclear.\n- Second, the use of a single (albeit two variants) dataset also leads to skepticism. Datasets have vastly different optimization properties and, by not using a wide range of them, one can miss the true sensitivity of the proposed algorithm.\n\nWhile I do not think that the paper is ready for publication, I feel like there is a clear path to an improved version that could be submitted to a later conference.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Uploaded new revision\", \"comment\": \"Dear all,\", \"we_uploaded_a_revised_version_of_the_paper_which_includes\": \"1) a new experiment in Appendix D that shows the performance of MARTHE on CIFAR-100 with the initial learning rate set to 0 ($\\\\eta_0 = 0$). MARTHE produces a competitive schedule even in this disadvantaged setting, empirically showing that the method is not particularly sensitive to the initial learning rate either (we remind the reader that we provide a rule for computing online the damping factor $\\\\mu$ and we find $\\\\beta$ quickly as the first value that does not lead to divergence).\n2) runs of SGDR with the last restart at epoch 200 in Figures 7 and 8, as asked by R1.\n3) a table of notation with descriptions and examples where appropriate, in Appendix A. 
We hope this will improve the readability of the paper, answering some concerns expressed by R2.\n4) a modified description of experiments in Appendix E to better underline that they refer to the `inadaptive` version of MARTHE.\n5) a comment on the runtime and space complexity of MARTHE (as requested by R2).\n6) links to the appendix in the main text and corrections of minor typos.\n\nWe hope that these additions and modifications will strengthen the submission and address the reviewers' concerns.\n\nWe believe that this work makes a step forward (yet, certainly not the last) in the old but crucial topic of finding good learning rates, providing novel insights and a unified view of two previously proposed gradient-based adaptive algorithms. We developed a new method with the purpose of taking ``the best of both worlds'', providing a mathematical exposition of its internal functioning. In the experimental validation, we offered comparisons with two previous algorithms and highly performing non-adaptive baselines that arose from multiple years of experience with vision datasets such as CIFAR-10 and 100.\n\nWe thank the reviewers and the area chair for their time and valuable suggestions,\n\nSincerely,\nThe authors\"}
In fact, it is a parameter choice - when you don't perform restarts you end up with cosine annealing.\\n\\n>> We would like to remark that this is indeed one of the advantages of using MARTHE, that is, it does not require the calibration of multiple configuration parameters.\\n\\nIt requires learning rate and MARTHE's hyperparameter to be calibrated in contrast to calibrating learning rate of cosine annealing.\"}", "{\"title\": \"Comments\", \"comment\": \"Thank you for your comments. We reply below:\\n\\n>> It is 0.05 for all experiments with SGDR in [1]. The origin of 3e-4 for Adam is not clear to me.\\nAs you stated in your previous answer, the commonly used initial learning rate values for SGDM with resnet on CIFAR are both 0.1 and 0.05. We picked 0.1, and we run all the experiments and comparisons keeping this value fixed. \\nConcerning Adam, the commonly used range is between 10-3 and 10-4. It is well known that Adam does not perform well with an initial learning rate of 0.1.\\n\\n>> They yield comparable results, see Table 1 in [1]. \\nAccording to the results in [1], cosine annealing underperforms on CIFAR100 (compared to SGDR), and it yields equivalent performance on CIFAR10. To keep the number of experiments at a reasonable number we decided to use SGDR as it is a newer LR scheduler compared to cosine annealing.\\n\\n>> Why you didn't update Figure 1 with comparable SGDR and cosine annealing? \\nWe do not understand this comment. Figure 1 shows the pitfalls of HD and RTHO on two synthetic test functions from the optimization literature. 
We remark that the aim of that section is to highlight the behavior of two previously proposed gradient-based algorithms which MARTHE generalizes.\\n\\n>> you could use t0=13 and t_mul=2 to have restarts after 13, 39, 91 and 195 epochs to avoid rounding issues.\\nFollowing your earlier suggestion, we chose a schedule to terminate the last restart exactly at the last epoch (after rounding our restarts are at 10, 33, 84, 200). For sure, there are several possible choices for t0 and t_mul to obtain this behavior of the learning rate schedule. We would like to remark that this is indeed one of the advantages of using MARTHE, that is, it does not require the calibration of multiple configuration parameters.\"}", "{\"title\": \"Re\", \"comment\": \">> and applying the suggested initial learning rate of 0.1 (see Section C).\\n\\nIt is 0.05 for all experiments with SGDR in [1]. The origin of 3e-4 for Adam is not clear to me.\\n\\n>> 2) We compared with SGDR that yields systematically better results than cosine annealing [1] at the same computational cost.\\n\\nThey yield comparable results, see Table 1 in [1]. \\n\\n>> For example, in the case of CIFAR10, our reported result has a best accuracy for SGDR of 92.54% vs. 92.36% using t0=10 and t_mul=2.264 (which reach the last convergence at epoch 200).\\n\\nWhy you didn't update Figure 1 with comparable SGDR and cosine annealing? \\n\\n>> t0=10 and t_mul=2.264\\nyou could use t0=13 and t_mul=2 to have restarts after 13, 39, 91 and 195 epochs to avoid rounding issues.\"}", "{\"title\": \"Answer to Reviewer 1\", \"comment\": \"In the following we try to address the reviewer\\u2019s concerns point by point.\\n1) The learning rates are indeed different because the (inner) optimization methods are different: SGDM on CIFAR10 and Adam on CIFAR100; please see 2nd and 3rd paragraph of Section 6. As it is well known SGDM and Adam have different ranges. Our aim was to showcase the behaviour of MARTHE with different optimization methods. 
For the sake of completeness, we added in the supplementary material of the updated version of the paper the results of the same experiments using SGDM for CIFAR100, and applying the suggested initial learning rate of 0.1 (see Section C).\\n2) We compared with SGDR that yields systematically better results than cosine annealing [1] at the same computational cost.\\n3) For CIFAR10 we used the best found hyperparameters by the authors [1]. We however tried to repeat the SGDR experiments obtaining the last convergence at epoch 200 which didn\\u2019t result in any statistical improvement in the best accuracy reached. For example, in the case of CIFAR10, our reported result has a best accuracy for SGDR of 92.54% vs. 92.36% using t0=10 and t_mul=2.264 (which reach the last convergence at epoch 200).\\n4) Please note that MARTHE has only one effective configuration parameter (beta) which is the step-size to adapt the hyper-learning rate. As we show in Section 5 and also empirically in Section 6, this parameter is quite easy to set: the method diverges very quickly for higher values of beta and when it starts converging it has consistent and stable results (see second last paragraph of Section 5). In fact, we propose at the end of Section 4 a very simple methodology to set this configuration parameter which we use in the experiments in Sec. 6. \\n\\nRegarding the results with previous adaptive approaches, to the best of our knowledge we are not aware of experiments with RTHO on CIFAR datasets. 
The results reported in [2], instead, are obtained with a slightly different architecture (VGG16 while we use a VGG11) and the statistics reported in [2] are different (they report validation loss while we report accuracy).\\n\\nAs a final remark let us stress upon the fact that the main contribution of this paper is to better understand existing methods (HD and RTHO), to show that they have limitations and can fail in some cases (Sec 3 & Figure 1) and to demonstrate that they can be generalized using a single algorithm (Sec 4) which can interpolate between them to mitigate some of their limitations. The goal is not to win yet another performance battle but rather to improve the understanding on the topic, which is an important one in the context of training deep neural networks. \\n\\n[1] Loshchilov, Ilya, and Frank Hutter. \\\"Sgdr: Stochastic gradient descent with warm restarts.\\\" arXiv preprint arXiv:1608.03983 (2016).\\n[2] Baydin, Atilim Gunes, Robert Cornish, David Martinez Rubio, Mark Schmidt, and Frank Wood. \\\"Online learning rate adaptation with hypergradient descent.\\\" arXiv preprint arXiv:1703.04782 (2017).\"}", "{\"title\": \"Answer to Reviewer 2\", \"comment\": \"We thank the reviewer for the helpful feedback. We answer below to the concerns raised.\\n\\n1) Yes, you are correct. In Figure 2 (right) when no point is reported it means that the achieved average accuracy for that configuration falls below 88% or that the method diverged. We changed the caption to make this point clearer. Please note that the sensitivity of beta is different among the different methods since for MARTHE it controls the hyper-learning rate updates while for the others (HD, RTHO) is the hyper-learning rate itself. We have found empirically that the parameter beta for MARTHE is quite easy to set since the method either diverges very quickly for higher than appropriate values of beta or \\u2014 when it starts converging \\u2014 it has consistent and stable results. 
See Fig. 2 (right) and Sec. 5 and 6. In fact, we propose at the end of Sec. 4 a very simple methodology to set this configuration parameter which we use in the experiments in Sec. 6. \\n2) In general, we agree with your comment. In this case, however, we decided to use the abundance of previous successful experimental results on the CIFAR10 and 100 datasets and thus used established settings from literature, e.g. [1,2,3]. \\n3) We think that the initial increase of the LR brings the weights to a good initial point where, even with a quick exponential decrease, the method leads to very good results. This is somewhat in line with the intuition behind the super convergence effect on neural networks [4].\\n4) Yes. The computation of the variables Z is structurally identical to the tangent propagation of forward mode algorithmic differentiation. This means that theoretically the runtime complexity is up to 4 times that of the underlying optimization iteration and the memory requirement up to 2 times. We added a comment on this at the end of section 4. However, with our implementation (in PyTorch), we have noticed the running time to be roughly 5X slower for VGG and also observed that the slowness increases with the depth of the network (e.g. it is ~8X slower for ResNet). Due to this computational bottleneck, we have not yet been able to train on ImageNet dataset within a reasonable time-limit. \\n\\nRegarding the clarity of the paper, we would be very happy to improve the readability of our work in the final version. We kindly ask the reviewer to point us which sections/paragraphs or choices of notation need to be revised.\\n\\n[1] Huang, G., Liu, Z., Van Der Maaten, L., & Weinberger, K. Q. (2017). Densely connected convolutional networks. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 4700-4708).\\n[2] Zagoruyko, S., & Komodakis, N. (2016). Wide residual networks. arXiv preprint arXiv:1605.07146.\\n[3] Loshchilov, I., & Hutter, F. 
(2018). Decoupled weight decay regularization.\\n[4] Smith, Leslie N., and Nicholay Topin. \\\"Super-Convergence: Very Fast Training of Neural Networks Using Large Learning Rates.\\\" arXiv preprint arXiv:1708.07120 (2017).\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"In this paper, the authors introduce a hypergradient optimization algorithm for finding learning rate schedules that maximize test set accuracy. The proposed algorithm adaptively interpolates between two recently proposed hyperparameter optimization algorithms and performs comparably in terms of convergence and generalization with these baselines.\\n\\nOverall the paper is interesting, although I found it a bit dense and hard to read. I frequently found myself having to scroll to different parts of the paper to remind myself of the notation used and the definition of the different matrices. This makes it harder to evaluate the paper properly. The proposed algorithm seems interesting however, and the experimental results look quite impressive.\\n\\nI have a few concerns regarding the experiments however, which explains my score:\\n\\n1. In figure 2, does MARTHE diverge for values of beta greater than 1e-4? This seems to indicate that MARTHE is somehow more sensitive to beta than the other variations used. Do the authors have any intuition about what might be causing this behavior?\\n\\n2. The initial learning rate for SGDM and Adam was fixed at certain values for all experiments. Why is this a reasonable thing to do? It feels like MARTHE should be compared to SGDM and Adam at least when the initial learning rate is tuned for these properly. Otherwise, it doesn't feel like a fair evaluation? 
To the best of my knowledge, the final accuracies achieved with MARTHE, however, seem quite competitive with the best results typically reached with tuned SGDM on the convolutional nets used in the paper.\n\n3. The learning rate schedules found by MARTHE seem to be somewhat counterintuitive. While an initial increase matches the heuristic of warmup learning rates frequently used when training convnets, the algorithm seems to decrease the learning rate after that even quicker than the greedy algorithm HD does. Do the authors have any intuition why this can lead to such a big improvement in performance over HD?\n\n4. Is it possible to provide some sort of estimate of how much computation MARTHE requires compared to a single SGDM run? How feasible is it to test this algorithm on a bigger classification model on ImageNet?\n\nI think this paper is borderline, although I am leaning towards accepting it given the impressive empirical results. It would really improve the paper if the readability was improved, as well as if larger experimental results were included.\n\n====================================\", \"edit_after_rebuttal\": \"I thank the authors for their response. I am happy with their response and am sticking to my score.\"}", "{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The results are given only for the CIFAR datasets. Even for these two datasets the authors use very outdated networks, e.g., the best error rate for CIFAR-10 is on the order of 6 percent. One should use contemporary/bigger networks, e.g., WRNs published in 2016 would give you about 4 percent.\n1) The initial learning rates for CIFAR-10 and CIFAR-100 are different, respectively 0.1 and 0.0003. 
The use of such a small initial learning rate for CIFAR-100 is not motivated, especially given that it is usually on the order of 0.05 or 0.1 when ResNets are considered. \n2) The authors don't compare to cosine annealing without restarts, which is a pretty strong baseline. \n3) The authors compare to SGDR but don't set its initial number of epochs in a way that its last restart converges at around 200 epochs. \n4) The proposed method has its own hyperparameters which greatly influence the results as shown in the appendix. I suspect that setting these hyperparameters is exactly what controls the slope of the learning schedule. \n\nOverall, the results are not convincing. The authors show that the previous adaptive approaches don't work well on the CIFAR datasets (despite the fact that their authors claimed the opposite) and I don't think that the paper contains enough material to avoid the situation that future approaches will claim similar things about the current study.\"}" ] }
BkxackSKvH
Learning Entailment-Based Sentence Embeddings from Natural Language Inference
[ "Rabeeh Karimi Mahabadi*", "Florian Mai*", "James Henderson" ]
Large datasets on natural language inference are a potentially valuable resource for inducing semantic representations of natural language sentences. But in many such models the embeddings computed by the sentence encoder go through an MLP-based interaction layer before the label is predicted, and thus some of the information about textual entailment is encoded in the interpretation of sentence embeddings given by this parameterised MLP. In this work we propose a simple interaction layer based on predefined entailment and contradiction scores applied directly to the sentence embeddings. This parameter-free interaction model achieves results on natural language inference competitive with MLP-based models, demonstrating that the trained sentence embeddings directly represent the information needed for textual entailment, and the inductive bias of this model leads to better generalisation to other related datasets.
[ "sentence embeddings", "textual entailment", "natural language inference", "interpretability" ]
Reject
https://openreview.net/pdf?id=BkxackSKvH
https://openreview.net/forum?id=BkxackSKvH
ICLR.cc/2020/Conference
2020
{ "note_id": [ "SmKtJ-nJHH", "s-h3mZabw", "SkgU_Tl9sS", "SJljPhlcor", "Skxf25e9oS", "Bkgq7QUptr", "HJgW7iEhYr", "ryx2DriFtr" ], "note_type": [ "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1578311863555, 1576798735270, 1573682541788, 1573682274633, 1573681834104, 1571803938205, 1571732248773, 1571562851760 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Paper1895/Authors" ], [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1895/Authors" ], [ "ICLR.cc/2020/Conference/Paper1895/Authors" ], [ "ICLR.cc/2020/Conference/Paper1895/Authors" ], [ "ICLR.cc/2020/Conference/Paper1895/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper1895/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper1895/AnonReviewer3" ] ], "structured_content_str": [ "{\"title\": \"Meta-review of reviewing for Int. Conf. on Learning Representations\", \"comment\": \"It is worrying for the field when reviewers for ICLR can't see any value in a novel and effective form of representation learning, and only consider engineering improvements. This is a short-sited view of how science makes progress.\\n\\n - James Henderson\"}", "{\"decision\": \"Reject\", \"comment\": \"This paper proposes a method for learning sentence embeddings such that entailment and contradiction relationships between sentence pairs can be inferred by a simple parameter-free operation on the vectors for the two sentences.\\n\\nReviewers found the method and the results interesting, but in private discussion, couldn't reach a consensus on what (if any) substantial valuable contributions the paper had proven. 
The performance of the method isn't compellingly strong in absolute or relative terms, yielding doubts about the value of the method for entailment applications, and the reviewers didn't see a strong enough motivation for the line of work to justify publishing it as a tentative or exploratory effort at ICLR.\", \"title\": \"Paper Decision\"}", "{\"title\": \"The fundamental nature of textual entailment\", \"comment\": \"Thank you for your helpful comments and suggestions.\\n\\nWe thank the reviewer for making the connection to fuzzy logic. We are looking at this connection, in particular to understand the role of unknown values in fuzzy logic. We will include a discussion in a later version of the paper.\\n\\nRegarding concern 1, that the scores are ad hoc to the inference task, and \\\"Entailment, contradiction, and neutral scores are interesting, but hardly generalize to other sentence matching tasks (e.g., various IR applications)\\\":\\nWe agree, in that the scores are designed to be specific to NLI, but we disagree that NLI is just another semantic task. As we discuss in Section 1.1, entailment, and thus NLI, is fundamental to models of natural language semantics. There is now a mini-industry of converting lots of different semantic tasks into NLI tasks (e.g. (Poliak et al., 2018a)). Saying that the entailment score is ad hoc because it only measure information inclusion is like saying that the dot product is ad hoc because it only measures similarity. More specifically for IR, it is easy to imagine a model of IR which says that a document is relevant if it entails the query (i.e. the information in the query is included in the information in the document).\\n\\nRegarding concern 2, that the importance of NLI is over-estimated, that NLI datasets are degenerate, and that NLI models do not transfer well:\\nWe share the view that existing NLI datasets are only partially representative of the fundamental problem of textual entailment. 
That is why we focus in this paper on learning representations with a clear interpretation in terms of the fundamental task, rather than just maximising performance on the existing datasets, and why we evaluate the resulting inductive bias on transfer performance to datasets which are \\\"degenerate\\\" in different ways (i.e. have different biases). In addition, our new transfer results with SentEval (discussed in our reply to Reviewer 1 and now reported in Section 4.4.2) demonstrate improved transfer performance on several different tasks, including semantic similarity (STS).\"}", "{\"title\": \"New results with SentEval and inductive bias against learning annotation bias\", \"comment\": \"Thanks for your helpful comments and suggestions.\\n\\nRegarding the comments \\\"I do have some concerns about their claims that this helps in transfer learning to other NLI tasks\\\" and \\\"I don't see why their model would perform worse since both theirs and the baseline would benefit from having these biases\\\":\\nOur claim in the transfer experiments is that our model has an inductive bias which encourages it to learn entailment and contradiction and discourages it from learning the arbitrary functions needed to capture annotation biases. The MLP, in contrast, can learn anything. Thus, we expect the baseline to perform better when knowing the annotation biases of the training set is useful in the test set (SNLI), but we expect our model to perform better when it is more useful to know the true underlying NLI task which all these datasets have in common. This is the nature of inductive biases. Clearly we need to improve our presentation of this contribution.\\n\\nRegarding the comment \\\"They could evaluate on sentence embedding and probing tasks (like SentEval)\\\":\\nWe have now done this evaluation with SentEval, and have added these results in a new version of the paper in Section 4.4.2. 
There are three types of experiments in SentEval, probing tasks, supervised transfer tasks and unsupervised transfer tasks. Results for the probing tasks show that our embeddings are worse for recovering surface and syntactic characteristics, but are better for the semantic probing tasks, as desired. For all the unsupervised transfer tasks and six out of seven of the supervised transfer tasks, our entailment-based sentence embeddings perform better than the baseline. We do not include the sentence-pair tasks in these experiments because the SentEval implementation uses heuristic matching features for these experiments and thus is incompatible with our model. \\n\\nRegarding the comment \\\"it's nice to see that the MLP isn't doing a lot of heavy lifting\\\":\\nWe would simply like to clarify that the MLP classifier in the baseline is doing a lot of the work, but not in our model (where we do not have one).\\n\\nWe hope that with these additional explanations and evaluations the reviewer will find our transfer experiments and analysis more convincing.\"}", "{\"title\": \"Clarifications on the main contributions and the numbers of parameters for various baselines\", \"comment\": \"Thank you for your helpful comments and suggestions.\\n\\nRegarding the comments \\\"the question becomes, how many fewer parameters\\\" and \\\"The authors should have tried to an ablation type of approach in equation 1\\\":\\nIn Table 1 we report how many parameters are included in the interaction model and classifier (\\\"#mlp\\\"), and results for concatenation (\\\"p,h\\\") instead of equation 1 (\\\"HM\\\"). For both HM and concatenation, our interaction model and classifier has about five orders of magnitude fewer parameters. And our model has much better accuracy than concatenation. We don't know if there is a different subset of HM features which would perform as well as our model, but we know from previous work that this set of HM features performs better than such alternatives. 
And it is clear that the number of parameters in any such classifier will still be over 4 orders of magnitude larger. In contrast, the results for the ablation of our model given in Table 2 show similar levels of accuracy for 18 (E,C,N,S), 12 (E,C,N) and 9 (E,C) parameters. It is clear from these results that for any curve of accuracy versus parameters our model will be far superior.\n\nRegarding the comment \"It is hard to narrow down on the exact contributions of this paper\":\nAs we tried to make clear in the last paragraph of Section 1, we believe that our main contributions are inducing sentence representations that are interpretable in terms of entailment (information inclusion) and contradiction, and providing an inductive bias which improves learning of the true NLI task at the expense of learning the annotation artefacts of individual NLI corpora. The first claim is supported by the ablation study and classifier weight analysis in section 3.4. The second claim is supported by the transfer performance reported in Section 4.4, which has now been extended with the results in Section 4.4.2. We will rewrite this summary.\"}", "{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper proposes an interesting approach towards learning NLI via parameter free operations over pairs of sentence embeddings. The authors propose entailment and contradiction operators that learn entailment and contradiction scores while training the parameters of the sentence encoders.\n\nThis is an interesting approach; however, the experiments make me doubt the effectiveness of the proposed method. 
Admittedly, the authors do point out that at the cost of fewer training parameters, the proposed approach attains the same performance as NLI encoders with MLP based or attention based classifiers. However, the question becomes, how many fewer parameters are being learned to accept a performance that is in the same ball park but that does not exceed the SOTA.\n\nThe authors should have tried an ablation type of approach in equation 1, to check if concatenation alone, element wise dot product alone or absolute difference alone or a combination of any two would work better with the scoring function.\n\nThis paper, while taking a step in the right direction, seems a little premature for publication. That being said, the reported results may be of some value after all. It is hard to narrow down on the exact contributions of this paper.\"}", "{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"***Update***\nI'd like to thank the authors for responding to my questions and for the additional experiments. I think the new sentence embedding experiments make the paper quite a bit stronger - it would be interesting to scale them up to using SNLI + MNLI to see how much further they can go (right now they are still below Sentence-BERT which also was trained on SNLI in addition to MNLI). I think this paper is an interesting idea, and my only other main concern is the transfer results to other NLI datasets. I think it would be a good idea to confirm that the difference between the two approaches is due to biases, perhaps through an error analysis. 
I am borderline on this paper, but I feel enough improvement has been made to raise my rating.\n\n\nThis paper proposes a new way to train sentence embedding models using NLI data such that very few parameters are used for classification. This is in contrast to prior work where an MLP is used. They define an interaction layer where operations are applied to the sentence embeddings to produce just 5 scores, which are then fed into a softmax layer for the final prediction.\n\nThe reason for their approach is they hypothesize that the classification layers are encoding some of the information, presumably lessening the amount of information in the sentence embeddings and also preventing them from having a direct interpretation.\n\nTheir model does surprisingly well on the entailment datasets, only about 1-1.5 points or so worse than using an MLP, and so they indeed demonstrate that the sentence embeddings contain a lot of entailment information. However, I do have some concerns about their claims that this helps in transfer learning to other NLI tasks. Their main results show transfer performance comparing their approach and using an MLP, but it seems that overall, on all datasets, their approach transfers more poorly. Two of the datasets they do worse on they try to discount, one for having terrible, below-random performance (which is true) and the other for having the same biases as MNLI. However, if that was the case, I don't see why their model would perform worse since both theirs and the baseline would benefit from having these biases, but their model performs about 11 points less.\n\nSo therefore, I don't find the transfer experiments convincing, though it is interesting how different the models do on some of these tasks - model performance is surprisingly task dependent.\n\nWhat I propose is for the authors to investigate if their sentence embeddings are in fact noticeably different from the ones trained in the more conventional manner. 
They could evaluate on sentence embedding and probing tasks (like SentEval) and see how the two models compare. It would be interesting to see what encoded information differs between the models.\n\nIn summary, I think this is an interesting experiment and it's nice to see that the MLP isn't doing a lot of heavy lifting (which also might be slightly counter to their hypothesis about the MLP containing a lot of entailment information). However, I find the transfer experiments unconvincing and the paper is short on analysis about when their model does better on transfer and when having an MLP helps, or how the learned sentence embeddings of the two models differ.\"}", "{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper proposes a few heuristic scorers to model entailment and contradiction, based on encoded sentence embeddings.\n\nThese scores include an entailment score, a contradiction score, a neutral score, and two similarity scores. They are defined heuristically, e.g., entailment score = geometric avg of such thing: 1 - sigma(premise not satisfied) * sigma(hypothesis satisfied). This is similar to fuzzy logic (for example, https://en.wikipedia.org/wiki/Fuzzy_logic) and some citations are needed in this regard.\n\nDifferent from fuzzy logic, this paper learns whether an anonymous feature is true or false by NN encoders end-to-end. Thus, the model actually has enough power to extract those features suitable for fuzzy logic-like heuristic matching.\n\nThe experiments are well designed. I especially appreciate the comparison to random matching heuristics, which already exhibits non-trivial performance. This is very reasonable because the neural network underlying random matching heuristics is still learnable. 
However, the proposed matching heuristics achieve 7% improvement compared with random ones, showing the effectiveness of the approach. \\n\\nThe authors also have ablation test and experiments on out-of-domain datasets.\", \"i_have_two_concerns\": \"1. One limitation of this paper is that the heuristic matching scorers are pretty ad hoc to the inference task. The two similarity scores are not too novel, for example, sim_diff is the L1-distance between two vectors. Entailment, contradiction, and neutral scores are interesting, but hardly generalize to other sentence matching tasks (e.g., various IR applications). \\n\\n2. I have a feeling that the importance of NLI is over-estimated. While logical reasoning is important in AI, NLI datasets are somehow degenerated, and existing solutions are basically connecting neural edges. As mentioned in the paper, NLI models do not transfer well to out-of-domain NLI samples, not to mention non-NLI tasks. It would be interesting to see if the well-designed heuristic matching scores could ease the underlying model, so that it learns more generic sentence embeddings in general.\", \"minor\": \"\", \"in_references\": \"Williams, Nagnia, Bowman: duplicate entry\"}" ] }
HJxp9kBFDS
Invariance vs Robustness of Neural Networks
[ "Sandesh Kamath", "Amit Deshpande", "K V Subrahmanyam" ]
Neural networks achieve human-level accuracy on many standard datasets used in image classification. The next step is to achieve better generalization to natural (or non-adversarial) perturbations as well as known pixel-wise adversarial perturbations of inputs. Previous work has studied generalization to natural geometric transformations (e.g., rotations) as invariance, and generalization to adversarial perturbations as robustness. In this paper, we examine the interplay between invariance and robustness. We empirically study the following two cases: (a) change in adversarial robustness as we improve only the invariance using equivariant models and training augmentation, (b) change in invariance as we improve only the adversarial robustness using adversarial training. We observe that the rotation invariance of equivariant models (StdCNNs and GCNNs) improves by training augmentation with progressively larger rotations, but while doing so, their adversarial robustness does not improve, or worse, it can even drop significantly on datasets such as MNIST. As a plausible explanation for this phenomenon we observe that the average perturbation distance of the test points to the decision boundary decreases as the model learns larger and larger rotations. On the other hand, we take adversarially trained LeNet and ResNet models which have good \ell_\infty adversarial robustness on MNIST and CIFAR-10, and observe that adversarially training them with progressively larger norms keeps their rotation invariance essentially unchanged. In fact, the difference between test accuracy on unrotated test data and on randomly rotated test data up to \theta, for all \theta in [0, 180], remains essentially unchanged after adversarial training. As a plausible explanation for the observed phenomenon we show empirically that the principal components of adversarial perturbations and perturbations given by small rotations are nearly orthogonal.
[ "Invariance", "Adversarial", "Robustness" ]
Reject
https://openreview.net/pdf?id=HJxp9kBFDS
https://openreview.net/forum?id=HJxp9kBFDS
ICLR.cc/2020/Conference
2020
{ "note_id": [ "eKytb1tm1x", "rJxQy7DCKS", "r1lG531aKr", "rkx9Ros2FS" ], "note_type": [ "decision", "official_review", "official_review", "official_review" ], "note_created": [ 1576798735238, 1571873498761, 1571777673923, 1571761105546 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1894/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper1894/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper1894/AnonReviewer1" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"This paper examines the interplay between the related ideas of invariance and robustness in deep neural network models. Invariance is the notion that small perturbations to an input image (such as rotations or translations) should not change the classification of that image. Robustness is usually taken to be the idea that small perturbations to input images (e.g. noise, whether white or adversarial) should not significantly affect the model's performance. In the context of this paper, robustness is mostly considered in terms of adversarial perturbations that are imperceptible to humans and created to intentionally disrupt a model's accuracy. The results of this investigation suggests that these ideas are mostly unrelated: equivariant models (with architectures designed to encourage the learning of invariances) that are trained with data augmentation whereby input images are given random rotations do not seem to offer any additional adversarial robustness, and similarly using adversarial training to combat adversarial noise does not seem to confer any additional help for learning rotational invariance. 
(In some cases, these types of training on the one hand seem to make invariance to the other type of perturbations even worse.)\n\nUnfortunately, the reviewers do not believe the technical results are of sufficient interest to warrant publication at this time.\", \"title\": \"Paper Decision\"}", "{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper analyzes the behaviour of DNNs trained with rotated images and adversarial examples. Namely, the paper analyzes the relationship between training with rotated images and the robustness to adversarial perturbations, and vice-versa.\", \"the_paper_has_several_technical_issues_that_need_to_be_resolved_before_drawing_any_conclusions\": \"1) \u201cinvariance\u201d: this term is not used in the correct way. The fact that the network has the same accuracy before and after rotation does not mean that the output layer is invariant to rotation. Note that invariance in the output layer is a more stringent criterion as it requires that the images get labeled in the same way. The same accuracy can be achieved with completely different labelings of the images. What this paper is evaluating is robustness to rotation vs robustness to adversarial perturbations.\n\n2) It is unclear that Figure 3 is saying that adversarial training does not affect the rotation invariance because there is a general drop of accuracy. The analysis could have been done by evaluating how many images are labelled differently after the rotation, and all the plots will be aligned at 0 degrees.\n\n3) Finding out the robustness to adversarial perturbations is an NP-hard problem. So, for all tested cases in the paper, there could be a perturbation that damaged the model much more than the ones found, which could change the conclusions of the analysis. 
\\n\\n4) The networks compared in the two experiments are different networks. There could be a network dependency.\\n\\nAlso, I find the paper poorly written (e.g. in the abstract: \\u201cNeural networks achieve human-level accuracy on many standard datasets used in image classification.\\u201d -> what does \\u201chuman-level accuracy\\u201d mean?; \\u201cThe next step is to achieve better generalization to natural (or non-adversarial) perturbations\\u201d -> why is this the next step?).\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper shows empirically that rotational invariance and l-infinity robustness are orthogonal to each other in the training procedure. However, the reviewer has the following concerns:\\n\\nIt is already shown in (Engstrom et al., 2017) that models hardened against l-infinity-bounded perturbations are still vulnerable to even small, perceptually minor departures from this family, such as small rotations and translations. What is the message beyond that paper that the authors would like to convey?\\nThe experiments are only on MNIST and CIFAR-10. Training on a larger dataset like ImageNet would make the experiments more convincing.\\nGoing beyond the observation, what shall we do to improve against different perturbations simultaneously? 
Or is it an impossible task to improve on both?\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper examines the interplay between the related ideas of invariance and robustness in deep neural network models. Invariance is the notion that small perturbations to an input image (such as rotations or translations) should not change the classification of that image. Robustness is usually taken to be the idea that small perturbations to input images (e.g. noise, whether white or adversarial) should not significantly affect the model's performance. In the context of this paper, robustness is mostly considered in terms of adversarial perturbations that are imperceptible to humans and created to intentionally disrupt a model's accuracy. The results of this investigation suggest that these ideas are mostly unrelated: equivariant models (with architectures designed to encourage the learning of invariances) that are trained with data augmentation whereby input images are given random rotations do not seem to offer any additional adversarial robustness, and similarly using adversarial training to combat adversarial noise does not seem to confer any additional help for learning rotational invariance. (In some cases, these types of training on the one hand seem to make invariance to the other type of perturbations even worse.)\\n\\nThis paper is mostly clear and reasonably written. However, I do not think that the results of this investigation are significant enough to warrant publication at ICLR. In particular, I'm not sure that I really understand the motivation of this research question. 
I suppose a full notion of robustness would include invariance to perturbations of all types -- whether adversarial or otherwise -- and one might hope that techniques for encouraging such resilience mutually reinforce each other. However, since they are human perceptible, the perturbations associated with invariance are exactly the features that are not associated with adversarial noise, so I don't see why they should be related at all. \\n\\nFrom a different perspective, invariance is a property of the true data distribution -- a rotated version of an image of a cat is another valid sample from the underlying distribution -- the invariance property is a type of constraint tying together different elements of the data generating distribution. On the other hand, adversarially perturbed images are often thought to be \\\"off the data manifold\\\" -- i.e. not valid samples from the true underlying distribution. Given this perspective, I am confused about why I should expect to see any interplay between these two ideas. The fact that the authors do not find any interplay is reasonable, but I remain confused about why they were investigating this question in the first place. \\n\\nUnfortunately, this means that I don't think this work meets the significance criterion for being published at ICLR.\"}" ] }
r1enqkBtwr
Asymptotic learning curves of kernel methods: empirical data v.s. Teacher-Student paradigm
[ "Stefano Spigler", "Mario Geiger", "Matthieu Wyart" ]
How many training data are needed to learn a supervised task? It is often observed that the generalization error decreases as $n^{-\beta}$ where $n$ is the number of training examples and $\beta$ an exponent that depends on both data and algorithm. In this work we measure $\beta$ when applying kernel methods to real datasets. For MNIST we find $\beta\approx 0.4$ and for CIFAR10 $\beta\approx 0.1$. Remarkably, $\beta$ is the same for regression and classification tasks, and for Gaussian or Laplace kernels. To rationalize the existence of non-trivial exponents that can be independent of the specific kernel used, we introduce the Teacher-Student framework for kernels. In this scheme, a Teacher generates data according to a Gaussian random field, and a Student learns them via kernel regression. With a simplifying assumption --- namely that the data are sampled from a regular lattice --- we derive analytically $\beta$ for translation invariant kernels, using previous results from the kriging literature. Provided that the Student is not too sensitive to high frequencies, $\beta$ depends only on the training data and their dimension. We confirm numerically that these predictions hold when the training points are sampled at random on a hypersphere. Overall, our results quantify how smooth Gaussian data should be to avoid the curse of dimensionality, and indicate that for kernel learning the relevant dimension of the data should be defined in terms of how the distance between nearest data points depends on $n$. With this definition one obtains reasonable effective smoothness estimates for MNIST and CIFAR10.
[ "data", "kernel methods", "curves", "empirical data", "mnist", "student", "asymptotic", "paradigm asymptotic" ]
Reject
https://openreview.net/pdf?id=r1enqkBtwr
https://openreview.net/forum?id=r1enqkBtwr
ICLR.cc/2020/Conference
2020
{ "note_id": [ "kcLvb6eiDN", "Skeke2rhjB", "rJl0KNrniH", "SygHlIsjiS", "BJlIOv6LiB", "rJgzzP6UjS", "B1lvLLaLiB", "HklSLYXrcr", "HylkirUAYS", "Sye_AQX0FH" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798735207, 1573833703491, 1573831814135, 1573791213251, 1573472109613, 1573472010182, 1573471822781, 1572317517372, 1571870102615, 1571857360168 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1893/Authors" ], [ "ICLR.cc/2020/Conference/Paper1893/Authors" ], [ "ICLR.cc/2020/Conference/Paper1893/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper1893/Authors" ], [ "ICLR.cc/2020/Conference/Paper1893/Authors" ], [ "ICLR.cc/2020/Conference/Paper1893/Authors" ], [ "ICLR.cc/2020/Conference/Paper1893/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper1893/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper1893/AnonReviewer2" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"The paper studies, theoretically and empirically, the problem when generalization error decreases as $n^{-\\\\beta}$ where $\\\\beta$ is not $\\\\frac{1}{2}$. It analyses a Teacher-Student problem where the Teacher generates data from a Gaussian random field. The paper provides a theorem that derives $\\\\beta$ for Gaussian and Laplace kernels, and show empirical evidence supporting the theory using MNIST and CIFAR.\\n\\nThe reviews contained two low scores, both of which were not confident. A more confident reviewer provided a weak accept score, and interacted multiple times with the authors during the discussion period (which is one of the nice things about the ICLR review process). 
However, this reviewer also noted that ICLR may not be the best venue for this work.\\n\\nOverall, while this paper shows promise, the negative review scores show that the topic may not be the best fit to the ICLR audience.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Data added to pdf\", \"comment\": \"We have added the new data in Appendix E in the pdf.\"}", "{\"title\": \"Further data\", \"comment\": \"Thank you. Following your suggestion we ran some simulations using a Mat\\u00e9rn kernel of varying parameter $\\\\nu$ as Teacher and a Laplace kernel as student, in 1d.\\n\\nAs found in [1] the exponent $\\\\alpha$ for a Mat\\u00e9rn kernel with parameter $\\\\nu$ is $\\\\alpha=d+2\\\\nu$. Varying $\\\\nu$ we can vary the mean-squared smoothness of the data. Within our framework we can predict the exponent $\\\\beta$ to be $\\\\beta = \\\\frac1d \\\\min(\\\\alpha_T-d,2\\\\alpha_S) = \\\\min(2\\\\nu,4)$. In the simulations we tested several values $\\\\nu = 0.5, 1, 2, 4, 8$. Indeed, we observe the predicted exponents $\\\\beta=1$ for $\\\\nu=0.5$, $\\\\beta=2$ for $\\\\nu=1$ and $\\\\beta=4$ for the others. We will definitely add this data to our paper, since we agree with you that they strengthen our point.\\n\\nIs there any way to upload the figure here for the review process?\\n\\n\\n[1] Rasmussen, Carl Edward and Williams, Christopher K. I. (2006) Gaussian Processes for Machine Learning\"}", "{\"title\": \"Thanks\", \"comment\": \"Thanks for your comments; I in particular think making explicit that the theorem applies with effective dimension is important.\\n\\nIn terms of testing the theorem's predictions: I agree that you certainly do not need to study some enormous band of kernels. But you've only evaluated two smoothness settings (Laplace and Gaussian); it would bring more confidence to consider more. Mat\\u00e9rn kernels in particular should allow for easily setting the smoothness parameter to whatever you want. 
I don't know if I would call this \\\"necessary,\\\" but I think it would help illustrate the applicability of your theorem.\"}", "{\"title\": \"Answer to R2\", \"comment\": \"We thank R2 for pointing to the literature. However, reading (in the short time we had) the book by Wendland, we could only find cases where the target function is assumed to be in the RKHS of the kernel used to make the inference. This is definitely *not* what we do in our paper: our assumption is much weaker (a Gaussian process is never in the RKHS of its covariance kernel), and leads to training curves with new exponents. If R2 has references that correspond to what we actually do, we would be interested to know.\\n\\nAs stated in (6) and (28), we are computing the test error, defined as the expected mean-squared error committed on a new, previously unobserved point $\\\\mathbf{x}$. Of course, in the presence of noise one could decompose the generalization error over different contributions. But again, the treatment of noise does not exist in the framework we introduce here, which is not based on RKHS.\", \"concerning_the_last_comment\": \"This is wrong. The Teacher-Student setting considered in Theorem 1 precisely does *not* assume that the instance $Z$ of the teacher process lies in the RKHS of the student kernel, as R1 has also emphasized. A brief discussion of what would happen in that case is included in the Conclusion (Sec. 6), and it leads to the conclusion that in such a case $\\\\alpha_T$ would have to be fairly large and $Z$ should be very smooth (in a mean-squared sense). All the previous comments of the referee appear to be based on this misconception, apparently leading to the weak mark he gave.\"}", "{\"title\": \"Answer to R1\", \"comment\": \"We agree with R1: our point is that the analogy between deep learning and kernels motivates a better understanding of kernels in general. We will clarify this sentence.\\n\\nR1 is correct, and it is an interesting statement. 
We will add a few sentences introducing the notion of kernel dominance and stating this result. \\n\\nStudying how a regularizing term would affect the learning curve is an interesting empirical question. Yet we do not think it is opportune to add such studies here: they do not connect to our theoretical framework that does not include regularization. Our manuscript would then be less clear.\\n\\nYes, the proof is identical in the case where the points lie on a regular lattice of lower dimension $d_\\\\mathrm{eff}$ than the embedding space. To see that, it is sufficient to define the kernels restricted to this lower dimension subspace. The restricted kernels have the same coefficient $\\\\alpha_S$ and $\\\\alpha_T$; and the theorem goes through with $d$ replaced by $d_\\\\mathrm{eff}$. We will indicate that point. \\n\\nThanks! Concerning other kernels: note that our goal is not to perform a test of our theorem on all translation invariant kernels. Instead, it is to test all the qualitatively distinct predictions our theorem makes. To do that we change the spatial dimension as well as the smoothness of the kernel (Laplace or Gaussian) both for the teacher and the student kernels. That way, we explore all the different cases that our theorem predicts, and we believe the empirical support for our prediction is strong. However, should R1 still deem necessary that we provide some further numerical results, we will.\"}", "{\"title\": \"Answer to R3\", \"comment\": \"Our manuscript does not provide a method (as noted by the other reviewers), so there is no meaning in comparing its efficacy to anything else. 
It is a fundamental work, proposing a theoretical framework to explain quantitative observations on the learning curves of kernels.\"}", "{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"In order to rationalize the existence of non-trivial exponents that can be independent of the specific kernel used, this paper introduces the Teacher-Student framework for kernels. In this scheme, a Teacher generates data according to a Gaussian random field, and a Student learns them via kernel regression. The results quantify how smooth Gaussian data should be to avoid the curse of dimensionality, and indicate that for kernel learning the relevant dimension of the data should be defined in terms of how the distance between nearest data points depends on the number of samples.\\nThe paper is well written; the major issue of this paper is the lack of comparison with other previous methods. Therefore, the efficacy of the proposed model cannot be well demonstrated.\"}", "{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper studies, empirically and theoretically, the learning rates of (shift-invariant) kernel learners in a misspecified setting. In the well-specified setting, the rate of kernel learners is at least $n^{-1/2}$, and in a misspecified setting assuming only Lipschitz targets, the rate is $n^{-1/d}$. 
Neither seems to match the experimental rate on MNIST and CIFAR-10; this paper proposes a theoretical model that can more-or-less match the experimental rate with essentially-reasonable assumptions.\", \"my_main_complaint_is_on_the_basic_setting_of_the_work\": \"in your motivation, you say \\\"it is nowadays part of the lore that there exist kernels whose performance is nearly comparable to deep networks.\\\" The main such kernel, though, is the (convolutional) neural tangent kernel of Arora et al. (2019), which unlike the kernels you study here is not shift-invariant, and your theorems do not at all apply to this kernel. This is fine, but should probably be clearer in the description.\", \"a_related_comment_on_your_main_theorem\": \"your target function evaluated at every conceivable point (not just on a grid) is a sample from a Gaussian process. Samples from GPs with mean zero and covariance kernel $K_T$ almost surely are not in the RKHS $\\\\mathcal H_T$, but they *are* almost surely in the RKHS of any kernel $K_R$ which nuclearly dominates $K_T$ (see Lukic and Beder, \\\"Stochastic Processes with Sample Paths in Reproducing Kernel Hilbert Spaces\\\", Trans. AMS 2001). If such a kernel exists, using it as the \\\"student\\\" kernel should give us a rate of at least $n^{-1/2}$ with standard results (with some slight details still to be worked out, but should be true). Thus, it seems that your theorem implies that for $\\\\alpha_T < \\\\frac32 d$, no such translation-invariant kernel $R$ exists. This might be already easy to see from a Fourier definition of nuclear dominance, I'm not sure, but if not it is something that seems of somewhat independent interest.\", \"it_is_also_notable_that_both_your_practical_results_and_your_theorem_are_for_algorithms_essentially_without_any_regularization_other_than_the_choice_of_kernel\": \"the regression setting is exact interpolation, and your soft-margin uses $C = 10^4$ so is \\\"almost\\\" a hard-margin SVM. 
This is also fine \\u2013\\u00a0interpolation methods have seen a lot of interest of late, and certainly can perform well. But it's not the typical setting, and it would be interesting to see if the curves of Figure 1 look different when using e.g. a cross-validated setting for the amount of regularization.\", \"another_complaint\": \"you argue that applying Theorem 1 with this particular notion of effective dimension seems to give good results, but at least as it's stated, Theorem 1 doesn't actually apply with effective dimension, only ambient dimension. Is it possible to prove Theorem 1 with an appropriate version of effective dimension? I didn't carefully check the proof, but from your outlined sketch it seems like it might be only a small change.\\n\\nEmpirically, your investigations are nice, but it would be good to consider some other shift-invariant kernels as well: inverse multiquadric, Mat\\u00e9rn, or spline RBF kernels would be prominent options.\", \"overall\": \"I think this is a worthwhile study with interesting results. The theoretical setting, though, is somewhat limited by its fundamental approach, and the experiments aren't as thorough as they could be. Also, honestly, I'm not sure ICLR is the best venue for it (if I had written this paper around this time, I probably would have submitted it to AISTATS; it's certainly not *off* topic for ICLR, but fairly distant from most work at it).\", \"some_typos\": [\"Under (2): \\\"man-square error.\\\"\", \"Under (25): \\\"where where.\\\"\"]}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper experimentally investigates how fast the generalization error decreases when some specific kernel functions are used in real datasets. 
This paper conducted numerical experiments on several datasets to investigate the decreasing rate of the generalization error, and the rate is determined for such datasets. This decreasing rate is theoretically analyzed by using the approximation theory of RKHS in the teacher-student setting. It is shown that the rate is determined by the smoothness and effective dimensionality of the input. Then, the smoothness of the teacher function is also derived through this analysis.\\n\\nOverall, the paper is well written. I could easily follow the line of argument. The pros and cons of the paper are summarized as follows.\", \"pros\": \"The numerical experiments conducted in this paper are thorough, and they show interesting observations on the real datasets. This paper gives practical information on the theoretical analysis as an empirical study.\", \"cons\": \"- The approximation theory shown in this paper (Theorem 1) is closely related to well-known results on kernel interpolation. However, this paper misses several related works in the literature. The result should be properly placed in the literature. See, for example, [R1].\\n\\n[R1] H. Wendland. Scattered Data Approximation. Cambridge University Press, Cambridge, UK, 2005.\\n\\n- It is mentioned that this paper investigates the \\\"generalization error.\\\" However, what is actually done is more like \\\"approximation error\\\" analysis (about linear interpolation in RKHS). In reality, there is observation noise and thus we typically consider the generalization error. But, the teacher-student setting does not assume the existence of noise. In the presence of noise, generalization error analysis seems more appropriate, as performed in [R2].\\n\\n[R2] I. Steinwart and A. Christmann. Support Vector Machines. Springer, 2008.\", \"minor_comment\": \"- In the introduction, it is mentioned that the assumption that the target function is included in RKHS is strong. 
However, the teacher-student setting considered in Theorem 1 makes this assumption. The introduction requires some modification to make the message consistent.\\n\\n---Update---\\nThank you for your reply.\\nI understand the RKHS for the teacher and that for the student are different. But, in the introduction, you stated \\\"Yet, RKHS is a very strong assumption which requires the smoothness of the target function to increase with d (Bach, 2017) (see more on this point below), which may not be realistic in large dimensions.\\\", which sounds as if the assumption that the target function is included in \\\"some\\\" RKHS corresponding to a smooth kernel is a strong assumption. At least, this sentence is not saying anything about the difference between teacher and student, but is just saying that assuming smoothness on the target is unrealistic. For me, this sounds inconsistent with your analysis. (This is just a minor concern. I wanted to clarify my understanding of your problem setting.) \\n\\nI think the setting where the teacher is not included in the student RKHS is also analyzed, for example, in the following papers (there are also several related papers):\\nF.J. Narcowich, J.D. Ward, and H. Wendland. Sobolev Error Estimates and a Bernstein Inequality for Scattered Data Interpolation via Radial Basis Functions. Constr. Approx., 24: 175\\u2013186, 2006.\\nSCHEUERER, M., SCHABACK, R., & SCHLATHER, M. (2013). Interpolation of spatial data \\u2013 A stochastic or a deterministic problem? European Journal of Applied Mathematics, 24(4), 601-629. \\n\\nTherefore, I still feel that the paper requires more exposition about the relation to the literature.\"}" ] }
rklhqkHFDB
LARGE SCALE REPRESENTATION LEARNING FROM TRIPLET COMPARISONS
[ "Siavash Haghiri", "Leena Chennuru Vankadara", "Ulrike von Luxburg" ]
In this paper, we discuss the fundamental problem of representation learning from a new perspective. It has been observed in many supervised/unsupervised DNNs that the final layer of the network often provides an informative representation for many tasks, even though the network has been trained to perform a particular task. The common ingredient in all previous studies is a low-level feature representation for items, for example, RGB values of images in the image context. In the present work, we assume that no meaningful representation of the items is given. Instead, we are provided with the answers to some triplet comparisons of the following form: Is item A more similar to item B or item C? We provide a fast algorithm based on DNNs that constructs a Euclidean representation for the items, using solely the answers to the above-mentioned triplet comparisons. This problem has been studied in a sub-community of machine learning by the name "Ordinal Embedding". Previous approaches to the problem are painfully slow and cannot scale to larger datasets. We demonstrate that our proposed approach is significantly faster than available methods, and can scale to real-world large datasets. Thereby, we also draw attention to the less explored idea of using neural networks to directly, approximately solve non-convex, NP-hard optimization problems that arise naturally in unsupervised learning problems.
[ "representation learning", "triplet comparison", "contrastive learning", "ordinal embedding" ]
Reject
https://openreview.net/pdf?id=rklhqkHFDB
https://openreview.net/forum?id=rklhqkHFDB
ICLR.cc/2020/Conference
2020
{ "note_id": [ "yQCUg_xaq2", "HJe1Rvq2oH", "HkxS8w92oS", "HkgcGv52iH", "H1gENCyjjS", "ryeO3Av5sB", "ryln6geqiS", "rklIgxx5oH", "SJgYkJgqsS", "SJgNYnkcsS", "rkxBU7f0qH", "rkgfiCmT5r", "HygLj-cG9B", "HJxl4O3AYB" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review", "official_review" ], "note_created": [ 1576798735176, 1573853126972, 1573853004858, 1573852945757, 1573744172121, 1573711535582, 1573679299576, 1573679085866, 1573678816694, 1573678203539, 1572901709177, 1572843161812, 1572147614059, 1571895336109 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1892/Authors" ], [ "ICLR.cc/2020/Conference/Paper1892/Authors" ], [ "ICLR.cc/2020/Conference/Paper1892/Authors" ], [ "ICLR.cc/2020/Conference/Paper1892/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper1892/Area_Chair1" ], [ "ICLR.cc/2020/Conference/Paper1892/Authors" ], [ "ICLR.cc/2020/Conference/Paper1892/Authors" ], [ "ICLR.cc/2020/Conference/Paper1892/Authors" ], [ "ICLR.cc/2020/Conference/Paper1892/Authors" ], [ "ICLR.cc/2020/Conference/Paper1892/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper1892/AnonReviewer4" ], [ "ICLR.cc/2020/Conference/Paper1892/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper1892/AnonReviewer1" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"The authors demonstrate how neural networks can be used to learn vectorial representations of a set of items given only triplet comparisons among those items. 
The reviewers had some concerns regarding the scale of the experiments and strength of the conclusions: empirically, it seemed like there should be more truly large-scale experiments considering that this is a selling point; there should have been more analysis and/or discussion of why/how the neural networks help; and the claim that deep networks are approximately solving an NP-hard problem seemed unimportant as they are routinely used for this purpose in ML problems. With a combination of improved experiments and revised discussion/analysis, I believe a revised version of this paper could make a good submission to a future conference.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Author response (1/3)\", \"comment\": \"We thank the reviewer for their constructive feedback! We address all the reviewer\\u2019s comments below and made necessary revisions to the paper.\\n1) What are the nice properties of using the specific loss function and do we lose something by relaxation?\", \"short_answer\": \"Using the hinge loss leads not to a relaxation but an equivalent optimization problem to ordinal embedding. We describe this in detail below. 
This is established in the ordinal embedding literature and, for completeness' sake, we also added the below explanation in subsection A.8 in the appendix.\\nThe problem of ordinal embedding - finding an embedding $X = \\left \\{ x_1, x_2, \\ldots, x_n \\right \\} \\subset \\mathbb{R}^d$ that satisfies a set of given triplets, $\\mathcal{T}$ - can be phrased as a quadratic feasibility problem (1) as shown below.\\n\\\\begin{equation}\\n\\t\\\\textrm{find } X \\\\textrm{ subject to } X^T P_{i,j,k} X > 0 \\\\textrm{ } \\\\forall (i,j,k) \\\\in \\\\mathcal{T}.\\n\\\\end{equation}\\nEach $P_{i,j,k}$ corresponds to a triplet constraint that satisfies\\n$$\\vert \\vert x_i - x_j \\vert \\vert^2 > \\vert \\vert x_i - x_k \\vert \\vert^2 \\iff X^T P_{i,j,k} X > 0 $$\\nEvery feasible solution to the problem above is a valid solution to the problem of ordinal embedding. Note that here we rephrased the same problem as defined in Equation (2) of the main paper.\\nAn equivalent way to solve the above problem, i.e., find a feasible solution that satisfies the constraints, is by finding the global optima of the constrained optimization problem (1) given by the optimization problem as shown below.\\n\\\\begin{equation}\\n\\t\\\\min \\\\limits_{X \\\\in \\\\mathbb{R}^{nd}} \\\\sum \\\\limits_{(i,j,k) \\\\in \\\\mathcal{T}} \\\\max \\\\left \\\\{ 0, 1 - X^T P_{i,j,k} X \\\\right \\\\}\\n\\\\end{equation}\\nMeaning, every feasible solution to the first problem can be scaled to attain the global optimum of the second one, and every global optimum of the second problem is a feasible solution of the first (1). Moreover, in the first problem, any positive scaling of a feasible point $X$ is a solution as well. 
Whereas in the second one, this effect is eliminated.\\nTo summarize, the hinge loss does satisfy some nice properties, in the sense that using the hinge loss to solve the ordinal embedding problem is not a relaxation but rather an equivalent formulation.\\n\\n(1) Bower, Amanda, Lalit Jain, and Laura Balzano. \\\"The Landscape of Non-Convex Quadratic Feasibility.\\\" 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2018.\"}", "{\"title\": \"Author response (2/3)\", \"comment\": \"2) Why are neural networks able to approximately solve the problem? Can you develop some hypotheses based on these intuitions and some experiments to prove/disprove the hypothesis?\\nA popular hypothesis among neural network theorists and practitioners is that training a neural network typically leads to a non-convex optimization problem where the quality of local optima improves toward the global optima with increasing width and depth. There is a lot of theoretical and empirical literature supporting this hypothesis.\\nFor instance, 1* showed that in deep linear networks with the square loss function, the objective function is non-convex in the network parameters and yet every local minimum is a global minimum. 2* showed that in fully connected neural networks with ReLU activation functions, the quality of all differentiable local minima improves with increasing width and depth. In practice, this implies that first-order, gradient-based approaches such as SGD can be used to efficiently train the networks.\\nMoreover, 3* also showed that for any fixed input dimension $d$, ReLU networks of width $d+1$ and arbitrary depth can approximate any real-valued function on $d$ input variables.\\nOur primary conjecture is that this would hold true for our architecture as well with respect to our chosen loss function. 
We provide experimental evidence to support our hypothesis (already in the paper) demonstrating that with increasing width of the hidden layers, the objective value (triplet loss) achieved by the network decreases.\\n1* Kawaguchi, Kenji. \\\"Deep learning without poor local minima.\\\" Advances in neural information processing systems. 2016.\\n2* Kawaguchi, Kenji, Jiaoyang Huang, and Leslie Pack Kaelbling. \\\"Effect of depth and width on local minima in deep learning.\\\" Neural computation 31.7 (2019): 1462-1498.\\n3* Zhang, Chiyuan, et al. \\\"Understanding deep learning requires rethinking generalization.\\\" (2016).\"}", "{\"title\": \"Author response (3/3)\", \"comment\": \"3) Do the distributions satisfy some nice properties and this is why the problem of ordinal embedding is somehow easier, which enables neural networks to solve the problem?\\nIt is natural to ask if the data distributions satisfy nice properties which allow the neural network to solve an easier optimization problem. Our short answer to this question is that the proposed method solves the ordinal embedding problem even for datasets that do not possess such nice properties. To provide evidence to support this claim, we refer to experiments conducted on 3 different datasets, which could be considered pathological, to demonstrate that our approach can achieve a low training error even in these cases. 1) Experiments in sections 4.4, A.2, A.3, A.4 are all performed on the uniform distributions which do not possess any pattern. Moreover, we chose 2 extra datasets to demonstrate that our approach can minimize the objective just as well even in these cases. We refer the reviewer to Figure 9 in Appendix A.6 for more details.\", \"we_would_like_to_add_some_further_clarification_differentiating_the_two_tasks\": \"1) Reconstruction of the original dataset and 2) Solving ordinal embedding.\\nThe two problems are not equivalent. 
The ordinal embedding solution tends to be unique --- up to an isometric transform --- when the number of points grows large. For datasets with smaller sample sizes, the solution is not unique. Intuitively, one can observe that slightly wiggling the positions of the points does not violate the triplet answers. \\nThe focus of our work is to solve the ordinal embedding problem. The problem is solely to find a feasible set of points (embedding). However, observe that ordinal embedding is typically an under-determined problem, i.e., the solution is not unique. A simple example to see this would be to consider three points in some Euclidean space and generate all possible triplets from these points. One can verify that several possible configurations of these points satisfy all the triplets.\"}", "{\"title\": \"Clarification\", \"comment\": \"Thanks for the response.\\n\\nYes, I agree the entire problem is not convex when we optimize the coordinates of all the points. This was my mistake, but you can introduce auxiliary variables to leverage the convexity, and do an iterative, EM type of update. My point is that the loss function has certain nice properties, and I am expecting the paper to have more development on those.\\n\\nI do not agree that the problem is NP-hard regardless of the data distribution. You can construct pathological distributions that make the problem easy to solve if you know the distribution a priori. Regardless, my point is that maybe there are certain properties/constraints that make the problem easier to solve, and maybe the experimental settings happen to satisfy these properties/constraints. That's why I mentioned hard examples. Approximation also plays a role here, because we probably do not need to solve the problem exactly. (At least that is the sentiment I get from the experiments.)\\n\\nIn terms of insights as to how neural networks solve them, other reviewers seem to have similar issues. 
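The point that several configurations can satisfy the same set of triplets, and that any solution remains valid under isometries, can be checked directly. The following small numpy sketch (our own illustration, not code from the paper) verifies that rotating, translating, and rescaling an embedding preserves every triplet answer.

```python
import numpy as np

def triplet_satisfied(X, i, j, k):
    """True if point i is closer to point j than to point k in embedding X."""
    return np.linalg.norm(X[i] - X[j]) < np.linalg.norm(X[i] - X[k])

def all_triplets(X):
    """Enumerate every ordered triplet answer implied by embedding X."""
    n = len(X)
    return {(i, j, k): triplet_satisfied(X, i, j, k)
            for i in range(n) for j in range(n) for k in range(n)
            if len({i, j, k}) == 3}

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 2))        # 5 points in the plane

# Apply a rotation, a translation, and a positive rescaling.
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
Y = 2.5 * X @ R.T + np.array([3.0, -1.0])

# Every triplet answer is preserved, so both embeddings are equally valid.
assert all_triplets(X) == all_triplets(Y)
```

Since all pairwise distances are multiplied by the same positive constant, every distance comparison (and hence every triplet) is unchanged, which is exactly the non-uniqueness discussed above.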
I have read the responses, and I agree that the message of the paper is to point out that neural networks are able to solve this particular NP-hard problem. However, I find it weak to have this single message as a paper.\\n\\nHere are some concrete points to improve the paper. I would raise the score if the paper includes them.\\n\\n* Why do the authors think the networks are able to solve the problem? Can you form hypotheses based on those intuitions?\\n* How does the use of contrastive loss play a role in the hypotheses? Is it necessary or is it sufficient? How much do we lose from the relaxation? (This is where the properties of the loss function come in.)\\n* Can you do experiments to confirm or disprove those hypotheses?\"}", "{\"title\": \"Reviewers, any comments on the author responses?\", \"comment\": \"Dear Reviewers, thanks for your thoughtful input on this submission! The authors have now responded to your comments. Please be sure to go through their replies and revisions. If you have additional feedback or questions, it would be great to get them this week while the authors still have the opportunity to respond/revise further. Thanks!\"}", "{\"title\": \"Author response\", \"comment\": \"We thank the reviewer for the feedback. In the following, we address the comments individually:\\n\\n- Comparison with conventional triplet methods using images and their corresponding RGB images\", \"we_did_not_consider_comparisons_with_conventional_triplet_approaches\": \"the message of our paper was not to demonstrate the utility of ordinal embedding approaches over conventional (representation-based) triplet approaches. It was rather to show that when input representations are NOT available, we provide a scalable approach to solve the ordinal embedding problem. 
We agree with the reviewer that such a comparison would be possible, but the experiments, we believe, would not reflect the message of the paper.\\n\\n- More synthetic experiments comparing the various ordinal embedding approaches\\n\\nThere is a large literature that compares existing ordinal embedding approaches, and in order to not overload the figures, we had decided to just compare against the most popular traditional algorithms. But we can definitely add more comparisons in the revision of the paper.\\n\\n- The \\u201cclaim that the use of neural networks with discrete inputs can approximately solve NP-hard optimization problems\\u201d\\n\\nWe would like to clarify that our claim was merely that we use neural networks to address ONE instance of an NP-hard optimization problem. We want to bring attention to the generic idea of using neural networks as optimization toolboxes to directly solve non-convex optimization objectives instead of merely for learning problems. \\nTo elaborate, consider optimization problems that arise in unsupervised learning - for instance, ordinal embedding objectives, clustering objectives or dimensionality reduction objectives. These optimization problems are typically not solved directly since they are non-convex, discrete, and NP-hard. Instead, we resort to convex relaxations, and many convex relaxations do not come with any guarantees. Consider, however, if we could use a non-convex optimization toolbox to directly tackle the original optimization problem - which is currently NOT the standard practice in ML. Then the value of the true objective already informs us of how close we are to the optimal solution of the optimization problem. So powerful non-convex solvers might offer a significant advantage over convex relaxations. Our paper simply shows ONE example for this. 
\\n\\n- Additional feedback: 1) scatter plots for the MTurk experiment with an increasing number of triplets 2) detailed analysis of heat-map distance matrix \\n\\nBoth suggestions will be added to the revision to enhance the analysis of the experiment. We will add the scatter plots of the training set (a subsample of the set), color-coded by the category, similar to scatter plots in Fig 3. Moreover, it is certainly possible to consider the pairwise distances heat-map of Figure 5. We plotted a detailed version of this plot in Figure 9, with full category labels. There are indeed meaningful patterns in block diagonals. For instance, we had the \\\"confectionery store\\\" category in the food concept, which is conceptually a bit far from food. Thus, we observe a clear rectangle with warmer colors. This is also the case for the \\\"goods wagon\\\" in the Vehicle concept.\"}", "{\"title\": \"Author response\", \"comment\": \"We thank the reviewer for the insightful comments. We address the questions in the following:\\n\\n- How many images did you have in the experiment?\\n\\nWe had 7500 images in total. We had 3 concept classes, and 2500 images for each concept. We will mention the total number in the main text.\\n\\n- The proposed network is not deep, but shallow\\n\\nWe agree that a clear distinction line between shallow and deep networks does not exist. So we will make a note on that issue.\\n\\n- More experiments on the number of layers \\n\\nWe had experimented with fewer layers. We realized that in this case the width of the network should be increased to compensate for the representation power of the network. As we already had an extensive set of experiments, we decided not to report that. 
As the proposed architecture already performs well to solve the ordinal embedding problem, we found it unnecessary to try deeper networks.\\n\\n- \\\"I don't see a clear conclusion of how to pick the width of hidden layers, maybe a better representation could be used.\\\"\\n\\nThere exist three parameters in this experiment, which makes it hard to come up with the most conclusive representation. We also generated line plots (multiple curves in one plot) and 3D mesh plots to show the dependency. In the end, we found the heat-map more informative. In the revision, we will add the other plots to support the claim.\\n\\n\\n- \\\"I don't see a discussion about the downsides of the method\\\"\\n\\nOne of the drawbacks is that our method needs GPUs, while the more traditional algorithms run on CPUs. This can be of disadvantage if non-machine learning experts want to use our method. However, this is the case for most recent ML methods based on neural nets.\\n\\nThe number of required triplets is theoretically lower bounded by nd log n, and this is also being confirmed by our experiments (our algorithm, as well as our competitors, break down when they get fewer triplets). Therefore, in a setting with passive triplet answers, and without extra information, it is impossible to overcome this problem. \\n\\n- \\\"in section 4.4 when comparing the proposed approach with another method why not use more complex datasets (like those used in section 4.3)\\\"\\n\\nIndependent of the dataset complexity, provided with enough triplet answers, all methods can yield less than 5% triplet error. However, the computation time is significantly lower for our proposed method. Due to the iterative nature of all algorithms, the computation time does not depend on the data distribution, but on the number of input points. Thus, a simple uniform dataset could serve to show our intention in this section. 
\\n\\n- \\\"in section 4.3, there is no guarantee that the intersection between the training set and the test set is empty.\\\"\\n\\nYes, in theory that is true, but in practice this is negligible: the total number of possible triplets is about 10^9. So the likelihood that two sets of size 1000 intersect is close to 0.\\n\\n- \\\"in section 4.3 how is the reconstruction built (Figure 3b)?\\\"\\n\\nFigure 3b is the exact output of the ordinal embedding in two dimensions. The colors are the initial labels of the input items. There are two or three labels assigned to demonstrate the quality of reconstruction. Note that the ordinal embedding output is unique only up to isometric transforms. In other words, every valid output is still valid with rotation, scaling and translation.\"}", "{\"title\": \"Author response\", \"comment\": \"Thanks for your feedback. We discuss each comment in the following:\\n\\n- The experiments are not large scale\\n\\nWe respectfully disagree with the reviewer's main comment that the experiments are not large scale. One needs to see the background of existing work: Existing ordinal embedding methods are known to be notoriously slow and embedding more than 10,000 points is not practical - as reflected in our experiments (see Figure 4). Our new approach manages to go one order of magnitude higher (100,000 points and about 4 million triplets), without resorting to heuristics such as subsampling or adding extra information such as invoking active oracles (as needed in landmark approaches). Sure, this is not the scale of 80 million tiny images; but one wouldn\\u2019t ask an author of an improved SAT-solving algorithm, say, to scale to 80 million instances. \\n\\nRepresentation learning, the topic of this conference, has many facets. 
Learning representations from \\u201cbig data\\u201d (as in 80 million images with RGB representations) is one side, but learning representations when little data is available (no explicit representation, just binary-valued triplet comparisons) is the other side. Both are valuable in different circumstances. \\n\\n- No substantive insight with respect to NP-hard problems\\n\\nWe would like to clarify that our claim was merely that we use neural networks to address ONE instance of an NP-hard optimization problem. We want to bring attention to the generic idea of using neural networks as optimization toolboxes to directly solve non-convex optimization objectives instead of merely for learning problems. \\nTo elaborate, consider optimization problems that arise in unsupervised learning - for instance, ordinal embedding objectives, clustering objectives or dimensionality reduction objectives. These optimization problems are typically not solved directly since they are non-convex, discrete, and NP-hard. Instead, we resort to convex relaxations, and many convex relaxations do not come with any guarantees. Consider, however, if we could use a non-convex optimization toolbox to directly tackle the original optimization problem - which is currently NOT the standard practice in ML. Then the value of the true objective already informs us of how close we are to the optimal solution of the optimization problem. So powerful non-convex solvers might offer a significant advantage over convex relaxations. Our paper simply shows ONE example for this. \\n\\n- It is not clear why the log n representation for items is chosen -- why not just map to embeddings directly?\\n\\nIt would not be possible to set the input dimension the same as the embedding dimension.\\nOur experiments demonstrate that we need input representations of size at least Omega(log n) to sufficiently reduce the triplet error. The size of the embedding dimension can be too low to achieve this. 
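To make the Omega(log n) input representation discussed here concrete, the following is a small illustrative sketch (our own, not the authors' code) of encoding item ids as distinct binary vectors of length ceil(log2 n); vectors of this kind would then serve as the inputs to the embedding network.

```python
import numpy as np

def binary_codes(n):
    """Encode item ids 0..n-1 as distinct {-1,+1} vectors of length ceil(log2 n)."""
    d = max(1, int(np.ceil(np.log2(n))))
    bits = (np.arange(n)[:, None] >> np.arange(d)) & 1   # little-endian bit layout
    return 2.0 * bits - 1.0                              # map {0,1} -> {-1,+1}

codes = binary_codes(1000)
assert codes.shape == (1000, 10)                   # ceil(log2(1000)) = 10 input units
assert len({tuple(row) for row in codes}) == 1000  # every item gets a unique code
```

The key property is that the input dimension grows only logarithmically in the number of items while still assigning each item a unique code, which is why a fixed, small embedding dimension could not serve double duty as the input representation.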
One could argue that instead of using a small network like ours, a heavily over-parameterized neural network could potentially accomplish the same with a smaller input representation. However, the computational complexity of the method is significantly affected by this and this is in conflict with the main goal of the paper: scaling ordinal embedding.\\n\\n- Methods, where items have no representation, are questionable\\n\\nItems having no representation is a caveat of the available data rather than of the method. The representationless framework of triplets is relevant to many applications (e.g. crowdsourcing), and the whole field of comparison-based learning works in this framework. \\n\\n- How to generalize to unseen items \\n\\nFirst, it is not standard practice to discuss the generalization to unseen instances in unsupervised machine learning problems, for example in the literature on clustering. But of course, if generalization exists, it is an advantage. \\nWe believe that in our case, generalization is realizable. One possible approach would be to reserve some extra bits in the binary representation of inputs, and then utilize them to represent new items. The network can be trained with extra batches of triplets which involve the new items. \\n\\n- The paper also misses relevant citations of similar questions from the field of (probabilistic) matrix factorization and relational learning.\\n\\nWe don\\u2019t really see a link to matrix factorization or relational learning. If the reviewer has some idea of such connections, we would be happy to learn of this.\"}", "{\"title\": \"Author response\", \"comment\": \"We would like to thank the reviewer for their feedback. We address each comment below individually with appropriate headings.\\n\\n- Summary\\n\\nWe would like to point out that the reviewer in the summary incorrectly described that our approach uses the \\\"triplet loss as a convex relaxation of the ordinal embedding problem\\\". 
Using the triplet loss as a proxy does not make the problem convex.\\n\\n- The relation between data distribution and hardness of ordinal embedding\\n\\nOrdinal embedding is NP-hard independent of the data distribution. The paper \\u201cLandscape of non-convex quadratic feasibility\\u201d (Bower et al. 2018) can shed more light on this. The equation (1) in this paper rephrases the ordinal embedding problem as a homogeneous quadratic feasibility problem. The constraint matrices of the problem (P_i in the paper), which correspond to the triplet inequalities, are all indefinite which makes the whole optimization NP-hard. \\n\\nMoreover, many of our experiments in this paper feature the uniform distribution, which does not satisfy any nice structural assumptions.\\n\\n- Using a convex solver\\n\\nAs we pointed out earlier, using the triplet loss does not make the optimization problem convex and hence using a convex solver would not be possible here.\\n\\n- \\u201cEquations (3) and (4): isn't this the same as using the hinge loss to bound the zero-one loss?\\u201d\\n\\nYes, that is true.\"}", "{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": [\"The paper presents a Neural Network based method for learning ordinal embeddings only from triplet comparisons.\", \"A nice, easy to read paper, with an original idea.\", \"Still, there are some issues the authors should address:\", \"for the experiment with Imagenet images, it is not very clear how many pictures are used. Is this number 2500?\", \"the authors state that they use \\\"the power of DNNs\\\" while they are experimenting with a neural network with only 4 layers. 
While there is no clear line between shallow and deep neural networks, I would argue that a 4 layer NN is rather shallow.\", \"the authors fix the number of layers of the used network based on \\\"our experience\\\". For the sake of completeness, more experiments in this area would be nice.\", \"for Figure 6, there is not a clear conclusion. While, it supports that \\\" that logarithmic growth of the layer width respect to n is enough to obtain desirable performance.\\\" I don't see a clear conclusion of how to pick the width of hidden layers, maybe a better representation could be used.\", \"I don't see a discussion about the downsides of the method (for example, the large number of triplet comparison examples needed for training; and possible methods to overcome this problem).\", \"in section 4.4 when comparing the proposed approach with another methods why not use more complex datasets (like those used in section 4.3)\", \"in section 4.3, there is no guarantee that the intersection between the training set and test set is empty.\", \"in section 4.3 how is the reconstruction built (Figure 3b)?\"], \"a_few_typos_found\": [\"In figure 3 (c) \\\"number |T of input\\\" should be \\\"number |T| of input\\\"\", \"In figure 5 (a) \\\"cencept\\\" should be \\\"concept\\\"\", \"In figure 8 \\\"Each column corresponds to ...\\\" should be \\\"Each row corresponds to ...\\\".\", \"In the last paragraph of A1 \\\"growth of the layer width respect\\\" should be \\\"growth of the layer width with respect\\\"\", \"In the second paragraph of A2 \\\"hypothesize the that relation\\\" should be \\\"hypothesize that the relation\\\".\", \"In section 4.3 last paragraph, first sentence: \\\"with the maximunm number\\\" should be \\\"with the maximum number\\\"\"]}", "{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in 
assessing the paper.\", \"title\": \"Official Blind Review #4\", \"review\": \"Summary:\\n\\nMany prior works have found that the features output by the final layer of neural networks can often be used as informative representations for many tasks despite being trained for one in particular. These feature representations, however, are learned transformations of low-level input representations, e.g. RGB values of an image. In this paper, they aim to learn useful feature representations without meaningful low-level input representations, e.g. just an instance ID. Instead, meaningful representations are learned through gathered triplet comparisons of these IDs, e.g. is instance A more similar to instance B or instance C? Similar existing techniques fall in the realm of learning ordinal embeddings, but this technique demonstrates speed-ups that allow it to scale to large real world datasets.\", \"the_two_primary_contributions_of_the_paper_are_given_as\": \"- a showcase of the power of neural networks as a tool to approximately solve NP-hard optimization problems with discrete inputs\\n- a scalable approach for the ordinal embedding problem\\n\\nAfter experimentation on synthetic data, they compare the effectiveness of their proposed method Ordinal Embedding Neural Network (OENN) against the baseline techniques of Local Ordinal Embedding (LOE) and t-distributed Stochastic Triplet Embedding (TSTE). The test error given by the systems is comparable, but there are clear speed benefits to the proposed method OENN as the other techniques could not be run for a dataset size of 20k, 50k, or 100k.\\n\\nThen, they gathered real-world data using MTurk applied to a subset of ImageNet and applied OENN to learning embeddings of different image instances using only the MTurk triplet information rather than the input RGB input features.\", \"decision\": \"Weak Reject\\n\\n1. 
Interesting technique to take advantage of neural networks to efficiently learn ordinal embeddings from a set of relationships without a low-level feature representation, but I believe the experiments could be improved. One of the main advantages of this approach is efficiency, which allows it to be used on large real-world datasets. The MTurk experiment gives a qualitative picture, but it could be improved with comparisons to pairwise distances learned through alternative means using the RGB image itself (given that images would permit such a comparison). By this I mean that you may be able to use relationships learned using conventional triplet methods which use input RGB features as ground truth, and test your learned relationships against those. However, since quantitative exploration of large real-world datasets may be challenging and expensive to collect, the synthetic experiments could have been more detailed. The message of the synthetic experiments would be stronger if more of them were available and if the comparison between LOE, TSTE, and OENN was made on more of them.\\n\\n2. I think that the claim that the use of neural networks with discrete inputs can approximately solve NP-hard optimization problems is an exciting one, which likely necessitates more experiments (or theoretical results), but as it stands I don't think it is a fundamentally different conclusion from the fact that this method provides a great scalable solution for the ordinal embedding problem. This claim can be made secondarily or as motivation for continued exploration along this direction, but I think listing them as two distinct contributions is unnecessary.\", \"additional_feedback\": \"Since quantitative real-world results are challenging to obtain, improved presentation of the qualitative results would be helpful as well. You may be able to show more plots which help display the quality of the embedding space varying with the number of triplets used. 
For example, an additional plot after Figure 5 (b) which shows a few scatter plots of points (color coded by class) for training with different numbers of collected triplets. Also, since it should be fairly easy to distinguish between cars and animals or cars and food, it may be more interesting to focus on the heat-maps from along the block diagonal of Figure 5 (a) and talk about what relationships may have been uncovered within the animal or food subsets.\", \"very_minor_details\": \"In Figure 5, a legend indicating the relationship between color intensity and distance would be helpful.\\n\\nIn Figure 6 there seem to be unnecessary discrepancies between the y-axis and colorbar of subplots (a) and (b), and keeping those more consistent would improve readability.\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper presents a way to learn a vectorial representation for items which are only described by triplet similarity expressions.\\n\\nThe paper not only claims 'large scale representation learning' but also claims to utilize the described idea of using neural networks to \\\"directly, approximately solve non-convex NP-hard optimization problems that arise naturally in unsupervised learning problems.\\\" Both claims are not really shown in the paper: (i) The experiments are not large scale and (ii) it is not clear how any substantive insight with respect to NP-hard problems can be gained here apart from the fact that it tackles an ML problem, and many ML problems seem to be computationally hard.\\n\\nAs such the paper is not convincing. On a more detailed level it is not clear why the log n representation for items is chosen -- why not just map to embeddings directly? 
The more interesting question of how to generalize to unseen items (how would that be possible given that items have no representation at all) is not discussed at all and seems not to be realizable, which makes the starting point of such methods (items have no representation) questionable.\\n\\nThe paper also misses relevant citations of similar questions from the field of (probabilistic) matrix factorization and relational learning.\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper proposes to use the triplet loss as a convex relaxation of the ordinal embedding problem. The loss is solved using feed-forward neural network with the input to the network being the ids of the items encoded in binary codes. The benefit of using a deep network is to exploit its optimization capability and the parallelism on GPUs. The experiments presented in the paper include a set of simulation experiments and a real-world task.\\n\\nI am giving a score of 3. This work is an interesting application of deep learning, but it gives little insight as to why deep networks are able to solve the problem and how to solve ordinal embedding itself.\\n\\nTo elaborate, the problem is known to be NP-hard in the worst case, while the data sets used in the paper seem to have certain nice properties. It would be interesting to see how deep networks do for the hard cases. It would also be interesting to see if additional assumptions, such as the existence of clusters or separation between clusters, make ordinal embedding simpler and thus tractable. Another approach is to assume the solution to have low surrogate loss (4), and any convex solver with sufficiently large number of points is able to find such a solution. 
Then the question becomes how deep networks solve the particular convex optimization problem. Thinking along these directions would bring more insight and impact to both the ordinal embedding problem and optimization in deep networks.\", \"one_quick_question\": \"equations (3) and (4)\\n--> isn't this the same as using the hinge loss to bound the zero-one loss?\"}" ] }
ByxoqJrtvr
Learning to Reach Goals Without Reinforcement Learning
[ "Dibya Ghosh", "Abhishek Gupta", "Justin Fu", "Ashwin Reddy", "Coline Devin", "Benjamin Eysenbach", "Sergey Levine" ]
Imitation learning algorithms provide a simple and straightforward approach for training control policies via standard supervised learning methods. By maximizing the likelihood of good actions provided by an expert demonstrator, supervised imitation learning can produce effective policies without the algorithmic complexities and optimization challenges of reinforcement learning, at the cost of requiring an expert demonstrator -- typically a person -- to provide the demonstrations. In this paper, we ask: can we use imitation learning to train effective policies without any expert demonstrations? The key observation that makes this possible is that, in the multi-task setting, trajectories that are generated by a suboptimal policy can still serve as optimal examples for other tasks. In particular, in the setting where the tasks correspond to different goals, every trajectory is a successful demonstration for the state that it actually reaches. Informed by this observation, we propose a very simple algorithm for learning behaviors without any demonstrations, user-provided reward functions, or complex reinforcement learning methods. Our method simply maximizes the likelihood of actions the agent actually took in its own previous rollouts, conditioned on the goal being the state that it actually reached. Although related variants of this approach have been proposed previously in imitation learning settings with example demonstrations, we present the first instance of this approach as a method for learning goal-reaching policies entirely from scratch. We present a theoretical result linking self-supervised imitation learning and reinforcement learning, and empirical results showing that it performs competitively with more complex reinforcement learning methods on a range of challenging goal reaching problems.
[ "Reinforcement Learning", "Goal Reaching", "Imitation Learning" ]
Reject
https://openreview.net/pdf?id=ByxoqJrtvr
https://openreview.net/forum?id=ByxoqJrtvr
ICLR.cc/2020/Conference
2020
{ "note_id": [ "v0hOoPNtCW", "SkxlVY42sr", "rkx2BrNiiS", "S1ewOro5jB", "rkeZrXFcoS", "S1eiHXW9iB", "r1gszulcjH", "B1lVxde5oS", "Hkxzpwx5sr", "S1xCmqT0tB", "B1e5iMfpYS", "H1xKkYT3tS" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798735145, 1573828904485, 1573762372377, 1573725550899, 1573716793419, 1573684035414, 1573681170873, 1573681131533, 1573681082159, 1571899941828, 1571787425876, 1571768545505 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1890/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper1890/Authors" ], [ "ICLR.cc/2020/Conference/Paper1890/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper1890/Authors" ], [ "ICLR.cc/2020/Conference/Paper1890/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper1890/Authors" ], [ "ICLR.cc/2020/Conference/Paper1890/Authors" ], [ "ICLR.cc/2020/Conference/Paper1890/Authors" ], [ "ICLR.cc/2020/Conference/Paper1890/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper1890/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper1890/AnonReviewer1" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"The authors present an algorithm that utilizes ideas from imitation learning to improve on goal-conditioned policy learning methods that rely on RL, such as hindsight experience replay. Several issues of clarity and the correctness of the main theoretical result were addressed during the rebuttal period in a way that satisfied the reviewers with respect to their concerns in these areas. However, after discussion, the reviewers still felt that there were some fundamental issues with the paper, namely that the applicability of this method to more general RL problems (complex reward functions rather than single state goals, time) is unclear. 
The basic idea seems interesting, but it needs further development, and non-trivial modifications, to be broadly applicable as an approach to problems that RL is typically used on. Thus, I recommend rejection of the paper at this time.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Bounds\", \"comment\": \"So I believe the on-policy equivalence you describe in the rebuttal is correct when J_GCSL(pi) is evaluated for trajectories sampled from pi (and becomes a weaker approximation as pi and pi_old deviate). The way Theorem 4.1 is presented just does not make this clear. I would suggest reorganizing that section to incorporate the new bounds from B.1, and state explicitly (perhaps as a Corollary) that the equivalence holds for pi = pi_old.\"}", "{\"title\": \"Clarifications on pi_old and bounds\", \"comment\": \"\\u201cRegarding the new bound in appendix B, would this bound imply that a form of policy iteration (using exact integration over trajectories) would converge given an initial policy satisfying the assumptions required for Theorem 4.1\\u201d\\n-> The new bound provided by Lemma B.1 implies that given an exploratory data collection policy and a fully expressive (e.g. tabular) policy class, in the limit of infinite data (or exact integration over trajectories), we converge to an optimal policy which maximizes the probability of reaching goals in the environment. Note that this kind of infinite sample analysis is typical for such proofs -- e.g., the Trust Region Policy Optimization proofs also only consider the infinite sample limit. Accounting for sampling error in such analysis is generally quite difficult.\\n\\n\\u201cRegarding the assumptions on pi_old \\u2026\\u201d\\n-> We'd like to clarify the statement of Theorem 4.1, which may have been potentially misleading in the original version of the paper - we have updated the paper to clarify this ambiguity. 
Theorem 4.1 demonstrates that J_{GCSL}(pi) is a lower bound on J(pi) when *on-policy* trajectories from pi are relabelled and trained on for the GCSL objective. Following the notation of Schulman et al 2015a, pi_{old} is not an arbitrary distribution, but rather a copy of the policy pi through which gradients do not propagate. This is the same pi_{old} that appears in surrogate objectives for the REINFORCE policy gradient, and in derivations for Schulman et al 2015b. As also discussed in those works, with this definition of pi_old, the two objectives J(pi) and J_{surr}(pi) may have different values, but have the same gradient for all pi, and are thus equivalent up to a constant. We have updated both Section 4 and Appendix B to clarify the definition of pi_{old} used in Theorem 4.1. \\n\\nPlease note that although Theorem 4.1 requires on-policy data, the new bound in Lemma B.1 provides performance guarantees that do not depend on on-policy data collection. \\n\\n1. Schulman, J., Levine, S., Moritz, P., Jordan, M., & Abbeel, P. (2015a). Trust Region Policy Optimization. ICML.\\n2. Schulman, J., Heess, N.M., Weber, T., & Abbeel, P. (2015b). Gradient Estimation Using Stochastic Computation Graphs. NIPS.\"}", "{\"title\": \"Assumptions on pi_old\", \"comment\": \"Thank you for taking the time to address these concerns.\\n\\nRegarding the assumptions on pi_old, those should be made explicit in Section 4 where the theorem is actually stated. Even with this assumption though, I am still not certain that the surrogate loss is equivalent to the true loss up to a constant factor. Specifically, the gradient of the surrogate loss for a specific goal has trajectories weighted by pi_old, conditioned on the fact that they reach the goal, while the gradient of the true loss has them weighted by pi, again conditioned on the fact that the trajectory reaches the goal. 
These distributions might be very different.\\n\\nRegarding the new bound in appendix B, would this bound imply that a form of policy iteration (using exact integration over trajectories) would converge given an initial policy satisfying the assumptions required for Theorem 4.1?\"}", "{\"title\": \"Clarification of relabeling scheme\", \"comment\": \"For a trajectory {s_0, a_0, s_1, a_1, .... s_T}, we add every tuple (s_t, a_t, s_{t+h}, h) with t < t+h <= T to the dataset (a total of O(T^2) tuples for one trajectory). This may seem counterintuitive, but this relabelling strategy arises as a consequence of the particular notion of optimality we seek to maximize (defined in Section 3) - the likelihood of reaching the goal within a time limit of T timesteps. Under this notion of optimality, an optimal trajectory need not find the shortest path to the goal, but rather simply must reach the goal at the desired time-limit. If we witness some trajectory containing the snippet (s_t, a_t, s_{t+1}, a_{t+1}, .... s_{t+h}), this confirms the existence of a path from s_t to s_{t+h} which takes h timesteps when taking action a_t. Therefore, a_t at s_t must be optimal to reach s_{t+h} for a policy which is attempting to reach the goal exactly *h* timesteps in the future. The lack of restrictions on when this relabelling can be done allows us to reuse data aggressively. Although in theory this may lead to \\\"lazy\\\" trajectories which wait or initially go the wrong way, we find in practice that the policy learns generally straightforward paths to the goal (as visualized in Appendix C). We have updated Section 4 of the paper to clarify our relabelling scheme. Please let us know if this addresses your concerns about which tuples are relabelled.\"}", "{\"title\": \"Elaborate on how relabeling is performed\", \"comment\": \"The relabeling of data has been mentioned in the paper, but how is it actually performed? 
In particular, how is an action a_t in s_t measured to be a good action for reaching distant s_{t+h} such that the tuple (s_t, a_t, s_{t+h}, h) is added to the relabeled dataset? The practical implication of this approach depends on the details of this relabeling step.\"}", "{\"title\": \"Response to Reviewer 1\", \"comment\": \"Thank you for your insightful comments and suggestions! We have revised the paper to address the concern about the \\\"notion of optimality,\\\" and we have provided additional theoretical analysis on the relationship between the GCSL loss and J(pi) to address the concerns raised in the review. We provide detailed responses to many of your comments below:\\n\\n\\u201cWhile not a flaw in the work itself, it should be made clear in the text that the notion of optimality for the learning tasks considered in this work (i.e. achieving the goal by the end of episode), avoids one of the apparent limitations of the algorithm.\\u201d\\n-> We agree strongly with this point! We have made this discussion more clear in Sections 3, 4.1 and 5.1. As you point out, \\u201coptimality\\u201d for our method means reaching the goal within a fixed time horizon, not reaching it as quickly as possible. To understand the nature of the behaviors learned by GCSL, we have provided a visualization of learned trajectories in Appendix B.1. We find that these behaviors, while not necessarily the shortest paths in terms of time-steps to the goal, do not take extremely long paths to the goal either. Please let us know if this addresses your concern, or if you would like to see further revisions to address this point.\\n\\n\\u201cAs there don't appear to be any constraints placed on the policy pi_old, ...\\u201d \\n-> To prevent the surrogate loss from being 0 for a given goal, it is indeed required that the probability of reaching the goal is nonzero for pi_old - we have updated the discussion in Appendix B to clarify this point. 
Note that this assumption is not unreasonable, and would be required to guarantee convergence of Q-learning or policy gradient approaches as well. \\n\\n\\u201cIt seems to be the case that the quality of the GCSL loss depends on the relationship between pi_old and the goal distribution p(g).\\u201d \\n-> We agree that our proof presents a bound that is overly loose if the desired goal distribution and the experienced state distribution are very different - the given result is not incorrect, but arguably vacuous in such scenarios. We have included a new section in Appendix B which quantifies the gap between the two losses, as a function of the probability of failure and the distribution shift between \\u201crelabelled\\u201d and \\u201cunrelabelled\\u201d trajectories. We present a new bound in Lemma B.1 that shows that if the GCSL loss is well optimized throughout the state space, the gap between these two losses nears zero. Please let us know if this addresses your concern, or if you would like to see further revisions to address the theory.\"}", "{\"title\": \"Response to Reviewer 3\", \"comment\": \"Thank you for your comments and feedback! We would like to clarify a few aspects about the GCSL algorithm. We have modified the main text of the paper in Section 4 to make these sections more clear and explicit and to address your concerns. Please let us know if these clarifications address your concerns!\\n\\nGCSL is *not* an imitation learning algorithm, but rather an algorithm which leverages ideas from imitation learning to learn goal-reaching behaviors from scratch without the need for any expert demonstration trajectories. Our insight is that even though a trajectory may be suboptimal for the goal it was attempting to reach, it is optimal for reaching the final state of the trajectory. This insight enables us to generate examples of optimal trajectories from potentially suboptimal ones via automated hindsight relabelling. 
By combining automated hindsight relabelling with the optimization techniques from imitation learning, we are able to devise a goal-reaching algorithm which avoids the need for bootstrapping or complicated policy gradient schemes that are prevalent in current RL algorithms, and can learn without the need for any human demonstrations as well. \\n\\n\\u201cIf we could save all of the data regardless of whether an optimal policy generates them or not, why not use them? Less useful data may still contain useful information. The better question is how to use them to learn a policy efficiently.\\u201d\\n-> We absolutely agree about the importance of re-using arbitrary past data to learn policies efficiently! Prior approaches which use all previous data learn policies and value functions via bootstrapping, which is known to be very unstable and difficult to optimize (Kumar et al 2019). What we propose in our paper is a more stable and performant policy optimization scheme borrowing ideas from imitation learning, which is also able to efficiently use all previously collected data, regardless of suboptimality. By performing the automatic hindsight relabelling scheme described in Section 4.1 on all previously collected trajectories, we can transform the policy learning problem into a supervised learning (behavior cloning) objective. This allows us to do policy learning from scratch, while retaining the optimization benefits of supervised learning and imitation learning such as simplicity, stability, scalability to larger neural networks, and easy bootstrapping from demonstrations. \\n\\n\\u201cAlso, it seems that the algorithm would require human knowledge to discern a trajectory as goal-reaching or not, which is contrary to self-supervision.\\u201d\\n-> Since we are using automated relabeling to make use of *all* trajectory data that was collected, there is no need for human knowledge to discern goal-reaching trajectories. 
Could you clarify what you mean by human knowledge in this case? We believe this may stem from a misreading of the paper, which we are eager to correct.\\n\\n\\u201cThe sampled trajectories in the set could be suboptimal for reaching a goal, and there\\u2019s little evidence that optimizing J_GCSL(\\\\pi) will learn an optimal policy based on these data.\\u201d\\n-> While a trajectory may be suboptimal for reaching the goal that it was trying to reach, after the relabeling step (described in Section 4.1 and line 6 of Algorithm 1), the trajectory becomes optimal for the relabeled goal under the notion of optimality defined in Equation 1. This is important because this can now be treated as expert data to optimize J_{GCSL} correctly. \\n\\n\\u201cThe gathering of trajectories and identifying the trajectory as goal-reaching is already a costly step, where no learning happens. RL, on the other hand, would gather the data incrementally, learn, and act right away\\u201d\\n-> GCSL actually incurs the same data collection complexity as more traditional RL algorithms. Prior works developing RL algorithms for goal-reaching [Eysenbach et al, Lin et al] also perform trajectory gathering and relabelling prior to training the policy and value function. Although we presented GCSL in separate data collection, relabelling, and training substeps, all three of these processes can be performed concurrently, just as you mentioned.\\n\\n\\n1. Eysenbach, B., Salakhutdinov, R., & Levine, S. (2019). Search on the Replay Buffer: Bridging Planning and Reinforcement Learning. \\n2. Lin, X., Baweja, H.S., & Held, D. (2019). Reinforcement Learning without Ground-Truth State. ArXiv, abs/1905.07866.\"}", "{\"title\": \"Response to Reviewer 2\", \"comment\": \"Thank you for your insightful comments and suggestions! We have updated the paper to address your concerns about the connection between our method and RL, and the role of exploration in GCSL. 
Please find detailed responses to your comments below:\\n\\n\\u201cThe method is interesting but is still an \\\"RL\\\" method.\\u201d \\n\\n-> We agree that goal-reaching can be written as an RL problem! In Paragraph 2 of Section 3 (Preliminaries), we describe an explicit equivalence between our formulation and RL with a sparse indicator reward for reaching the goal. Due to this equivalence, our method implicitly maximizes the reward function defined in Section 3, and thus is an \\u201cRL method\\u201d. However, unlike more standard \\u201cRL methods\\u201d like TD3 or TRPO (which we compare to in our experiments), we do not rely on dynamic programming or complex policy gradient schemes, but simply use supervised learning as a subroutine in acquiring goal reaching behaviors. We have updated Section 3 to more clearly describe the MDP formulation for goal-reaching and the connections between our algorithm and other RL methods. \\n\\n\\u201cNote that in the method, the algorithm is not doing effective exploration but just randomly explores until you collect sufficient data to solve for a new goal.\\u201d \\n\\n-> We agree that exploration for our method can be improved! With our current exploration strategy (adding action noise), the quality of exploration is influenced greatly by performance - as the agent becomes better at reaching the goals it has seen, the probability of reaching goals on the fringe that have not been encountered previously increases. That said, most RL methods utilize exploration strategies similar to GCSL -- e.g., TRPO and PPO use Gaussian policies, DDPG and HER add time-correlated noise, etc. 
While dedicated exploration methods such as pseudocounts, intrinsic motivation, and RND could improve exploration, we believe this is an orthogonal direction to the current contribution.\\n\\n\\u201cIf you formulate the problem better, you can see that it actually has a reward\\u201d\\n-> We agree that our algorithm is implicitly optimizing an indicator reward, and for that reason we include two baselines which compare with using the same reward as our method and running model-free RL via TD3 or TRPO. We find that these algorithms perform comparably to or worse than GCSL despite being much more complex. We are not certain what you were suggesting with the model-based baselines; would you be able to provide more details about your suggestion so we can run the comparison?\"}", "{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposes a method to learn to reach goals in an RL environment. The method is based on principles of imitation learning. For instance, beginning with an arbitrary policy that samples a sequence of state-action pairs, in the next iteration, the algorithm treats the previous policy as an expert by relabeling its ending state as a goal. The paper shows that the method is theoretically sound and effective empirically for goal-achieving tasks.\\n\\nThe paper is relatively clear and experiments are okay. I would then place it on the positive side of the borderline.\", \"comments\": [\"The method is interesting but is still an \\\"RL\\\" method. So it is really learning to reach the goal via \\\"RL\\\". 
Note that in the method, the algorithm is not doing effective exploration but just randomly explores until you collect sufficient data to solve for a new goal.\", \"If you formulate the problem better, you can see that it actually has a reward: add an initial state s0; for each g sampled from p(g), transition s0 to an MDP with goal g. You can now do the usual RL algorithm in this new MDP. I would think you can also do model-based learning -- give the model a good representation and then use the policies to learn the dynamics. It may be worth comparing your algorithm with these natural baselines.\"]}", "{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper claims to do imitation learning without expert demonstration using trajectories that are generated by suboptimal policies from other tasks.\\n\\nThe point of having an expert demonstrator is to help narrow the search for an optimal policy. By taking the expert demonstration knowledge out of learning, to me, this is not retaining the benefit of imitation learning. Thus, the paper is not about imitation learning, but rather about an optimization method that reuses data generated from multiple tasks. Reusing trajectory data generated from multiple tasks to learn a policy of another task is not a novel idea. If we could save all of the data regardless of whether an optimal policy generates them or not, why not use them? Less useful data may still contain useful information. The better question is how to use them to learn a policy efficiently. If the motivation is to use trajectories from suboptimal policies from other tasks without expert knowledge, then I fail to see the motivation and the novelty of this paper. 
\\n\\nThe paper claims that the methodology self-supervises each action taken, judging how good it is for reaching a goal in the future without learning Q-values. However, this was not realized. The methodology gathers all trajectories that reach a goal into a set, and uses behaviour cloning on the data of the set to learn a policy. The sampled trajectories in the set could be suboptimal for reaching a goal, and there\\u2019s little evidence that optimizing J_GCSL(\\\\pi) will learn an optimal policy based on these data. Optimizing objective J_GCSL(\\\\pi) also does not take the long-term effect of actions into account. The gathering of trajectories and identifying the trajectory as goal-reaching is already a costly step, where no learning happens. RL, on the other hand, would gather the data incrementally, learn, and act right away. Also, it seems that the algorithm would require human knowledge to discern a trajectory as goal-reaching or not, which is contrary to self-supervision.\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This work presents the goal-conditioned supervised learning algorithm (GCSL), which learns goal conditioned policies using only behavioral-cloning of the agent's own actions. The intuition behind the algorithm is that the goal of an observed trajectory can be identified after the fact, by simply looking at the states reached during that trajectory. GCSL treats each executed action as a sample from the expert policy conditioned on each of the states reached after that action is taken. 
Given a distribution over goal states, GCSL alternates between executing its current goal-conditioned policy on randomly selected goals, and learning to imitate the generated actions conditioned on the states they actually reached. Experimental results demonstrate superior performance against a base (non-goal conditioned) RL algorithm (TRPO), and against another approach to learning goal-conditioned policies (TD3-HER), on a relatively diverse set of control problems.\\n\\nA major issue is that the proof of the main theoretical result appears to be wrong. As there don't appear to be any constraints placed on the policy pi_old, it would seem that the surrogate loss would collapse to 0 for any policy pi if pi_old is such that the target goal is never reached (the probability of any trajectory t reaching g is 0 under pi_old(t|g)). It seems to be the case that the quality of the GCSL loss depends on the relationship between pi_old and the goal distribution p(g). The fact that the theoretical results are incorrect does not mean that the algorithm, or the general approach, do not have value, but it does highlight the fact that this approach may only be effective for a specific class of problems similar to the experimental domains.\\n\\nWhile not a flaw in the work itself, it should be made clear in the text that the notion of optimality for the learning tasks considered in this work (i.e. achieving the goal by the end of episode), avoids one of the apparent limitations of the algorithm. A randomly generated trajectory is itself optimal for any state that it reaches, if we define optimality as simply reaching a state. Such a trajectory may not be the most efficient way of reaching that state however, so the relabelling process would seem to be prone to learning policies that achieve the conditioned goals, but do not do so in an efficient manner. 
It isn't clear how well this approach would work for tasks where the efficiency, in terms of the time required to reach the objective, is a key part of the evaluation. Again, this is not a flaw in the work itself, and it is possible that the algorithm will be effective in such tasks, perhaps because the likelihood of an action resulting in a given state is higher if that action brings us closer to this state. It might be useful to conduct some additional experiments where evaluation is based on the time required to solve a task, rather than just the accuracy of the final state.\"}" ] }
S1eq9yrYvH
Subjective Reinforcement Learning for Open Complex Environments
[ "Zhile Yang*", "Haichuan Gao*", "Xin Su", "Shangqi Guo", "Feng Chen" ]
Solving tasks in open environments has been one of the long-time pursuits of reinforcement learning research. We propose that data confusion is the core underlying problem. Although there exist methods that implicitly alleviate it from different perspectives, we argue that their solutions are based on task-specific prior knowledge that is constrained to certain kinds of tasks and lacks theoretical guarantees. In this paper, the Subjective Reinforcement Learning Framework is proposed to state the problem from a broader and more systematic view, and the subjective policy is proposed to represent existing related algorithms in general. Theoretical analysis is given of the conditions for the superiority of a subjective policy, and of the relationship between model complexity and overall performance. The results are further applied as guidance for algorithm design without task-specific prior knowledge.
[ "reinforcement learning theory", "subjective learning" ]
Reject
https://openreview.net/pdf?id=S1eq9yrYvH
https://openreview.net/forum?id=S1eq9yrYvH
ICLR.cc/2020/Conference
2020
{ "note_id": [ "xJWyBk6ujM", "B1evH6posB", "BkxTucDssr", "rJgpj9tDjB", "ryedVcYPsr", "Hye0HYKwsB", "rJg8QSNJ9S", "SJlHqPKaKS", "SklceeAvKS" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798735111, 1573801278565, 1573775988886, 1573522085475, 1573521967582, 1573521734346, 1571927326092, 1571817356834, 1571442674159 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1888/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper1888/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper1888/Authors" ], [ "ICLR.cc/2020/Conference/Paper1888/Authors" ], [ "ICLR.cc/2020/Conference/Paper1888/Authors" ], [ "ICLR.cc/2020/Conference/Paper1888/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper1888/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper1888/AnonReviewer1" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"The authors propose a learning framework to reframe non-stationary MDPs as smaller stationary MDPs, thus hopefully addressing problems with contradictory or continually changing environments. A policy is learned for each sub-MDP, and the authors present theoretical guarantees that the reframing does not inhibit agent performance.\\n\\nThe reviewers discussed the paper and the authors' rebuttal. They were mainly concerned that the submission offered no practical implementation or demonstration of feasibility, and secondarily concerned that the paper was unclearly written and motivated. 
The authors' rebuttal did not resolve these issues.\\n\\nMy recommendation is to reject the submission and encourage the authors to develop an empirical validation of their method before resubmitting.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Thank you for the response.\", \"comment\": \"Roughly speaking, the updated paper fixes some writing problems, replaces sums by integrals around R, and replaces \\\"R\\\" by \\\"Risk\\\" where appropriate. While it is a small improvement, the whole paper still lacks a lot of clarity - a sentiment reflected by both other reviewers too.\\n\\nThis also does not address any of the concerns. I am still of the opinion that the framework automatically segregating the MDP into individual, simpler MDPs based on no prior knowledge whatsoever is an extraordinary claim, requiring extraordinary evidence. If this were supported by a concrete instantiation of the framework and its effectiveness demonstrated on at least some tasks, my rating would be very different. As-is, I am not ready to recommend acceptance of this work.\"}
However, I'm still concerned about the computational cost of the proposed framework (i.e., maintaining value functions and/or policies for each subjective MDP), and would still have liked to have seen an implementable algorithm with an experiment showing it can work.\"}", "{\"title\": \"Author response to Review #1\", \"comment\": \"We appreciate the time you took reviewing our submission and hope our response helps address some of your concerns.\", \"replies_to_the_main_part_of_review\": \"(1) Q: \\u201cthe theoretical analysis is difficult to follow and there is sometimes a lack of clarity throughout the paper\\u201d\", \"a\": \"In section 2 we briefly introduce some closely related fields. We summarize that techniques in these fields introduce task-specific designs to reach better performance in certain kinds of tasks. Our work hopes to unify such techniques and provide efficient algorithm designs without human priors. From this perspective, we agree that more discussion can help strengthen our conclusion and clarify our idea, but we think there are no other comparable differences between our work and other concrete techniques. Please see (3) in the part below for one example.\", \"replies_to_the_concrete_points\": \"(1) Please see replies to question 2 in the part above.\\n\\n(2) In our framework, \\u201c\\\\kappa\\u201d denotes the information provided by the environment other than the current state \\u201cs_t\\u201d, and so our method only chooses how to utilize it according to its actual instantiation in tasks. For a concrete example, suppose our subjective \\u201c\\\\bold{h}\\u201d is designed to be a one-hot vector; then we get m=N_S; the actual data (including \\\\kappa) and the chosen form of loss function \\u201c\\\\script{L}\\u201d give concrete bounds defined in eq. (24); given expected \\\\zeta and \\\\eta, inequality (23) gives us the relationships that m, u_b, u_d (VC dimensions of function approximators) should obey. 
In this way, we can determine N_S with given data and without domain knowledge.\\n\\n(3) In hierarchical RL the hyperparameters of function approximators (e.g. the number of layers of a neural network) and the maximum number of lower-level policies are designed by the designer; in our work we propose to adjust them automatically according to the actual data and with theoretical guarantees, based on our analysis of the relationship between overall performance and some key variables.\\n\\n(4) \\na) Thanks for your advice. We will pay more attention to the simplicity of notations in our future work.\\nb) We wanted to express \\u201cthe number of each kind of data samples tends to infinity\\u201d as the condition for theorem 1, to remove the possibility of insufficient exploration. To make it clearer, we update it to \\u201call possible data samples appear infinite times\\u201d.\\nc) The main idea in the proof of theorem 1 is to show that any policy in the original form can be expressed by a subjective policy. We use \\u201cfake\\u201d to emphasize that the defined \\u201c\\\\pi_{z,fake}\\u201d shares the same form as a subjective policy but is actually equal to the policy in the original form.\\nd) Yes, we are assuming \\\\kappa is pre-determined with the tasks. In this paper we only focus on how to use the given information with theoretical guarantees. When there is more than one source (type) of \\\\kappa available, we may consider applying eq. (23) to these sources respectively and selecting one according to some preferences, e.g. the affordable VC dimension of the function approximators.
We hope the following may address some of your concerns.\", \"replies_to_the_main_part_of_review\": \"\", \"we_regret_not_having_made_it_clearer_about_our_design_of_the_theorems\": \"1) Theorem 1 aims to provide a basic result that the subjective policy will not get worse results than the original form of policy; this is quite easy to get and not practically useful.\\n2) In theorem 2 we continue to analyze the relation between the bound on the performance of a converged policy (lim|G(\\\\pi)-G*|) and the worst-case bound on the error of function approximation (\\\\epsilon); this enables us to analyze the final performance through analyzing the function approximator, where the subjective policy makes a difference.\\n3) Then in theorem 3 we get the relationship between the risk bound of the function approximator and some variables that are determined by the data and the hyperparameters of the selected function model; this enables us to control the overall performance by adjusting the related variables according to the data obtained in specific tasks.\\n\\nIn this paper, we wish to take a step forward in theory towards algorithm design while considering general cases of reinforcement learning tasks. As we analyzed in section 5, currently the variables (e.g. m, u_b, u_d) can theoretically be adjusted but there do exist difficulties when considering some function models, and we acknowledge that concrete examples may well help prove the practicability of our work.\\n\\nReplies to \\u201cgeneral comments\\u201d:\\n1) Q: \\u201cThe definition of rewards is confusing; script R was never defined.\\u201d\\n\", \"a\": \"We recognize that in many publications the reward function is defined as R_0(s, a). 
In fact, we think these forms are equal, because R_0(s, a) = \\sum_{s\\u2019} P(s\\u2019 | s, a) R(s\\u2019) (here s\\u2019 stands for the next state).\"}", "{\"title\": \"Author response to Review #3\", \"comment\": \"Thank you for your detailed and insightful review. We hope the following addresses some of your concerns.\", \"q\": \"some writing problems\", \"a\": \"In our updated version, we have corrected some mistakes in grammar and citation, including the ones mentioned in your review.\\n\\n[1] Sontag, E. D. (1998). VC dimension of neural networks. NATO ASI Series F Computer and Systems Sciences, 168, 69-96.\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper suggests that one common problem encountered by reinforcement learning algorithms in open environments is \\\"data confusion\\\", which essentially means showing the same input data with different --possibly contradictory-- labels/targets.\\n\\nThe proposed solution to this conceptual problem is to split the original MDP \\\"M\\\" up into multiple simpler MDPs \\\"Mk\\\", where M does contain possibly contradictory (\\\"confusing\\\") data, while each individual Mk does not contain any such problem and, even better, is stationary.\\n\\nThe \\\"subjectivity\\\" function \\\"h\\\" then has the role of splitting any data tuple across the Mk, possibly using extra information kappa.\\n\\nFurthermore, several theorems show that under several conditions, the return of the subjective policy (learned via Mk) is not worse than that of the \\\"traditional\\\" policy.\\n\\n\\nI lean towards rejecting this paper. 
The whole gist of the framework can be crudely summarized as \\\"if data contradicts, split up into non-contradictory sets using extra info.\\\" The motivation keeps repeating that no task-specific prior knowledge is necessary, but I believe this hinges on \\\"h\\\" being sensible, which might not be feasible without task-specific prior knowledge.\\n\\nFurthermore, and this is my main concern, there is not a single experiment demonstrating how any of this would behave in practice. It would be good to have one (possibly constructed) experiment showing that data confusion indeed is a problem in practice (intuitively, it is), and then a specific instantiation of the framework that solves this example. Furthermore, I am not convinced that the proposed bounds can easily be concretized for an instantiation of the proposed framework, especially when considering deep networks; again, this concern could be alleviated by an example instantiation. Proposing something that is in principle more general and \\\"in principle cannot be worse\\\" but then not demonstrating that it actually is the case is, in my opinion, not enough.\\n\\n\\n\\nFinally, and this is not a deciding factor in my rating, the paper has quite some writing problems. On the first page alone, I found a lot of spelling and grammatical mistakes (see list at end) and the notation is sometimes confusing to me. For example, \\\"R\\\" is defined as a mapping of S x /R -> [0,1], but what is \\\"/R\\\" (curly R)? And then in (1) R is used with a single argument while in (2) not anymore. I can guess what is meant, but it feels inconsistent. 
In Theorem 1, I believe it should be \\\"the gap \\\\delta >= 0\\\" and not \\\"the gap g >= 0\\\", no?\\n\\nAbstract and 1st paragraph mistakes (unfortunately, no line numbers in this template!): \\\"researches\\\" -> \\\"research\\\", \\\"algorithm designing\\\" -> \\\"algorithm design\\\", \\\"task-specific prior knowledge about tasks.\\\" -> \\\"task-specific prior knowledge.\\\", \\\"not known in prior\\\" -> \\\"not known a priori\\\", \\\"Classical RL model environment...\\\" -> \\\"Classical RL models environment...\\\".\\nAlso, quite some citations are missing the year, e.g. Schaul et al., Papavassiliou&Russell, ...\"}", "{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper introduces the Subjective Reinforcement Learning framework to formalize the problem of using extra information to split large, nonstationary environments into separate, simple, stationary MDPs. First, the paper introduces and motivates the problem on a high level: the same state-action pairs may transit to different successive states and rewards because of nonstationary dynamics/reward function, or variance in the environment or tasks. This phenomenon is termed \\\"data confusion\\\". The paper then summarizes some related approaches to dealing with this phenomenon. 
Next, the paper introduces the subjective RL framework in detail in section 3:\\n- Extra information (kappa) is needed to resolve the data confusion.\\n- The \\\"subjectivity\\\" (h) is a function that maps the extra information to a vector of weights over \\\"subjective\\\" MDPs.\\n- A policy is maintained for each subjective MDP, and the overall policy is the vector product of h and the vector of subjective policies.\\nIn section 4, the paper presents 3 theorems arguing that using the subjective RL framework doesn't harm performance. The paper then gives brief guidelines for designing algorithms using the subjective RL framework before concluding.\\n\\nAt the present time I recommend rejecting the paper. It does not actually present a concrete solution method, instead simply giving brief guidelines for the reader to design algorithms by. The subjective RL framework unifies and subsumes several existing approaches, but I don't feel that in itself is a significant enough contribution to warrant publication. The theorems presented essentially argue that using the subjective RL framework does not harm performance, but there is no mention of the computational costs involved with maintaining policies for each subjective MDP. In addition, it's not clear where the subjective MDPs come from.\\nThe paper also had issues with clarity, including many grammatical errors.\\n\\nI think this paper tackles an important problem from an interesting point of view, but stops short of giving a concrete algorithm that can be implemented and tested. It seems like a good candidate for a workshop, which could be a good opportunity for discussion and feedback.\", \"general_comments\": [\"The definition of rewards is confusing; script R was never defined.\", \"\\\"minimize objective 1\\\" should actually be \\\"maximize objective 1\\\"?\", \"Rewards are usually defined over state-action pairs, not just states. 
Why choose this unconventional formulation for rewards?\"]}", "{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper introduces a new framework for reinforcement learning (named subjective reinforcement learning) which aims to resolve some of the inherent problems with RL in open environments. The authors posit that one problem with RL in open settings is \\u201cdata confusion\\u201d, which they describe as being situations where there are external factors (e.g. timeframe) that affect the action space differently. They propose a \\u201csubjective reinforcement learning framework\\u201d which, as I understand it, can be described as an ensemble of traditional MDPs subject to external factors k. The paper evaluates how this subjective policy compares to traditional MDPs in terms of theoretical bounds on performance.\\n\\nAlthough this learning framework seems like a potentially interesting future research direction, I tend to lean towards rejection for the reasons that: (1) the theoretical analysis is difficult to follow and there is sometimes a lack of clarity throughout the paper, (2) it isn\\u2019t very clear how easy this framework would be to implement aside from the theoretical guarantees and there aren\\u2019t any experiments or proofs of concept that would demonstrate the feasibility or practicality of the proposed framework in a real scenario, (3) the paper would benefit from more discussion of how their work differs from related techniques (like hierarchical RL, various forms of meta-learning, etc.).\", \"here_are_some_more_concrete_points\": [\"(1) The feasibility of this framework in a real-world scenario seems a bit hard to imagine and a strong use-case or proof-of-concept would be very helpful. 
I liked that this paper provided some theoretical analysis for guiding the design of these systems. However, it seems like these claims would also benefit from detailed empirical analysis and experiments. Without empirical results, I feel a bit skeptical about how straightforward it would be to implement such a system or whether it would really be significantly useful in practice. Similarly, though there are theory-based suggestions for how to optimally design such a system, it might be difficult to implement this system with optimal hyperparameters in a real-world use-case and the challenges in doing so are not really addressed.\", \"(2) In spite of claims that this method can be trained without domain knowledge, it seems like domain knowledge would still be necessary for things like determining what external information (K) is available and necessary, determining the appropriate N_S, etc. It may be helpful for the authors to explain a bit more about how these things can be determined in a truly agnostic way.\", \"(3) It seems like there should be more discussion of the difference from hierarchical reinforcement learning. In practice, hierarchical RL also can be used in similar ways to what\\u2019s described here. As the authors point out, hierarchical RL is not necessarily splitting into submodels that handle data confusion problems; it seems like that is a constraint that could be added into a hierarchical framework\\u2019s design.\", \"(4) I appreciate that the authors provide detailed theoretical analysis, but it can sometimes be confusing and difficult to follow. I had some trouble evaluating the correctness of several of the proofs. It may benefit from re-writing with more concise definitions of all of the variables and more clearly stated assumptions about observable information. Here are some points of confusion for me:\", \"It seems like certain letters (e.g. A or K or N_s vs N_d) are being overloaded as variable names with different fonts. 
I realize that this is somewhat unavoidable, but I would recommend that the authors try to disentangle the naming a bit for improved clarity.\", \"In theorem 1, you stated \\\"the number of all possible data samples tends to infinity when the total number of samples N_d approaches infinity\\\". This proposition seemed confusingly worded to me. Maybe I am misunderstanding the wording, but it seems like possibly a tautology?\", \"I\\u2019m not sure I understand what is meant by \\u201cfake subjective policies\\u201d in Theorem 1. Could you explain what is meant by that and the intuition here a bit more?\", \"It\\u2019s still unclear to me how K (that is, the external information) is being collected at any given timeframe. Are you assuming that the necessary types of external information corresponding to K (your examples are \\u2018state history, out-of-MDP task encodings, samples from related tasks, etc.\\u2019) have been pre-determined? If so, how could the most-appropriate type of external information be chosen in a practical way?\", \"I also noticed a few (very minor) grammar errors that authors may want to fix, though they did not affect my review:\", \"page 1: \\\"tasks in open environments poses difficulties\\\" --> \\\"tasks in open environments pose difficulties\\\"\", \"page 1: \\\"Problem is that both...\\\" --> \\\"The problem is that both...\\\"\", \"page 2: \\\"we propose a novel framework named as Subjective reinforcement\\\" --> \\\"we propose a novel framework named Subjective reinforcement\\\"\", \"page 4: \\\"should contain no data confusion\\\" --> \\\"should not contain data confusions\\\"\"]}" ] }
SJeq9JBFvH
Deep probabilistic subsampling for task-adaptive compressed sensing
[ "Iris A.M. Huijben", "Bastiaan S. Veeling", "Ruud J.G. van Sloun" ]
The field of deep learning is commonly concerned with optimizing predictive models using large pre-acquired datasets of densely sampled datapoints or signals. In this work, we demonstrate that the deep learning paradigm can be extended to incorporate a subsampling scheme that is jointly optimized under a desired minimum sample rate. We present Deep Probabilistic Subsampling (DPS), a widely applicable framework for task-adaptive compressed sensing that enables end-to-end optimization of an optimal subset of signal samples with a subsequent model that performs a required task. We demonstrate strong performance on reconstruction and classification tasks of a toy dataset, MNIST, and CIFAR10 under stringent subsampling rates in both the pixel and the spatial frequency domain. Due to the task-agnostic nature of the framework, DPS is directly applicable to all real-world domains that benefit from sample rate reduction.
[ "deep probabilistic subsampling", "compressed", "dps", "field", "deep learning", "predictive models", "large", "datasets", "datapoints", "signals" ]
Accept (Poster)
https://openreview.net/pdf?id=SJeq9JBFvH
https://openreview.net/forum?id=SJeq9JBFvH
ICLR.cc/2020/Conference
2020
{ "note_id": [ "qeMM2hxLr6", "rIS6lnLl9", "HygTnR92iS", "SkxGMh42oS", "BJxGYGYijH", "B1gZYamuor", "r1gTIpmuiH", "BkeuqUfOsr", "BJxwQLGOsH", "H1lPXEMuoH", "Syerno7kqH", "r1lbykR6YB", "r1xxdbB9KH" ], "note_type": [ "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1581683807910, 1576798735081, 1573854900849, 1573829641738, 1573782138501, 1573563769017, 1573563733301, 1573557904308, 1573557791291, 1573557278797, 1571924909021, 1571835609159, 1571602791693 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Paper1887/Authors" ], [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1887/Authors" ], [ "ICLR.cc/2020/Conference/Paper1887/Authors" ], [ "ICLR.cc/2020/Conference/Paper1887/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper1887/Authors" ], [ "ICLR.cc/2020/Conference/Paper1887/Authors" ], [ "ICLR.cc/2020/Conference/Paper1887/Authors" ], [ "ICLR.cc/2020/Conference/Paper1887/Authors" ], [ "ICLR.cc/2020/Conference/Paper1887/Authors" ], [ "ICLR.cc/2020/Conference/Paper1887/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper1887/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper1887/AnonReviewer3" ] ], "structured_content_str": [ "{\"title\": \"Answer on final decision\", \"comment\": \"We thank the referees for the final decision of accepting the paper, and the final comments.\\nRegarding these final comments;\\n\\nWe now added some explanation to section 3.2, in which we explain how the bias and the variance of the grandient's estimator depend on the temperature of the softmax in Gumbel-softmax sampling. We did not investigate this bias ourselves, however we add two references that aim to reduce the gradient's estimator bias. \\n\\nFor all experiments, we had separate train, validation and test sets. 
As was already mentioned in the manuscript, training was stopped when the validation loss plateaued. We also tuned hyperparameters solely on the validation set, and only used the test set to run final inference for the results sections. In the final version, we included a link to our open source code, in which one can also see that the test set is only used for final inference.\"}", "{\"decision\": \"Accept (Poster)\", \"comment\": \"This paper introduces a probabilistic data subsampling scheme that can be optimized end-to-end. The experimental evaluation is a bit weak, focusing mostly on toy-scale problems, and I would have liked to see a discussion of bias in the Gumbel-max gradient estimator.\\n\\nIt's also not clear how the free hyperparameters for this method were chosen, which makes me suspect they were tuned on the test set.\\n\\nHowever, the overall idea is sensible, and the area seems under-explored.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Further clarification about experiment regarding disjoint optimization\", \"comment\": \"As a follow-up on our answer regarding the second question, we would like to mention that we added a case in the MNIST classification experiment (DPS-topk), in which we jointly train a reconstruction network with a subsampling pattern. We subsequently train the classifier network on the reconstructed images. It shows that learning a task-adaptive (classification in this case) sampling pattern outperforms disjoint learning of sampling and the task.\"}", "{\"title\": \"Response\", \"comment\": \"Thank you for getting back to us. We are happy to share that we have extended our experiments to include the recent competing method LOUPE from Bahadir et al. (2019). Other methods in traditional compressive sampling literature focus only on the reconstruction task and thus do not fit the full scope of our experiments. Besides, the algorithms typically used for this task, e.g. 
proximal gradient schemes, are iterative in nature and therefore less applicable to real-time applications.\\nLastly, we argue that the fixed random subsampling pattern followed by an unfolded proximal gradient scheme for the CIFAR10 reconstruction case follows classical CS principles, in which partial Fourier measurements are used by an iterative proximal gradient scheme in order to reconstruct the original signal. We clearly see worse performance here compared to DPS, as a random pattern is still pseudo-random and therefore often not able to prevent aliasing artifacts.\\n\\nWe agree that it is too strong a claim to say that DPS fully exploits the data distribution and information-need of a downstream task. We have reformulated this section to read \\\"to focus solely on the information required to solve the downstream task given the underlying data distribution\\\". We do not aim to claim that our method finds the global optimum in this situation, but rather focus on the empirical evidence that DPS performs well on a variety of tasks. We rely on the generally accepted understanding that deep learning methods effectively discover patterns relevant to the supervised task. \\nWe did, however, add a case in which we train MNIST reconstruction, followed by a separately trained classifier. This case confirms our hypothesis that task-adaptive learning is beneficial.\\n\\nAfter further tuning the parameters, and bringing DPS-topK better in line with the implementation of DPS-top1, we published new results on DPS-top1 and DPS-topK for all cases. The algorithmic description of DPS-top1 and DPS-topK is now added in appendix b as well. \\n\\nFinally, we have extended our manuscript with a more elaborate description of the Gumbel-max trick (below eq. 4), and a definition of the probability distribution (footnote 1). 
Moreover, tables including all the layer parameters of the task networks (appendix c), and training curves demonstrating empirical convergence (appendix d) are added. We believe this should address all the concerns mentioned before.\"}", "{\"title\": \"Response\", \"comment\": \"Thank you for the kind response and answering my questions clearly. The reviewer agrees that this paper did extensive work on deep probabilistic subsampling for compressed sensing based on the Gumbel-max trick and back-propagation.\\n\\nHowever, the justification of this paper should be supported by extensive comparison with other competitors on large datasets. The reviewer expects to see extensive results in comparison with state-of-the-art compressed sensing methods in the revision.\\n\\nThe reviewer also expects the authors to include these explanations in the revision. In particular, for the description \\\"the main shortcoming of compressed sensing: \\n\\u201cThese [compressed sensing] methods, however, are lacking in the sense that they do not fully exploit both the underlying data distribution and information to solve the downstream task of interest.\\u201d\\\",\\nplease include evidence to support your claim, otherwise it is too subjective. Does that mean the proposed algorithm \\\"fully exploit both the underlying data distribution and information to solve the downstream task of interest\\\"? If so, please also provide evidence.\\n\\nThe authors' response did not include their willingness to provide an algorithm description. The reviewer would expect to see an algorithm description. 
In addition, a description of the Gumbel-max trick in the main text or the appendix would help readers get to know the form of the distribution.\\n\\nThough it is hard to show the convergence rate in theory, it would be better to provide an empirical convergence curve.\\n\\nThe reviewer is willing to change the score if the above concerns get addressed.\"}", "{\"title\": \"Reply to review #1 (part 1)\", \"comment\": \"We thank the reviewer for the feedback. The reviewer states in his/her summary that the parameterization is used to simplify the subsampling distribution. We would like to comment on this by stating that the reparametrization is not used for simplifying the subsampling distribution; on the contrary, it actually enables sampling from this trained distribution. In fact, without this reparametrization, our generative sampling model (DPS) would not be differentiable. Below we elaborate upon the questions and concerns raised by the reviewer:\", \"question_1\": \"We respectfully disagree with the referee\\u2019s conclusions, and will elaborate on the above statements in the following. While we disagree, we did, however, undertake a significant effort to further clarify and provide additional evidence in the revised manuscript, taking into account these comments.\\n\\nRegarding the theoretical correctness of deep probabilistic subsampling, in section 3.2 we explain how we incorporate a well-known reparametrization trick, termed the Gumbel-max trick (Gumbel, 1954), to sample from a categorical probability distribution. Note that this shares similarities with the reparameterization trick used for sampling from trained Gaussian distributions in a vanilla variational autoencoder. The Gumbel-max reparametrization perturbs the logits of the categorical distribution with Gumbel noise, after which, by means of the argmax, the highest value is selected. Gumbel (1954) showed that this reparametrization allows sampling from the original categorical distribution. 
\\nRecent state-of-the-art work on a relaxation of this trick, termed Gumbel-softmax sampling (Jang et al., 2017) or the concrete distribution (Maddison et al., 2016), allows us to apply this relaxed reparametrization inside a neural network as it enables gradient calculation, which is needed for error backpropagation in the training procedure of the network. We would like to ask the reviewer what is believed to be missing from this explanation of the subsampling part of our proposed method. \\n\\nRegarding the theoretical basis used for the design of the task network: we took a theoretically principled approach by exploiting a model-driven network architecture for the CIFAR10 reconstruction problem. To that end, we unfold the iterations of a proximal gradient scheme (Mardani et al., NeurIPS, 2018), allowing for explicit embedding of the acquisition model (and therewith the learned sampling) in the reconstruction network.\\n\\nRegarding the referee\\u2019s conclusion that the manuscript lacks comparison to the approaches of Xie & Ermon (2019), Kool et al. (2019), and Pl\\u00f6tz & Roth (2018): We would like to point out that these three references together put forward the Gumbel top-k method. Note that the use of the Gumbel top-k method for compressive sampling is also new, and in fact constitutes a specific case (a constrained version with shared weights across distributions) of the proposed deep probabilistic subsampling (DPS) framework. In the MNIST experiments we already included Gumbel top-k sampling, but we will also add this for the other experiments in the revised manuscript. 
In addition, we added a thorough comparison of DPS to LOUPE (Bahadir et al., 2019), a recently proposed data-driven method for subsampling.\", \"question_2\": \"We would first like to refer the referee to the third paragraph of the introduction, where we explicitly formulate the main shortcoming of compressed sensing: \\n\\n\\u201cThese [compressed sensing] methods, however, are lacking in the sense that they do not fully exploit both the underlying data distribution and information to solve the downstream task of interest.\\u201d\\n\\nThen, in the list of main contributions, we write:\\n\\u201cDPS: A new regime for task-adaptive subsampling using a novel probabilistic deep learning framework for jointly learning a sub-Nyquist sampling scheme with a predictive model for downstream tasks\\u201d\", \"subquestion_2\": \"We are of course willing to further specify any details that the referee misses in the current paper. We would therefore like to kindly invite the referee to be specific about the details that he/she would like to be added to the manuscript. \\n\\nWe respectfully disagree with the referee\\u2019s conclusion that the method does not support a significant contribution. We propose a fully-probabilistic generative model for trainable sampling that exploits both the underlying data distribution and information to solve the downstream task of interest. Our generative model builds upon recent advances on Gumbel-max and top-k reparameterizations and their relaxations, showing for the first time how discrete sample selection can be done in a data-driven and task-adaptive fashion. This opens up a vast array of new opportunities in compressed sensing.\"}", "{\"title\": \"Reply to review #1 (part 2)\", \"comment\": \"\", \"question_3\": \"We know experience replay as a reinforcement learning technique for storing previous state/action pairs. 
However, our method does not make use of reinforcement learning, so could the reviewer please elaborate on how experience replay would relate to our method?\", \"question_4\": \"In Section 4.1 (MNIST classification) we already compared our proposed sampling method to Gumbel top-K sampling for data subsampling. We are currently also running experiments that allow for extensive comparison with the recently proposed LOUPE method by Bahadir et al. (2019).\", \"question_5\": \"A large part of the experiments in this work focuses on compressive/partial Fourier measurements. This adequately reflects the measurement setup in many real-world problems, such as k-space measurement in magnetic resonance imaging (Lustig et al.), Xampling for ultrasound imaging (Eldar et al.), and non-uniform step frequency radar (Huang, 2014). In addition, we cover direct pixel sampling, related to real-world applications such as compressive cameras. We would like to emphasize that the proposed approach is measurement-domain agnostic, and therefore can be applied across a vast range of real-world problems. \\n\\nIn addition, our ongoing research already shows promising results for real-world applications such as magnetic resonance imaging and ultrasound imaging. This is part of future work.\", \"question_1\": \"We specify the Gumbel-max trick in the paragraph below equation 4. To make the paper more self-contained, we will extend this paragraph to further clarify the Gumbel-max trick.\\nWe also refer to our answer to the first question of this referee, in which we elaborated more on the Gumbel-max trick as well.\", \"question_2\": \"All training parameters were tuned empirically. However, we agree it is worth elaborating on our insights regarding the influence of some of them on performance. We experienced that performance was most sensitive to the learning rates for the sampling and task models, and the temperature parameter tau of the softmax relaxation. 
We augmented the discussion of our revised manuscript to share these insights.\", \"question_6\": \"The trend towards using deep learning for data-driven compressed sensing indeed has the downside of not having guarantees on finding a global minimum, as the loss surface of a NN is highly non-linear and non-convex. Still, these data-driven results have been shown to be very promising (Gregor et al., 2010; Jin, 2019; Bahadir et al., 2019; Mousavi, 2019).\\n\\nHowever, due to the weight space symmetry problem (Goodfellow et al., 2016), the loss surface contains a vast number of local minima with the same error value. The size of the gap between the local and the global minima remains an open field of research. However, citing from Goodfellow et al. (2016):\\n\\u201cThe problem remains an active area of research, but experts now suspect that, for sufficiently large neural networks, most local minima have a low cost function value, and that it is not important to find a true global minimum rather than to find a point in parameter space that has low but not minimal cost (Saxe et al., 2013; Dauphin et al., 2014; Goodfellow et al., 2015; Choromanska et al., 2014)\\u201d\\n\\nAs such, we leverage the empirically-shown ability of stochastic gradient descent to optimize this non-convex function in our NN for finding local minima. Indeed, there is no guarantee on finding a global optimum.\"}", "{\"title\": \"Reply to review #2 (part 2)\", \"comment\": \"\", \"question_4\": \"We thank the reviewer for pointing this out. The appropriate setting of these parameters is indeed important and we are happy to explain their impact on our outcomes. \\n\\nLambda weighs the adherence to data consistency (via the MSE error) against visually plausible images (via the cross-entropy discriminator loss). 
For higher lambda, the discriminator loss is weighted more heavily, and as such, the model puts more effort into creating visually plausible natural images, as they appear in the CIFAR10 database. A balance should be found here, as too high a lambda can cause image predictions that look rather natural but do not resemble the target image.\\n\\nThe mu parameter weights the influence of the entropy penalty during training of the logits in the categorical distributions. For higher mu, the distributions converge more quickly towards low-entropy distributions, i.e. one class has a probability close to 1, as high entropy is heavily penalized. There is a tradeoff here between quick convergence and explorability. When the distributions converge very quickly, the model has no chance to explore different subsampling patterns and as such will probably find subsampling patterns that are not useful for the downstream task. However, setting mu too low can easily slow down convergence of the whole model.\\n\\nAs suggested by the referee, we will detail the above in our revised manuscript.\", \"question_5\": \"Thanks for pointing out this typo: 70,000 is the total number of datapoints that we used, of which 60,000 were used for training. We corrected this in the revised paper.\"}", "{\"title\": \"Reply to review #2 (part 1)\", \"comment\": \"We thank the reviewer for the interesting questions regarding our work. Below we respond to these questions of the referee:\", \"question_1\": \"In compressed sensing, RIP is indeed used to provide a measure for isometry when \\u201crestricted\\u201d to k columns, i.e. given a measurement $\\mathbf{y}=\\mathbf{A}\\mathbf{x}$ of a k-sparse vector $\\mathbf{x}$. 
For many practical problems of interest, analysis of measurements of sparse vectors is achieved by reformulating the measurement as $\\mathbf{y}=\\mathbf{A}\\mathbf{\\Psi}\\mathbf{x}$, with $\\mathbf{\\Psi}$ being a sparsifying basis, and $\\mathbf{z}=\\mathbf{\\Psi}\\mathbf{x}$ being the quantity of interest. Then, RIP should be evaluated for $\\mathbf{A}\\mathbf{\\Psi}$. A common requirement posed for sparse bases is therefore incoherence of the columns. \\n\\nInstead, we here directly learn a mapping to $\\mathbf{z}$ from data, with no explicit notion of such a sparsifying basis. While this makes theoretical assessment more challenging, it alleviates the need for manual identification of a proper sparse basis for each new problem.\\n\\nWe augmented parts of the discussion of the revised manuscript to better reflect this.\", \"question_2\": \"The reviewer raises a fundamental and interesting question regarding the typical loss surface in CS compared to that of a neural network. Indeed, the loss surface of a NN is highly non-linear and non-convex; it typically contains a vast number of local minima, as a consequence of the weight space symmetry property (Goodfellow et al., 2016), i.e. having the same loss value for a different ordering of the same weights. The size of the gap between the local and the global minima remains an open field of research. However, citing from Goodfellow et al. 
(2016):\\n\\u201cThe problem remains an active area of research, but experts now suspect that, for sufficiently large neural networks, most local minima have a low cost function value, and that it is not important to find a true global minimum rather than to find a point in parameter space that has low but not minimal cost (Saxe et al., 2013; Dauphin et al., 2014; Goodfellow et al., 2015; Choromanska et al., 2014)\\u201d\\n\\nAs such, we leverage the empirically-shown ability of stochastic gradient descent to optimize this non-convex function. Indeed, there are no global convergence guarantees, but we have the strong advantage compared to typical L1-reconstruction algorithms that we do not need explicit knowledge of the, in practice often unknown, sparsifying basis.\\n\\nWe followed standard practice in deep learning by initializing all layers with their default Keras initializations, i.e. Glorot uniform (Glorot, 2010), which we found to be working well. The logits of the distributions to be trained in the subsampling part of the network were initialized as a uniform, i.e. high-entropy, distribution, allowing maximal freedom to explore the sampling pattern by not explicitly setting a prior. \\n\\nWe now detail the initializations for each layer in the appendix of the revised manuscript.\", \"question_3\": \"For linear problems such as reconstruction, there is a clear relationship between the number of input samples and the number of unknowns. In this paper we focus on non-linear reconstruction, as well as other tasks such as object classification. Under this scope, it becomes unclear if we can still consider problems to be over- or underdetermined from a traditional point of view, and a more general information-theoretic standpoint might prove fruitful. Concretely, as deep learning methods are optimized stochastically, they are expected to be drawn towards solutions that carry the largest signal for the downstream task. 
In the face of redundant input samples and under pressure of a small number of output samples, the method is thus expected to randomly select just one of these redundant samples as this would improve the loss of the model. As some tentative evidence to support this claim, we would refer you to figure 2a (96.8% removed), where almost no directly neighbouring pixels are sampled, showing a clear preference of the model for skipping redundant samples.\"}", "{\"title\": \"Reply to review #3\", \"comment\": \"We thank the reviewer for the positive and constructive feedback. Below we answer the questions and concerns:\", \"question_1\": \"We agree with the referee and will therefore include a visualization of the trained distributions using Gumbel top-k sampling and a realization of the sampling pattern. We are currently running experiments to obtain Gumbel top-k results for the \\u2018lines and circles\\u2019 and CIFAR10 experiments as well. \\n\\nSince we did not sufficiently emphasize that leveraging Gumbel top-k sampling for learning signal subsampling matrices is part of the novelty of the present work, we clarified this in the revised manuscript. In fact, using Gumbel top-k sampling in this context can be seen as a constrained version of DPS, with shared weights across the M distributions. \\n\\nTo also include previously-published baselines, we are currently running experiments with the recently proposed LOUPE method by Bahadir et al. (2019).\", \"question_2\": \"Indeed, the notion of compressed sensing has spurred vast work, ranging from sensing strategies to signal recovery algorithms. On the sensing side, sampling strategies are typically designed to satisfy the Restricted Isometry Property (RIP); describing isometry of the sensing matrix given K-sparse vectors, and thereby providing signal recovery guarantees, given an appropriate algorithm. 
On the algorithm side, sparsity in some basis transform is typically exploited, leveraging a wide variety of optimization algorithms spanning from proximal gradient methods to projection-over-convex-set and greedy algorithms. More recently, deep learning methods have been proposed for fast signal recovery from CS measurements, yielding state-of-the-art results. \\n\\nIn this context, DPS adopts current practices in data-driven CS recovery, but extends this to incorporate subsampling (the sensing) in an end-to-end pipeline. Such an end-to-end (sampling-to-any-task) learning strategy opens up opportunities for data-driven optimization of sensing strategies beyond theoretically-established results. \\n\\nAs pointed out by the referee, the shortcomings of disjoint optimization in classical CS are perhaps most evident when high-level tasks such as classification are part of the pipeline. \\nAs such, we are currently running additional experiments to better illustrate this.\", \"question_3\": \"We agree with the reviewer that such a comparison might be of interest. \\n\\nAs such, we are currently running additional experiments to include a comparison to Gumbel top-k (as we did for the MNIST classification case) as well as the method proposed by Bahadir et al. (2019). 
Notably, and unlike our method, the latter approach does not permit setting a specific subsampling rate, with this rate only being indirectly controlled via hyperparameter settings.\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper introduces a novel DPS (Deep Probabilistic Subsampling) framework for the task-adaptive subsampling case, which attempts to resolve the issue of end-to-end optimization of an optimal subset of signal by jointly learning a sub-Nyquist sampling scheme and a predictive model for downstream tasks. The parameterization is used to simplify the subsampling distribution and ensure an expressive yet tractable distribution. The new approach is applied to both reconstruction and classification tasks and demonstrated with a suite of experiments on a toy dataset, MNIST, and CIFAR10.\\n\\n\\nOverall, the paper requires significant improvement. \\n\\n1. The approach is not well justified either by theory or practice. No experiment clearly shows convincing evidence of the correctness of the proposed approach or its utility compared to existing approaches (Xie & Ermon (2019); Kool et al. (2019); Pl\\u00f6tz & Roth (2018)).\\n\\n2. The paper never clearly demonstrates the problem they are trying to solve (nor well differentiates it from the compressed sensing problem or sample selection problem).\\n\\n The method is difficult to understand, missing many details and essential explanation, and generally does not support a significant contribution. \\n\\n3. The paper is not nicely written nor easy to follow. The model is not well motivated and the optimization algorithm is also not well described.\\n\\n4. 
A theoretical analysis of the convergence of the optimization algorithm could be needed.\\n\\n5. The paper is imprecise and unpolished and the presentation needs improvement.\\n\\n**There are so many missing details or questions to answer**\\n\\n1. What is the Gumbel-max trick? \\n2. How to tune the parameters discussed in the training details of the experiments?\\n3. Why use experience replay for the linear experiments?\\n4. Are there evaluations on the utility of the proposed approach compared to existing approaches?\\n5. Does the proposed approach work in real-world problems?\\n6. Was there any concrete theoretical guarantee to ensure the convergence of the algorithm?\\n\\n[Post Review after discussion]: The uploaded version has significantly improved over the first submission. It is now acceptable.\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The authors propose a new approach of deep probabilistic subsampling for compressed sensing, based on Gumbel-softmax, which is interesting.\", \"a_few_points_should_be_clarified\": [\"in compressed sensing one has e.g. the restricted isometry property (RIP) related to recovery. How does the new method relate to such theoretical results? Are the results and findings along similar lines as (classical) compressed sensing theory?\", \"Methods in compressed sensing are typically convex, e.g. using l1-regularization. What are the drawbacks of using deep learning in this context, e.g. related to non-convexity? What is the role of initialization?\", \"Does the method work for both underdetermined and overdetermined problems (number of data versus number of unknowns)?\", \"What is the influence of the hyper-parameters mu and lambda in eq (14)? 
How should the model selection be done (currently lambda is set to 0.004 without further motivation)?\", \"MNIST: 60,000 instead of 70,000?\"]}", "{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper proposes a learning-based adaptive compressed sensing framework in which both the sampling and the task functions (e.g., classification) are learned jointly end-to-end. The main contribution includes using the Gumbel-softmax trick to relax categorical distributions and using back-propagation to estimate the gradient jointly with the task neural network. The proposed solution has the flexibility to be used in several different tasks, such as inverse problems (super-resolution or image completion) or classification tasks. The paper is very well written.\\n\\nThe paper locates itself well among current baselines and explains its experiments mostly well. However, there are significant limitations in demonstrating the effectiveness/impact of the proposed technique: \\n1) The only comparison to another non-fixed sampling baseline is Kool et al. 2019. The visualization and a thorough comparison were missing in MNIST classification. This baseline was also missing in image reconstruction. \\n2) Compressive Sensing incorporates a vast literature of algorithms focusing on different aspects of improvements; algorithms focused on classification and inverse problems. Even if done disjointly, how does the proposed joint learning compare to those algorithms in these domains? \\n3) Top row of Figure 3 nicely explains how the learned sampling paradigm performs compared to other mechanisms (such as uniform, random, low-pass). But there is no comparison against other non-fixed techniques.\"}" ] }
H1lK5kBKvr
Semi-supervised 3D Face Reconstruction with Nonlinear Disentangled Representations
[ "Zhongpai Gao", "Juyong Zhang", "Yudong Guo", "Chao Ma", "Guangtao Zhai", "Xiaokang Yang" ]
Recovering 3D geometry shape, albedo and lighting from a single image has wide applications in many areas, which is also a typical ill-posed problem. In order to eliminate the ambiguity, face prior knowledge like linear 3D morphable models (3DMM) learned from limited scan data is often adopted in the reconstruction process. However, methods based on linear parametric models cannot generalize well for facial images in the wild with various ages, ethnicity, expressions, poses, and lightings. Recent methods aim to learn a nonlinear parametric model using convolutional neural networks (CNN) to regress the face shape and texture directly. However, the models were only trained on a dataset that is generated from a linear 3DMM. Moreover, the identity and expression representations are entangled in these models, which hinders many facial editing applications. In this paper, we train our model with adversarial loss in a semi-supervised manner on hybrid batches of unlabeled and labeled face images to exploit the value of large amounts of unlabeled face images from unconstrained photo collections. A novel center loss is introduced to make sure that different facial images from the same person have the same identity shape and albedo. Besides, our proposed model disentangles identity, expression, pose, and lighting representations, which improves the overall reconstruction performance and facilitates facial editing applications, e.g., expression transfer. Comprehensive experiments demonstrate that our model produces high-quality reconstruction compared to state-of-the-art methods and is robust to various expression, pose, and lighting conditions.
[ "3D face reconstruction", "semi-supervised learning", "disentangled representation", "inverse rendering", "graph convolutional networks" ]
Reject
https://openreview.net/pdf?id=H1lK5kBKvr
https://openreview.net/forum?id=H1lK5kBKvr
ICLR.cc/2020/Conference
2020
{ "note_id": [ "8jlAURBDwx", "rkx9eFq6tH", "Skx902J2FS", "HklBjUQHtB" ], "note_type": [ "decision", "official_review", "official_review", "official_review" ], "note_created": [ 1576798735051, 1571821810375, 1571712209936, 1571268252685 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1885/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper1885/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper1885/AnonReviewer2" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"This paper proposes a semi-supervised method for reconstructing 3D faces from images via a disentangled representation. The method builds on previous work by Tran et al (2018, 2019). While some results presented in the paper show that this method works well, all reviewers agree that the authors should have provided more experimental evidence to convincingly demonstrate the benefits of their method. The reviewers are also unconvinced by how computationally expensive this method is or by the contributions of the unlabelled data to the performance of the proposed model. Given that the authors did not address the reviewers\\u2019 concerns, and for the reasons stated above, I recommend rejecting this paper.\", \"title\": \"Paper Decision\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #3\", \"review\": \"Overview:\\nThis paper introduces a model for image-based facial 3D reconstruction. The proposed model is an encoder-decoder architecture that is trained in semi-supervised way to map images to sets of vectors representing identity (which encodes albedo and geometry), pose, expression and lighting. The encoder is a standard image CNN, whereas the decoders for geometry and albedo rely on spectral graph CNNs (similar to e.g. COMA, Ranjan\\u201918). 
\\nThe main contribution of the work with respect to the existing methods is the use of additional loss terms that enable semi-supervised training and learning somewhat more disentangled representations. Authors report quantitative results on MICC Florence, with marginal improvements over the baselines (the choice of the baselines is reasonable).\", \"decision\": [\"The overall architecture is very similar to existing works such as COMA (Ranjan\\u201918) and (Tran\\u201919), including the specific architecture for geometry decoders, and thus the contributions are primarily in the newly added loss terms.\", \"I also find the promise of \\u201cdisentangled\\u201d representation a bit over-stated, as the albedo and base geometry still seem to be encoded in the same \\u201cidentity\\u201d vector (see related question below).\", \"The numerical improvements seem fairly modest with respect to (Tran\\u201919). In addition, there is no numerical ablation study that would demonstrate the actual utility of the main contributions (such as adversarial loss): there are qualitative results but they are not very convincing.\", \"Thus, the final rating \\u201cweak reject\\u201d.\", \"Additional comments / typos:\", \"I am not fully following the argument about sharing identity for albedo and shape on p2: \\u201calbedo and face shape are decoded ...\\u201d. Would it not be more beneficial to have a fully decoupled representation between the albedo and the facial geometry? I do not see how albedo information would be useful for encoding face geometry and vice versa.\", \"Authors claim that one of the main drawbacks e.g. of (Tran\\u201919) is the fact that they train on data generated from linear 3DMM. 
This is indeed the case, but it does not seem like here the authors fully overcome this issue: they do have additional weakly-supervised data, but they still strongly rely on linear 3DMM supervision (p6, \\u201cpairwise shape loss\\u201d, \\u201cadversarial loss\\u201d), and do not seem to provide experimental evidence that the model will work without it.\", \"In particular, the \\u201cadversarial training\\u201d actually corresponds to learning the distribution of the linear 3DMM. Would it not mean that ultimately the model will be limited to learning only linear ? Could you please elaborate on this?\", \"p3: \\u201callows ene-to-end \\u2026 training\\u201d\", \"p3: \\u201cframework to exact \\u2026 representations\\u201c.\", \"p8: \\u201cevaluation matric\\u201d\"], \"update\": \"Authors did not provide any response, thus I keep my rating.\"}", "{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper presents an encoder-decoder architecture to reconstruct 3D face from a single image with disentangled representations: identity, expression, pose, and lighting. The authors develop a semi-supervised training scheme to fully exploit the value of large amount of unlabeled face images from unconstrained photo collections. Experimental results on MICC Florence and AFLW2000-3D verify the efficacy of the proposed method.\\n\\nThe presentation and writing are clear. The problem solved in this paper aligns with real applications.\\n\\nMy concerns regarding this paper are as below.\\n1) What are the training computational complexity and testing time cost of the proposed method? Since speed is very important for real applications.\\n2) The datasets used for evaluation are quite old. 
More experiments on more recent challenging benchmarks are needed to verify the superiority of the proposed method, e.g., IJB-B/C, etc.\\n3) Some related works are missing and need to be discussed, e.g., Joint 3D Face Reconstruction and Dense Face Alignment from A Single Image with 2D-Assisted Self-Supervised Learning [Tu et al., 2019], 3D-Aided Dual-Agent GANs for Unconstrained Face Recognition [Zhao et al., T-PAMI 2018], 3D-Aided Deep Pose-Invariant Face Recognition [Zhao et al., IJCAI 2018], etc.\\n4) Format of references should be consistent.\\n\\nBased on my above comments, I give the rate of WR. If the authors could solve my concerns in rebuttal, I would like to further adjust my rate.\"}", "{\"rating\": \"1: Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposes a semi-supervised and adversarial training process to exploit the value of unlabeled faces and overcome the limitation of a linear 3DMM and the nonlinear models proposed earlier (Tran & Liu (2018), Tran et al (2019)). This approach designs a framework to extract nonlinear disentangled representations from a face image with the help of loss functions including face recognition loss, shape pairwise loss and adversarial loss. 
This framework's contribution is demonstrated with experiments which show this model achieves state-of-the-art performance in face reconstruction.\", \"this_paper_should_be_rejected_because\": \"(1) the experiments are not representative enough and the results are controversial,\\n(2) this paper does not clearly demonstrate how they exploit the value of the unlabeled training images,\\n(3) the creative progress of this paper is not typical compared to the early nonlinear model (Tran & Liu (2018), Tran et al (2019)).\", \"main_argument\": \"The experiments do not provide convincing evidence of the correctness of the proposed approach or its utility compared to existing approaches. The results are not representative enough and are missing many details:\\n1) What is the ratio of unlabeled training images to labeled training images?\\n2) Why only show the results of the Cooperative and Indoor situations? \\n3) Why is the standard deviation of the result in the Cooperative situation higher than that of the early models?\\n4) Why is the mean value of the result in the Indoor situation higher?\\n5) Why do your experiments run on so few situations and datasets?\\n6) How did you initialize the parameters and the weights?\\n7) What are the time and memory consumption of your model?\\n\\nThe paper does not demonstrate the difference and progress between its model and the early nonlinear model clearly (Tran & Liu (2018), Tran et al (2019)). This paper points out that they fully exploit the value of unlabeled face data, but there is little evidence in this paper to support that. 
And it also points out the time-consuming problem of early models, but there are no experimental results showing how efficient its model is.\", \"the_loss_functions_are_also_not_convincing_enough\": \"1) How to choose or initialize the value of lambda center in the Face recognition loss?\\n2) Have you demonstrated the solution you used in the Shape smooth loss, which aims to handle vertices that do not satisfy the Laplacian equation?\"}" ] }
Bkxd9JBYPH
Representing Model Uncertainty of Neural Networks in Sparse Information Form
[ "Jongseok Lee", "Rudolph Triebel" ]
This paper addresses the problem of representing a system's belief using multi-variate normal distributions (MND) where the underlying model is based on a deep neural network (DNN). The major challenge with DNNs is the computational complexity that is needed to obtain model uncertainty using MNDs. To achieve a scalable method, we propose a novel approach that expresses the parameter posterior in sparse information form. Our inference algorithm is based on a novel Laplace Approximation scheme, which involves a diagonal correction of the Kronecker-factored eigenbasis. As this makes the inversion of the information matrix intractable - an operation that is required for full Bayesian analysis, we devise a low-rank approximation of this eigenbasis and a memory-efficient sampling scheme. We provide both a theoretical analysis and an empirical evaluation on various benchmark data sets, showing the superiority of our approach over existing methods.
[ "Model Uncertainty", "Neural Networks", "Sparse representation" ]
Reject
https://openreview.net/pdf?id=Bkxd9JBYPH
https://openreview.net/forum?id=Bkxd9JBYPH
ICLR.cc/2020/Conference
2020
{ "note_id": [ "lCUVPPi_Kq", "B1xS0tY3oH", "HJxc4WunoH", "HJx192Dnor", "HJgI0TqosH", "HJxo21UPoH", "SyxXqyLvsB", "SJlxE0rPsB", "Bklgj6SDsB", "rJlKRnHPoH", "rJxnUJb6cH", "S1xjOAscqr", "r1xd1v3u9H", "r1gQ31udtS" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review", "official_review" ], "note_created": [ 1576798735020, 1573849548782, 1573843250419, 1573842055297, 1573789134338, 1573506995190, 1573506955194, 1573506600367, 1573506455582, 1573506257126, 1572831059892, 1572679283008, 1572550368490, 1571483563047 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1884/Authors" ], [ "ICLR.cc/2020/Conference/Paper1884/Authors" ], [ "ICLR.cc/2020/Conference/Paper1884/Authors" ], [ "ICLR.cc/2020/Conference/Paper1884/AnonReviewer4" ], [ "ICLR.cc/2020/Conference/Paper1884/Authors" ], [ "ICLR.cc/2020/Conference/Paper1884/Authors" ], [ "ICLR.cc/2020/Conference/Paper1884/Authors" ], [ "ICLR.cc/2020/Conference/Paper1884/Authors" ], [ "ICLR.cc/2020/Conference/Paper1884/Authors" ], [ "ICLR.cc/2020/Conference/Paper1884/AnonReviewer5" ], [ "ICLR.cc/2020/Conference/Paper1884/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper1884/AnonReviewer4" ], [ "ICLR.cc/2020/Conference/Paper1884/AnonReviewer2" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"This paper presents a variant of recently developed Kronecker-factored approximations to BNN posteriors. It corrects the diagonal entries of the approximate Hessian, and in order to make this scalable, approximates the Kronecker factors as low-rank.\\n\\nThe approach seems reasonable, and is a natural thing to try. The novelty is fairly limited, however, and the calculations are mostly routine. 
In terms of the experiments: it seems like it improved the Frobenius norm of the error, though it's not clear to me that this would be a good measure of practical effectiveness. On the toy regression experiment, it's hard for me to tell the difference from the other variational methods. It looks like it helped a bit in the quantitative comparisons, though the improvement over K-FAC doesn't seem significant enough to justify acceptance purely based on the results.\\n\\nReviewers felt like there was a potentially useful idea here and didn't spot any serious red flags, but didn't feel like the novelty or the experimental results were enough to justify acceptance. I tend to agree with this assessment.\", \"title\": \"Paper Decision\"}", "{\"title\": \"A general response to all the reviewers.\", \"comment\": \"We sincerely thank all the reviewers for their time and efforts. The paper has been thoroughly revised in light of your thoughtful feedback. In this post, we attempt to briefly summarize the main points of the paper and the changes in the new revision.\", \"a_summary\": \"This work presents a sparse information form of the multivariate normal distribution (MND) to represent model uncertainty of neural networks, as opposed to fully factorized Gaussian and matrix normal distributions. With Laplace Approximation as a backbone, we point out that model uncertainty can be inferred in the information form of the MND by adding a diagonal correction term. Despite being a more general formulation, the MND suffers from its inherent intractable complexity. Consequently, we demonstrate a solution by newly introducing (a) a sparse information form, (b) a sparsification algorithm, and (c) its tightly coupled analytical sampler. Lastly, we theoretically and empirically show state-of-the-art performance.\\n\\nAs a result, we demonstrate a way to tackle the non-trivial challenges that are associated with the intractability of the MND, which results in a novel expression for the model uncertainty. 
We firmly believe that Bayesian Deep Learning needs better representations of the parameter posterior along with better approximate Bayesian inference. In this sense, our work can be a stepping stone towards this direction.\", \"the_current_revision_contains_following_key_changes\": \"(a) notations are made consistent [R4 and R5].\\n\\n(b) a cleaner presentation style including figures and examples that better explain the concept [R4].\\n\\n(c) a clearer statement on the main novelty in related works [R3 and R5].\\n\\n(d) moved our theoretical results to the appendix [R2].\\n\\n(e) expanded the summary of main contributions, pointing at specific parts [R2, R4].\\n\\n(f) additional experiments and related discussions to address reviewer's feedback [R4 and R5].\\n\\nIn particular, Hamiltonian Monte Carlo (Neal 1996) and Bayes by Backpropagation (Blundell 2015) have been added as benchmarks to include a sampling method (as a better ground truth) and a fully factorized variational inference method. Moreover, we have added more ablation studies. Lastly, we also show the reduction of complexity due to the low rank approximation for our classification experiments.\\n\\nWe individually address the reviewers in more detail below.\"}", "{\"title\": \"Please find the experiments you have requested in the new revision.\", \"comment\": \"Please find the experiments you have requested in the new revision. In particular:\\n\\n1. We have added HMC in our toy experiments and revised the description of our experiment section. We have also added Bayes by Backprop (instead of Graves 2011) for this experiment, which is also based on variational inference and a fully factorized Gaussian. A main reason was that it is slightly more recent than Graves 2011 and a more commonly compared baseline (which we hope you agree with).\\n\\n2. We have added a part that shows the reduction in complexity. Please find the related discussions in section 4.2 of the new revision.\\n\\nYour review has improved our paper significantly! 
Thank you again and please let us know if you have any further valuable feedback.\"}", "{\"title\": \"Thank you for engaging in a discussion with us.\", \"comment\": \"Thank you so much for your further comments. We address your questions below.\", \"on_low_rank_approximation\": \"\\\"if you gain more from $D$ or you loss more from the sparsification\\\"\\n\\nThis is a brilliant question. In short, you are correct - it is a hypothesis: keeping the diagonals exact while sparsifying the off-diagonals should result in better estimates of model uncertainty (equivalently, keeping the information content of a node exact while sparsifying the weak links between the nodes, from a graphical interpretation of the information matrix). Consequently, we have updated our experiments section where we discuss this point.\\n\\nOn the other hand, we also point out that this hypothesis is well motivated by practice. In the SLAM literature (as a Bayesian tracking problem), so-called sparse information filters work under this hypothesis (the map posterior in [1] is equivalent to the parameter posterior given the data; the recursion of Bayesian tracking can be generalized with a batch of data as well) and have been a compelling alternative to Kalman filters when the canonical form of the full Gaussian distribution was intractable or inefficient. [1-5] are practical examples where this hypothesis worked well in practical applications.\\n\\nNow, in the case of deep neural networks, our experiments verify this hypothesis. 
We find this valid as the spectrum (or eigenvalues) of the information matrix tends to be sparse for DNNs (this is a major difference to SLAM problems and a reason why we introduced spectrum sparsification instead) and we find a sparse information form to perform well when compared to fully factorized and matrix normal distributions (as we use Laplace approximation, the approximate inference computations and the parameter posterior of a model are exactly the same; so the main difference was the representation of model uncertainty).\\n\\nDespite these, we acknowledge that this is a limitation of our work. When the spectrum of the information matrix is non-sparse, it is clearly debatable if this sparse information form can be an alternative to fully factorized and matrix normal distributions. This point has a strong connection to information geometry and Bayesian deep learning, and we feel there is a lack of foundations from a theoretic perspective. Nevertheless, we hope that our work is a stepping stone towards the goal of providing a useful sparse expression to represent model uncertainty of deep neural networks.\\n\\nOn $D$ being positive.\\n\\nOur apologies if this point was not clear. Let us rephrase: with a low-rank approximation and adding the diagonal correction afterwards, you can ensure that D remains strictly positive by choosing rank K (resulting in rank L=J+K) so that $[(U_{A_{1:a}} \\\\otimes U_{G_{1:g}})\\\\Lambda_{1:L} (U_{A_{1:a}} \\\\otimes U_{G_{1:g}})^T]_{ii} < \\\\mathbb{E} \\\\left [ \\\\delta \\\\theta_i^2 \\\\right ]$ in accordance with Lemma 3. This introduces another hyperparameter but it can be made automatic since the low rank approximation and diagonal correction are applied off-line without involving data. We have also discussed this in a paragraph below Lemma 3.\\n\\nWe thank you again for engaging in a discussion with us. Please let us know if you have more valuable feedback. 
We will incorporate them.\", \"references\": \"[1] Simultaneous Localization and Mapping With Sparse Extended Information Filters. 2004.\\n\\n[2] Multi-Robot SLAM With Sparse Extended Information Filters. 2003.\\n\\n[3] Square root SAM: Simultaneous location and mapping via square root information smoothing. 2006.\\n\\n[4] Exactly sparse delayed-state filters. 2005\\n\\n[5] Simultaneous Localization and Mapping (SLAM): Part II. 2006 (for a survey).\"}", "{\"title\": \"Thanks for your detailed rebuttal\", \"comment\": \"Thanks for your detailed rebuttal and revisions, the paper looks much better now. And you rebuttal has resolved most of previous concerns.\\n\\n# Low-rank approximation\\nYou are right, low-rank approximation is necessary for sampling from the diag-corrected information form. But I feel it debatable that if you gain more from $D$ or you loss more from the sparsification. \\n\\n# $D$ being positive\\nYou mentioned that \\\"However, with low-rank approximation and adding the diagonal correction afterwards, you can ensure that D remains strictly positive.\\\" Sorry I don't get it, why D is strictly positive? It is still possible that some of you low-rank diagonal terms are bigger than the true diagonal, isn't it?\"}", "{\"title\": \"Continued.\", \"comment\": \"On \\\"Laplace Approximation\\\":\\n\\n- In fact, the Fisher should be scaled by N and therefore, these are technically correct. We refer to references [1,2] for technical details. We have also briefly noted on this point for the current revision.\\n\\nOn \\\"Low-rank Approximation\\\":\\n\\n- We address each points of your concerns below.\\n\\n1. Please find the explanation above on why low-rank approximation is necessary. It is not to compute the eigen-system of A and G. It is to sample from the resulting distribution from its information formulation (eq. 6). 
Indeed K-FAC is not a computationally expensive method either, but the diagonal correction to the eigenbasis of K-FAC results in a multivariate normal distribution, whereas K-FAC results in a matrix normal distribution. Please compare eq. 3 and eq. 6 and try to sample from eq. 6. You will see that it is non-trivial without the low-rank approximation. We have added a figure, an example and a table to address this point in the new revision.\\n\\n2. We apologize for this, and we have added explicit formulas. Thank you for the good suggestion.\\n\\n3. The diagonal correction is added after the low-rank approximation, as illustrated in algorithm 2 of the old version. The mathematics of computing the diagonal term is the same, and we made algorithm 2 in order to provide an overview. Now, we also show this with a figure (figure 1) and comment in the footnote of equation 8 (new revision). Thank you for pointing this out; it has improved the clarity of the paper.\\n\\nOn \\\"Experiments\\\":\\n\\n- Our experiments validate the proposed method and are carefully designed to show the benefits of our approach. \\n\\nThe choice of architectures is to ensure that the given low-rank approximation is necessary. In fact, [4] uses only fully connected layers of small sizes, and for these one needs no low-rank approximation (with the architecture of [4] we cannot say we validated our method). On the other hand, the 3rd layer of our architecture on MNIST contains a significantly larger number of parameters, where the low-rank approximation is strictly necessary (as a representative for scalability). We hope that this point is automatically answered when you understand why the low-rank approximation is necessary. \\n\\nFurthermore, we choose to evaluate on the Fisher estimates instead of the adversarial examples, and the mis-classification uncertainty for in-domain sets is already covered in our experiments. 
One novel part of our experiments is that we evaluate both in-domain and out-of-domain sets with the same hyperparameter choices. This is because we found that [2]'s way of evaluating can be misleading: one can choose the hyperparameter so that it performs well for out-of-distribution sets but does not generalize to the other metrics, namely calibration and accuracy on in-domain datasets. However, comparing direct evaluations of the Fisher estimates on MNIST with smaller architectures seems a good idea, and we will try to include it before the revision period ends. (@update: instead of this experiment we have focused on your updated review and so, for the toy regression experiments, we added an ablation study where we lower the ranks of DEF and also show EFB results. Please find the discussion part of section 4.1 in the new revision, and please let us know if this concerns you.)\\n\\nLastly, we have also made significant efforts in grid-searching hyperparameters in order to ensure that the comparisons are fair and all the benchmarks are implemented in an optimal way. We hope that these aspects are considered as merits, as we have constantly experienced that comparisons in this field can be rather poor after having reproduced the results of several papers. As a final remark on this point, we do not see why it is better to use the same architecture as [2], and we await your enlightening answer.\\n\\n[1] Optimizing Neural Networks with Kronecker-factored Approximate Curvature. 2015\\n\\n[2] A scalable Laplace approximation for neural networks. 2018\\n\\nAs a concluding remark, we thank you again for a very detailed review and we sincerely hope you understand the necessity of the low-rank approximation with this new revision.\"}", "{\"title\": \"Thank you for very useful comments.\", \"comment\": \"Thank you for very useful comments, and your review has improved our paper significantly. 
We start by addressing your main points.\", \"you_comment\": \"\\\"And the paper's notations and presentations are too messy to be an accepted paper.\\\"\\n\\n- We sincerely apologize if you had a hard time reading our paper. We have thoroughly revised the paper to improve its clarity in the new revision. Overall, we believe the paper is more accessible and clearer (thanks to your review!).\", \"your_other_points\": \"Now we address the other points you have concerns about.\\n\\nOn \\\"Diagonal Corrections\\\":\\n\\n- 1. As explained in Lemma 3, the matrix D is required to be always positive, and this is indeed not always true. However, with low-rank approximation and adding the diagonal correction afterwards, you can ensure that D remains strictly positive. Furthermore, eigenvalue clipping or finding the closest positive definite matrix are other techniques that can be employed (which appear quite often in the second-order optimization community). We have explained this point in the new revision.\\n\\n2. The inversion of the matrix D is stable because of the prior precision term, which is scaled by the number of data points (see equation 9). Our derivation of the analytical sampler works on equation 9, which we clarify in the new revision (we omitted these terms for better clarity). \\n\\nOn \\\"Writing\\\":\\n\\n- We apologize if you had to guess a lot. We have improved the clarity of the paper significantly. \\n\\n1. All the notations are introduced in the new revision. We apologize for our careless mistakes.\\n\\n2. All the typing mistakes are corrected in the new revision. We apologize for our careless mistakes.\\n\\n3. We have incorporated your suggestions on notations for EK-FAC. Unfortunately, we do not move the EK-FAC part to the background section, to keep the story-line the same (R5 finds the structure clear). 
Instead, we clearly point out which parts of the paper contain our contributions in the introduction (in the summary of our contributions) and shortened the parts on EK-FAC. Please let us know if this is unsatisfactory.\\n\\n4. We did not claim that our proposed method also has closer approximations in terms of the Fisher inverse, and therefore no proofs are given. Please check again the paragraph below Corollary 1. \\n\\n5. We apologize for our careless mistakes and we have made significant efforts to improve the clarity of the paper in the new revision.\"}", "{\"title\": \"We hope for further discussions.\", \"comment\": \"Thank you for the time you spent reviewing our paper. Here, we express our concerns about strong claims you have made.\", \"you_claim\": \"\\\"The experimental results are not convincing, as both the considered scenarios are limited and the comparisons are too poor (no consideration of state of the art alternatives).\\\"\\n\\n- Our experiments show the benefits of our approach, and comparisons to the state-of-the-art alternatives are provided.\\n\\n1. The considered scenarios are carefully controlled experiments that showcase the benefits of our approach. The toy regression examples show (1) the quality of uncertainty estimation and (2) the quality of the Fisher approximations. Our classification experiments show (1) the quality of uncertainty estimation on a more realistic dataset, and (2) the necessity of our low-rank approximation. Please support your claim by stating why the considered scenarios are limited.\\n\\n2. We compare to the state-of-the-art alternatives that are both scalable and training-free. Amongst this class of methods, the presented baselines are strong alternatives and often applied in practice. 
Please let us know what exactly you meant by \\\"the state-of-the-art alternatives\\\".\", \"you_comment\": \"\\\"The provided corollaries are not actually helpful and should be put in the appendix.\\\"\\n\\n- We have incorporated this suggestion and all the theoretical analysis is in the appendix in the current revision. Thank you for your valuable opinion on this.\\n\\nAs a concluding remark, we sincerely hope for further discussions.\"}", "{\"title\": \"We hope for further discussions with you.\", \"comment\": \"Thank you for the time you spent reviewing our paper, and we sincerely hope for further discussions with you. We address your comments below.\", \"you_comment\": \"\\\"The paper makes a certain contribution to existing Laplace approximations for the task in terms of accuracy and scalability. However, it is incremental and the novelty is a bit low, compared to many recent closely related works, for example, [1-5].\\\"\\n\\n- As a matter of fact, our proposed contributions have not been introduced in the references you mention ([1-5]), and they are a modest increment, as with all research papers (we argue it is a matter of presentation). \\n\\n1. Our contributions, namely (a) the diagonal correction to the eigenbasis, (b) a low-rank approximation preserving the Kronecker structure in the eigenvectors, (c) an algorithm to achieve the previous point, and (d) the derivation of an analytical sampler, have not appeared in the references you mention ([1-5]). We understand your feeling that point (a) builds heavily on existing work, but points (b-d) do not. Furthermore, all these points are sensible and non-obvious, and empirical results confirm that the utility of our contributions is significant in terms of \\\"accuracy and scalability\\\", as you acknowledge. Lastly, all these contributions lead to representing model uncertainty in sparse information form, which, conceptually, is different from references [1-5] and the literature on Bayesian deep learning.\\n\\n2. 
We understand your sentiment: we start by adding a diagonal correction term to [3] for the framework of [4]. However, at a high level of abstraction, all the references you mention can be phrased in a way that appears incremental, despite being highly influential works. Examples: (a) [1] extends [6] by introducing the Kronecker product for the Fisher estimates. (b) [2] extends [1] to Gauss-Newton instead of natural gradient. (c) [3] extends [1] by introducing a re-scaling term in the eigenbasis. (d) [4] extends [7] by employing [2] for the Laplace approximation. (e) [5] extends [8] by employing [3] for variational inference. In short, it is a matter of presentation (we prioritized readers' understanding over sounding completely new).\\n\\n[1] Optimizing Neural Networks with Kronecker-factored Approximate Curvature. 2015\\n\\n[2] Practical Gauss-Newton Optimisation for Deep Learning. 2017\\n\\n[3] Fast Approximate Natural Gradient Descent in a Kronecker-factored Eigenbasis. 2018\\n\\n[4] A scalable Laplace approximation for neural networks. 2018\\n\\n[5] Eigenvalue Corrected Noisy Natural Gradient. 2018\\n\\n[6] Natural Gradient Works Efficiently in Learning. 1998\\n\\n[7] A Practical Bayesian Framework for Backpropagation Networks. 1992\\n\\n[8] Noisy Natural Gradient as Variational Inference. 2018\\n\\nAs a concluding remark, we sincerely hope to hear more from you on what you meant by \\\"compared to many recent closely related works\\\".\"}", "{\"title\": \"Thank you for very useful comments\", \"comment\": \"Thank you for very useful comments, and your review has improved our paper significantly. We start by addressing your main points.\", \"you_comment\": \"\\\"I would be also interested to see what the additional time complexity by adding a diagonal correction term is and how more efficient it gets by low-rank approximation in experimental details.\\\"\\n\\n- This is a brilliant idea and we promise to add empirical results in the current revision. 
We comment on the time complexity of adding a diagonal correction term for the inference part. As a short note, all the computations required for adding a diagonal correction term do not involve data and are thus offline. Please find an overview in the last paragraph of section 2.3.\", \"on_your_minor_comments\": \"\\\"Page 4: It would have been easier to understand if there had been a notational distinction between exact eigenbasis and KFAC eigenbasis in defining V. Also, \\u201cIn equation equation 10\\u201d $->$ \\u201cIn equation 10\\u201d.\\\"\\n\\n- This suggestion has been incorporated. We apologize for our careless mistakes.\\n\\n\\\"Page 7: In Lemma 2, the order in describing the low-rank estimate does not appear to be correct. Also, in Lemma 4, should not there be a hat on $I_{efb}$ and $I_{kfac}$?\\\"\\n\\n- We apologize for the confusion (indeed there should be a hat). We have made this clearer in the new revision.\\n\\n\\\"Page 9: The colour scheme in Figure 3 looks visually harder to read.\\\"\\n\\n- We promise to change this before the end of the revision period.\\n\\n\\\"Page 14: In equation 15 and 16, there should be a bracket for (2 $\\\\pi$), and in Appendix B, some variables (e.g. p, k, R) are left unexplained.\\\"\\n\\n- We apologize for the confusion and we have made this clear in the new revision.\\n\\n\\\"Page 14, 15:\\\".\\n\\n- We apologize for the confusion and we have made this clear in the new revision.\\n\\n\\\"Page 3: In equation 2 and 4, is it $A_{i-1}$ or $A_i$?\\\"\\n\\n- Yes, you are correct. We apologize for the carelessness and this is made clear in the new revision.\\n\\n\\\"Page 16: In equation 27, the dimension of X $\\\\in \\\\mathcal{R}^{m \\\\times m}$ seems incorrect. Also, in the last line, doesn\\u2019t $X \\\\odot D$ have different dimensions (similarly in equation 22)?\\\"\\n\\n- We apologize for the confusion and we have made this clear in the new revision. 
In fact, we have presented this section in a clearer style.\\n\\nWe thank you again for a very detailed review. Your comments have significantly improved our paper.\"}", "{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #5\", \"review\": \"The submitted paper presents a method of approximating the posterior distribution over the DNN parameters based on a Laplace approximation scheme. It extends the previous work by adding a diagonal correction term to the Kronecker-factored eigenbasis and also suggests a low-rank representation of the Kronecker-factored eigendecomposition. Empirical evaluations were done to show that the proposed method has more accurate uncertainty estimation compared to the previous work.\\n\\nOverall, the paper is well-organized and easy to follow, although mathematical notations are not consistent throughout the paper. It is well-referenced, and the derivations look mostly correct (explained in minor comments). The main idea of the paper is convincing and well-motivated. To my knowledge, the proposed method of adding a correction term has not been introduced before. However, it is more of an incremental contribution to the existing works. In that sense, I am slightly concerned that its novelty is limited.\\n\\nThe experiments are not comprehensive. For the toy regression problem, a comparison to Hamiltonian Monte Carlo would be more informative. Moreover, it would be helpful to report the comparison with factorized variational methods (e.g. Graves, 2011) and experiment on modern architectures. 
I would be also interested to see what the additional time complexity by adding a diagonal correction term is and how more efficient it gets by low-rank approximation in experimental details.\", \"minor_comments\": \"*Page 4: It would have been easier to understand if there had been a notational distinction between exact eigenbasis and K-FAC eigenbasis in defining V. Also, \\u201cIn equation equation 10\\u201d -> \\u201cIn equation 10\\u201d.\\n*Page 7: In Lemma 2, the order in describing the low-rank estimate does not appear to be correct. Also, in Lemma 4, shouldn\\u2019t there be a hat on I_{efb} and I_{kfac}?\\n*Page 9: The colour scheme in Figure 3 looks visually harder to read.\\n*Page 14: In equation 15 and 16, there should be a bracket for (2 \\\\pi), and in Appendix B, some variables (e.g. p, k, R) are left unexplained.\\n*Page 14, 15: In equation 23 and proposition 1, \\\\mathcal is missing, and vec operator is missing in equation 22.\\n*Page 3: In equation 2 and 4, is it A_{i-1} or A_i?\\n*Page 16: In equation 27, the dimension of X \\\\in \\\\mathcal{R}^{m \\\\times m} seems incorrect. Also, in the last line, doesn\\u2019t X \\\\odot D have different dimensions (similarly in equation 22)?\"}", "{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper studies the Laplace approximation for Bayesian inference of a neural network. Specifically, it proposes a diagonal correction and a further low-rank approximation to the Kronecker-factored eigenbasis for more accurate approximation of the Fisher information matrix and better scalability, respectively. The proposed diagonal correction is shown to have a smaller residual error in F-norm. 
Experiments are given to show that the proposed Laplace approximation makes more accurate uncertainty estimations. \\n\\nThe paper makes a certain contribution to existing Laplace approximations for the task in terms of accuracy and scalability. However, it is incremental and the novelty is a bit low, compared to many recent closely related works, for example,\\n\\nOptimizing Neural Networks with Kronecker-factored Approximate Curvature. 2015\\nPractical Gauss-Newton Optimisation for Deep Learning. 2017\\nFast Approximate Natural Gradient Descent in a Kronecker-factored Eigenbasis. 2018\\nA scalable Laplace approximation for neural networks. 2018\\nEigenvalue Corrected Noisy Natural Gradient. 2018\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #4\", \"review\": \"Laplace approximation has been an important tool for obtaining uncertainty estimation for deterministic models. To efficiently approximate the Hessian matrix for neural networks, Ritter et al. (2018) proposes to use K-FAC. Motivated by the relatively inaccurate approximations of K-FAC, this paper proposes to improve K-FAC approximations by combining eigen-basis corrections (EK-FAC, George et al., 2018) and diagonal corrections. The paper shows that the proposed method has smaller Frobenius approximation errors compared to K-FAC and EK-FAC. To further reduce the computational costs, the paper proposes a low-rank approximation by keeping only the L largest eigenvalues. Empirically, the paper demonstrates improved calibration and out-of-distribution entropies compared to previous approaches.\\n\\n# Diagonal Corrections. \\nWith the diagonal correction approximating the Fisher better, the paper shows the computation can still be conducted in the scale of W, which is similar to K-FAC. 
However, the method requires that the diagonal correction matrix D be always positive, which might not be true. Moreover, because the computation requires D^{-1}, clipping D to a small constant will bring up stability issues. I wonder how this problem is tackled in this paper. \\n\\n# Writing \\nThe paper's notations are messy, which requires a lot of guessing to understand the conveyed idea. \\n1) Notations are not introduced, such as $\\\\delta \\\\theta$, $W_{map}^{IV}$, the MN distribution in the appendix.\\n2) Notations contain typos. Eq(3) $N(0, A^{-1} \\\\otimes G^{-1}) = MN(0, G^{-1}, A^{-1})$; the bottom paragraph on Page 4; eq(8).\\n3) Notations are abused. In particular for $V$ and $\\\\Lambda$ when introducing EK-FAC. The paper uses $V$ for both the true eigenbasis and the EK-FAC eigenbasis. In addition, the EK-FAC part should be moved to the background section.\\n4) The paragraph below Corollary 1 says the authors can prove the proposed method also has closer approximations in terms of the Fisher inverse. But no proofs are given.\\n5) Caption of Figure 4.\\n\\n# Laplace Approximation \\nFor eq(4, 8, 12), the Hessian in Laplace approximations should be divided by $N$. Although the paper also mentions the scaling below those equations, technically eq(4, 8, 12) are wrong, and I don't know whether the experiments really did the scaling or not. \\n\\n# Low-rank Approximation\\n1) It is not clear why the low-rank approximation is necessary. The computational costs of computing the eigen-system of A and G are inevitable. Why do we need the low-rank approximation after that? K-FAC is not a computationally expensive method either. \\n2) I cannot understand the proofs of Lemma 2. In fact, I don't know what $I_{1:L}^{top}, I_{1:K}^{top}$ means. More explicit formulas should be given for clarity. \\n3) Lemma 4 states $I_{ii} = (\\\\hat{I}_{def})_{ii}$. Although the diagonal correction makes $I_{ii} = (I_{def})_{ii}$, the low-rank approximation makes them unequal again. 
Or I guess you use a different D in eq(13) from the D in eq(10)? \\n\\n# Experiments \\nThe paper needs more experiments to validate the proposed method. Firstly, for the MNIST experiments, it is better to use the same architecture as in Ritter et al. (2018) for direct comparisons. Beyond that, the adversarial attack experiment and the mis-classification uncertainty experiment in Ritter et al. (2018) seem to be good choices as well. \\n\\n# Overall\\nThe paper proposes a diagonally corrected EK-FAC, achieving a better approximation of the Fisher matrix. Interestingly, the paper shows that this correction doesn't add too much computation. However, the proposed low-rank approximation doesn't seem necessary. And the paper's notations and presentations are too messy to be an accepted paper.\"}", "{\"rating\": \"1: Reject\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #2\", \"review\": \"The contribution of the paper is marginal, as the principle of imposing Gaussians on the network to perform Bayes is not new. The Laplace-based approximation is half-baked and certainly much better techniques for Bayes exist in the recent literature; furthermore, its selection is not substantiated enough, and, of course, it represents no novelty. The experimental results are not convincing, as both the considered scenarios are limited and the comparisons are too poor (no consideration of state of the art alternatives). The provided corollaries are not actually helpful and should be put in the appendix.\"}" ] }
rJe_cyrKPB
GroSS Decomposition: Group-Size Series Decomposition for Whole Search-Space Training
[ "Henry Howard-Jenkins", "Yiwen Li", "Victor Adrian Prisacariu" ]
[ "We present Group-size Series (GroSS) decomposition, a mathematical formulation of tensor factorisation into a series of approximations of increasing rank terms. GroSS allows for dynamic and differentiable selection of factorisation rank, which is analogous to a grouped convolution. Therefore, to the best of our knowledge, GroSS is the first method to simultaneously train differing numbers of groups within a single layer, as well as all possible combinations between layers. In doing so, GroSS trains an entire grouped convolution architecture search-space concurrently. We demonstrate this with a proof-of-concept exhaustive architecture search with a performance objective. GroSS represents a significant step towards liberating network architecture search from the burden of training and finetuning." ]
[ "architecture search", "block term decomposition", "network decomposition", "network acceleration", "group convolution" ]
Reject
https://openreview.net/pdf?id=rJe_cyrKPB
https://openreview.net/forum?id=rJe_cyrKPB
ICLR.cc/2020/Conference
2020
{ "note_id": [ "9jjSWkFvGw", "Hyg1RnaIoS", "rkesS2pIoH", "Bkeqau6Lir", "BJlh8d6LoH", "HJloTnFxqB", "HkgEA-hk5r", "Hyx77nBRFS" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798734989, 1573473479195, 1573473346785, 1573472449562, 1573472339528, 1572015299172, 1571959244265, 1571867674529 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1883/Authors" ], [ "ICLR.cc/2020/Conference/Paper1883/Authors" ], [ "ICLR.cc/2020/Conference/Paper1883/Authors" ], [ "ICLR.cc/2020/Conference/Paper1883/Authors" ], [ "ICLR.cc/2020/Conference/Paper1883/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper1883/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper1883/AnonReviewer3" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"The authors use a Tucker decomposition to represent the weights of a network, for efficient computation. The idea is natural, and preliminary results promising. The main concern was lack of empirical validation and comparisons. While the authors have provided partial additional results in the rebuttal, which is appreciated, a thorough set of experiments and comparisons would ideally be included in a new version of the paper, and then considered again in review.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Response to Reviewer #3 (2 of 2)\", \"comment\": \"\\u201cThere is no comparison with existing work, e.g. parametrization of the network with Tucker [1, 2] or CP [3]\\u201d\\n\\nSince we do not propose new mechanics or optimisation explicitly for the decomposition of the weight tensors, at any particular rank configuration the final result should be similar to other papers which employ BTD for network compression, which have been compared to other works. 
A contribution of GroSS, however, is that it allows search between these rank configurations. As such, we did not optimise the fine-tuning or search strategy, in favour of demonstrating that GroSS did enable search in the grouped convolution space.\\n\\nTherefore, GroSS should be compared with rank selection methods such as VBMF, proposed in [4] (employed in [1]). We apply VBMF to our 4-layer network weights to estimate the rank of each layer, choosing the nearest value which satisfies the BTD-to-grouped-bottleneck requirements. We then set the decomposition to this predicted rank (16, 8, 16), fine-tune and search, as described in the paper. The results are shown below:\\n\\nConfig | MACs | Accuracy \\n------------------------ | -------- | ------------ \\nVBMF [4]: 16 8 16 | 2.51M | 83.33 (0.10)\\n2 32 64 (in paper) | 2.36M | 83.84 (0.12)\\n4 16 64 | 2.22M | 83.93 (0.13)\\n\\nWe are able to find lower-cost configurations with improved accuracy over the VBMF estimation. In fact, in [1] they stated, \\\"Although we can obtain very promising results with one-shot rank selection, it is not fully investigated yet whether the selected rank is really optimal or not.\\\" Here we demonstrate that GroSS is a tool capable of investigating this, and that VBMF is not optimal in our case. \\n\\n[4] Nakajima, Shinichi, et al. \\\"Global analytic solution of fully-observed variational Bayesian matrix factorization.\\\" Journal of Machine Learning Research 14.Jan (2013).\"}", "{\"title\": \"Response to Reviewer #3 (1 of 2)\", \"comment\": \"\\u201cThe method is interesting, however the novelty is low. There is already large bodies of work on parametrizing neural networks with tensor decomposition, including coupled decomposition.\\u201d\\n\\nWhile there are works on parameterising and factorising networks with tensor decomposition, rank selection for decomposition remains relatively unexplored. 
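As a side note on the MAC figures quoted in the tables in this thread: the cost of a grouped-bottleneck layer (1x1 projection, grouped k x k convolution, 1x1 expansion) can be sanity-checked with a short pure-Python sketch. The layer shapes below are hypothetical, chosen for illustration rather than taken from the networks above:

```python
def conv_macs(h, w, c_in, c_out, k, groups=1):
    # Multiply-accumulates of a k x k convolution over an h x w output map;
    # grouping divides the input channels seen by each filter.
    assert c_in % groups == 0 and c_out % groups == 0
    return h * w * k * k * (c_in // groups) * c_out

def grouped_bottleneck_macs(h, w, c_in, c_out, k, mid, groups):
    # 1x1 projection -> grouped k x k convolution -> 1x1 expansion,
    # the architecture a block term decomposition maps onto.
    return (conv_macs(h, w, c_in, mid, 1)
            + conv_macs(h, w, mid, mid, k, groups)
            + conv_macs(h, w, mid, c_out, 1))

full = conv_macs(32, 32, 64, 64, 3)                              # dense 3x3 layer
factored = grouped_bottleneck_macs(32, 32, 64, 64, 3, mid=32, groups=8)
assert factored < full   # 5,373,952 vs 37,748,736 MACs
```

Sweeping `mid` and `groups` in a cost model like this is what turns per-configuration accuracies into the accuracy-vs-MACs trade-offs reported above.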
\\nThe novelty of GroSS comes not from the mechanics of decomposition, where we can use standard BTD due to the process described in Eq.6, but instead from the formulation of the search space as the combination (and interaction) of a number of series components. Each of these can be changed on-the-fly, which therefore allows for simultaneous training. Here, we apply this series formulation to rank search for the factorisation of networks, but we envisage that similar series-based search can be used for a number of search tasks, such as the number of channels and kernel dimensions.\\n\\n\\u201cHow is the method different to training several network with the same Tucker parametrization but different ranks?\\u201d\\n\\nGroSS differs from the training of individual rank configurations since GroSS is able to train each rank factorisation of each layer, as well as all the possible combinations of ranks between layers. To explore our 4-layer network this would require 252 individual training runs, and 4^12 for our VGG16 network. However, GroSS allows for all these configurations to be fine-tuned simultaneously, in a single training run.\\n\\n\\u201cWhy the convolution R should be grouped? Should it not be a regular convolution?\\u201d\\n\\nWe perform BTD, rather than pure Tucker decomposition. BTD is the extension of Tucker decomposition, as it is the factorisation of a single tensor into the sum of multiple Tucker decompositions. When the number of Tuckers present in the BTD sum and the size of each Tucker kernel are set to specific values, as described in our paper, the factorisation becomes equivalent to the grouped bottleneck architecture. The full derivation of this can be found in (paper ref: Yunpeng et al., 2017).\\n\\n\\u201cThe model is a simple 4-layer network, not fully described. 
An established architecture, such as ResNet should be employed.\\u201d\\n\\n- We agree, and have now provided preliminary search results for VGG16 on CIFAR10, where we decompose all but the first convolutional layer into sizes [1, 4, 16, 32]; therefore, the number of possible configurations from our GroSS decomposition of VGG16 is 4^12. We implement a rudimentary breadth-first search to find configuration proposals. We can provide full implementation details of the search, network definition and fine-tuning strategy in an appendix.\\n\\nConfig | MACs | Accuracy\\n-------------------------------------------------- | -------------- | ----------\\nBaseline 4s | 9.36M | 91.00\\n1, 4, 1, 32, 1, 1, 16, 1, 4, 16, 32, 4 | 8.84M | 91.28\\nBaseline 16s | 29.04M | 91.48\\n1, 32, 16, 32, 16, 16, 32, 32, 32, 1, 4, 16 | 26.19M | 91.56\\nFull Network | 313.74M | 91.52 \\n\\nHere, we perform similar testing to that with our 4-layer network. We set baseline configurations, where every layer is set to the same rank (4 or 16). We then employ GroSS to find a configuration which requires fewer MACs, while achieving higher accuracy than the baseline. This was possible in each case, and notably one of our found configurations outperforms the original network before factorisation, requiring an order of magnitude fewer operations.\"}", "{\"title\": \"Response to Reviewer #1\", \"comment\": \"\\u201cOverall, this is a well-written and soundly derived contribution. However, it is quite niche and---while the authors frame it as a form of NAS---in my view, this contribution is more in the realm of hyperparameter search for grouped convolutions, and not NAS in general. 
I would recommend reframing the introduction to make this fact more explicit, as the approach does not provide a general strategy for differentiable NAS.\\u201d\\n\\nWhile we agree that the specific method discussed in the paper is applied only to grouped convolution search, we believe series-based representations of networks are a more general contribution to the architecture search task. We envisage a number of tasks where series-based search spaces can be trained, evaluated and searched using the same philosophy as GroSS, such as the number of channels present in a bottleneck layer or even kernel dimensions. We will aim to make the distinction between the series-based train-and-search philosophy and the specific application of GroSS for grouped convolution search.\\n\\n\\u201cIn addition, the empirical results are relatively shallow, with only one dataset and without detailed discussion of the variance of the results.\\u201d\\n\\nWe would like to point you towards the additional results presented in our response to Reviewer #3 for VGG16 on CIFAR10. We are continuing to work on more experiments, and hope to add them soon.\"}", "{\"title\": \"Response to Reviewer #2\", \"comment\": \"\\u201cI did not know about the expansion function, and while I trust the authors that it is correctly used, I would have like either more explanations on how it works or some reference.\\u201d\\n\\nAs far as we are aware, we are the first to exploit the expansion of grouped convolution weights. We can provide more detail on the appearance of the expansion; however, it depends on how the specific tensor/convolution library stores weights. Therefore, we hope to provide more intuition of how the expansion function works in the paper.\\n\\n\\u201cCan you justify the softmax and the very high temperature? For N = 8, s_1 will be sampled 98.2% of the time s_2 1.8% and the other sampling probabilities are close to neglibigle. 
While I understand it seems to work better in practice, it looks extremely aggressive.\\u201d\\n\\nWhen decomposing into multiple group sizes, each successive size in the series only aims to capture information not approximated by the previous order term. In Table 2, we show that even a depthwise factorisation of the network is able to recover almost all of the original accuracy (83.99 vs 81.74). Therefore, most of the energy of the approximation should be captured by the lowest rank term in the decomposition series. This provides intuition that the increasing rank terms in the series should be sampled with frequency that reflects the energy which they capture in the approximation, hence a high sampling temperature.\\n\\n\\u201cIn 4.4 you say you perform finetuning for 150 epochs, which is huge, while on the abstract you said \\\"GroSS represents a significant step towards liberating network architecture search from the burden of training and finetuning\\\". Can you comment?\\u201d\\n\\nWe found that for each single configuration, convergence was most reliably achieved by fine-tuning for 100 epochs. When a network is factorised using GroSS it is fine-tuned for a longer schedule of 150 epochs, but provides 252 configurations in our 4-layer network (4^12 configurations for our VGG16 decomposition). There are very likely more optimal fine-tuning strategies for both individual configurations and GroSS, but they still provide fair comparison to each other. 
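As an aside on the sampling-temperature exchange above, a quick numerical check of the quoted probabilities. The exact logits/temperature convention used by GroSS is not stated here, so the scale below is an assumption chosen to reproduce the reviewer's figures:

```python
import numpy as np

# Hypothetical parameterization: softmax over (negative) term indices with an
# assumed scale of 4. This reproduces the quoted 98.2% / 1.8% for N = 8, but
# the paper's actual logits and temperature convention may differ.
N, scale = 8, 4.0
logits = -scale * np.arange(N)
probs = np.exp(logits) / np.exp(logits).sum()

assert abs(probs[0] - 0.982) < 1e-3  # s_1 sampled ~98.2% of the time
assert abs(probs[1] - 0.018) < 1e-3  # s_2 sampled ~1.8% of the time
```

Under this reading, the sampling distribution is geometric with ratio e^-4 between successive terms, which is consistent with "extremely aggressive": all but the first two terms are sampled with probability below 0.04%.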
If both training strategies were optimised, we would still expect to see the number of additional configurations trained by GroSS to vastly exceed the relative increase in number of epochs.\"}", "{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"Summary\\n---\\nThis paper proposes to learn simultaneously all the parameters of grouped convolutions by factorizing the weights of the convolutions as a sum of lower rank tensors. This enables architecture search for the convolution parameters in a differentiable and efficient way.\\n\\nComments\\n---\\nI think this paper is well written, and was clear at least until 3.2. I believe some clarifications could be useful here: it is not written clearly that t' and u', the first 2 dimensions of the core, are R times smaller than t and u. There is some explanation in the bracket (3) but 1) it should be stated clearly in the text, 2) I believe there are several typos on the 4th line of bracket (3) making it hard to understand.\\n\\nI did not know about the expansion function, and while I trust the authors that it is correctly used, I would have liked either more explanations on how it works or some reference.\\n\\nCan you justify the softmax and the very high temperature? For N = 8, s_1 will be sampled 98.2% of the time, s_2 1.8%, and the other sampling probabilities are close to negligible. While I understand it seems to work better in practice, it looks extremely aggressive.\\n\\nIn 4.4 you say you perform finetuning for 150 epochs, which is huge, while in the abstract you said \\\"GroSS represents a significant step towards liberating network architecture search from the burden of training and finetuning\\\". 
Can you comment?\\n\\nAs you say GroSS is an alternative to NAS (for the convolution parameters, that is), is the GroSS method proposed really faster and more accurate than a NAS baseline for finding these architectures?\\n\\nI don't find the column titles in Table 3 to be always informative. \\\"After train\\\" means after the finetuning? It took me some time to realize the delta was the delta in accuracies; it is not very informative, and it was not clear to me for some time what it meant. Either the titles should be chosen more carefully or the caption should be more precise, I believe.\\n\\nIn figure 1, the legend should be more informative; at least incorporate an \\\"alpha\\\" or \\\"temperature\\\" title in the legend.\\n\\nConclusion\\n---\\nWhile the method is interesting, I am wondering whether GroSS enables more efficient architecture search than traditional methods, as there is still a long finetuning step; furthermore, it can only be applied to grouped convolution parameters. As the authors present it in the abstract and introduction as an alternative to NAS, I believe a comparison to a NAS would be needed.\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"Summary: The authors introduce GroSS---a reformulation of block tensor decomposition, which allows multiple grouped convolutions (with varying group sizes) to be trained simultaneously. The basic idea is to reformulate the BTD so that higher-order decompositions can be expressed as functions of lower-order decompositions. 
Given this nesting, it is possible to implicitly train the lower-order decompositions while training the higher-order ones.\\n\\nThe authors frame this contribution as a form of \\\"neural architecture search\\\" (NAS), arguing that this allows researchers to simultaneously train grouped-convolution CNNs with varying group sizes. After the simultaneous training based on the GroSS approach, the researcher can then select the group size that gives the best performance/accuracy tradeoff. The selected model can be further fine-tuned on the task, and the authors found that this improved performance. \\n\\nEmpirical results on the CIFAR-10 dataset show that the proposed approach performs as expected, allowing for the simultaneous training of CNNs with grouped convolutions of varying orders. The results show that the proposed approach can find \\\"better\\\" solutions than a simple search over fixed architectures.\", \"assessment\": \"Overall, this is a well-written and soundly derived contribution. However, it is quite niche and---while the authors frame it as a form of NAS---in my view, this contribution is more in the realm of hyperparameter search for grouped convolutions, and not NAS in general. I would recommend reframing the introduction to make this fact more explicit, as the approach does not provide a general strategy for differentiable NAS. 
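The nested-series idea this review describes can be mimicked in a plain matrix setting. The sketch below is an illustrative analogy only (successive truncated SVDs of the residual), not the paper's block tensor decomposition: each term models only what the lower-order partial sum missed, so the partial sums form a nested family of increasingly accurate approximations.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((32, 32))  # stand-in for a layer's weight matrix

def truncated_svd(M, r):
    """Best rank-r approximation of M."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

# Each successive term only captures the residual of the previous partial sum.
ranks = [1, 4, 16]
terms, residual = [], W.copy()
for r in ranks:
    t = truncated_svd(residual, r)
    terms.append(t)
    residual = residual - t

# Partial sums are nested: the "low-order" model is a prefix of the series.
errors = [np.linalg.norm(W - sum(terms[: k + 1])) for k in range(len(ranks))]
assert errors[0] > errors[1] > errors[2]  # fidelity increases with more terms
```

In this toy version, "implicitly training the lower-order decompositions" corresponds to the fact that updating the full series leaves its prefixes as valid lower-fidelity models.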
In addition, the empirical results are relatively shallow, with only one dataset and without detailed discussion of the variance of the results.\", \"reasons_to_accept\": [\"Well-written\", \"Sound and well-motivated algorithm\", \"Potential applications in cases where grouped convolutions are useful\", \"Empirical results demonstrate validity of the proposed approach\"], \"reasons_to_reject\": [\"Relatively niche contribution incorrectly framed as general contribution to NAS\", \"Limited empirical analysis (e.g., only one dataset).\"]}", "{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The authors propose to express the weights of a convolutional neural network as a coupled Tucker decomposition. The Tucker formulation allows for an efficient reformulation. The weights of the sum of Tucker decompositions are randomly set at each iteration during training, with each term of the sum having a different rank.\\n \\nThe method is interesting; however, the novelty is low. There are already large bodies of work on parametrizing neural networks with tensor decompositions, including coupled decompositions.\\n\\n\\nHow does the proposed method compare to the related method DART? And to a simple coupled decomposition?\\nHow is the method different from training several networks with the same Tucker parametrization but different ranks? What about memory and computational efficiency?\\nIn any case, these should be compared to.\", \"the_notation_should_be_kept_consistent_throughout\": \"e.g. either use t,u,v,w or d1,d2,d3,d4. Notation should be unified in the text and captions (e.g. Table 1).\\nIn 3.2, when specifying the size of G, should it be G_r? Same for B and C.\\n\\nWhy should the convolution R be grouped? 
Should it not be a regular convolution?\\n\\nFor ot', u' being the group-size, what is o? It was not introduced.\\n\\nThe response reconstruction is only useful if the same, uncompressed network is already trained, and would not be applicable for end-to-end training.\\n\\nThe model is a simple 4-layer network, not fully described. An established architecture, such as ResNet should be employed.\\n\\nExperiments should be carried out on ImageNet, or at least not just on CIFAR10.\\n\\nThere is no comparison with existing work, e.g. parametrization of the network with Tucker [1, 2] or CP [3].\\n\\n[1] Compression of Deep Convolutional Neural Networks for Fast and Low Power Mobile Applications, ICLR 2016\\n[2] T-Net: Parametrizing Fully Convolutional Nets with a Single High-Order Tensor, CVPR 2019\\n[3] Speeding-up Convolutional Neural Networks Using Fine-tuned CP-Decomposition, ICLR 2015\"}" ] }
SklD9yrFPS
Neural Tangents: Fast and Easy Infinite Neural Networks in Python
[ "Roman Novak", "Lechao Xiao", "Jiri Hron", "Jaehoon Lee", "Alexander A. Alemi", "Jascha Sohl-Dickstein", "Samuel S. Schoenholz" ]
Neural Tangents is a library for working with infinite-width neural networks. It provides a high-level API for specifying complex and hierarchical neural network architectures. These networks can then be trained and evaluated either at finite-width as usual or in their infinite-width limit. Infinite-width networks can be trained analytically using exact Bayesian inference or using gradient descent via the Neural Tangent Kernel. Additionally, Neural Tangents provides tools to study gradient descent training dynamics of wide but finite networks in either function space or weight space. The entire library runs out-of-the-box on CPU, GPU, or TPU. All computations can be automatically distributed over multiple accelerators with near-linear scaling in the number of devices. In addition to the repository below, we provide an accompanying interactive Colab notebook at https://colab.research.google.com/github/google/neural-tangents/blob/master/notebooks/neural_tangents_cookbook.ipynb
[ "Infinite Neural Networks", "Gaussian Processes", "Neural Tangent Kernel", "NNGP", "NTK", "Software Library", "Python", "JAX" ]
Accept (Spotlight)
https://openreview.net/pdf?id=SklD9yrFPS
https://openreview.net/forum?id=SklD9yrFPS
ICLR.cc/2020/Conference
2020
{ "note_id": [ "DCC3FVzwnE", "BylarJj2iH", "rygwzyihjB", "HyxHSX5hjr", "HJe9z9V3iH", "HyxjHFEnsr", "ryl8AD42oB", "S1g-lnZhir", "Bke8siZhiB", "HJxkA_b3sS", "B1gt-wb3or", "BkgVyRg2jB", "S1eqydg2iB", "ryxYhr0aKr", "H1emYsn6Yr", "BJgNczUaFB" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798734958, 1573855045510, 1573854991322, 1573851965491, 1573829137694, 1573828931043, 1573828557810, 1573817321057, 1573817245842, 1573816519313, 1573816065358, 1573813723869, 1573812193943, 1571837360776, 1571830651202, 1571803787946 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1882/Authors" ], [ "ICLR.cc/2020/Conference/Paper1882/Authors" ], [ "ICLR.cc/2020/Conference/Paper1882/Authors" ], [ "ICLR.cc/2020/Conference/Paper1882/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper1882/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper1882/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper1882/Authors" ], [ "ICLR.cc/2020/Conference/Paper1882/Authors" ], [ "ICLR.cc/2020/Conference/Paper1882/Authors" ], [ "ICLR.cc/2020/Conference/Paper1882/Authors" ], [ "ICLR.cc/2020/Conference/Paper1882/Authors" ], [ "ICLR.cc/2020/Conference/Paper1882/Authors" ], [ "ICLR.cc/2020/Conference/Paper1882/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper1882/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper1882/AnonReviewer2" ] ], "structured_content_str": [ "{\"decision\": \"Accept (Spotlight)\", \"comment\": \"This paper presents a software library for dealing with neural networks either in the (usual) finite limit or in the infinite limit. 
The latter is obtained by using the Neural Tangent Kernel theory.\\n\\nThere is variance in the reviewers' scores; however, there has also been quite a lot of discussion, which has been facilitated by the authors' elaborate rebuttal. The main points in favor and against are clear: on the positive side, the library is demonstrated well (especially after rebuttal) and is equipped with desirable properties such as usage of GPU/TPU, scalability, etc. On the other hand, a lot of the key insights build heavily on the prior work of Lee et al, 2019. However, judging novelty when it comes to a software paper is trickier, especially given that not many such papers appear in ICLR and therefore calibration is difficult. This has been discussed among the reviewers. \\n\\nIt would help if some further theoretical insights were included in this paper; these insights could come by working backwards from the implementation (i.e. what more can we learn about infinite width networks now that we can experiment easily with them?).\\n\\nOverall, this paper should still be of interest to the ICLR community.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Thank you for the quick response! A bit more on novelty\", \"comment\": \"Dear Reviewer 3,\\n\\nThank you for your very quick reply and for updating your score. 
We would still like to push back on the novelty aspect.\\n\\n>>> I agree that the current version of the codebase is considerably more developed than it was in early May but am still not particularly convinced that it's a distinct work given that it is fundamentally based in a similar language/functionality.\\n\\nWhile we certainly understand the initial confusion about the overlap with Lee et al, 2019, we believe that we have provided a substantial and precise summary of contributions that are specific to this submission, in both the rebuttal and updated text, backed up by a code diff (see https://openreview.net/forum?id=SklD9yrFPS&noteId=B1gt-wb3or ).\\n\\nWe do not understand how using the same programming (or mathematical) language as Lee et al. 2019 can be an issue, since this is standard practice. We wish to reiterate that the key feature of the library (\\u201cnt.stax\\u201d, specification and computation of exact infinite-width kernels: https://github.com/neural-tangents/neural-tangents/blob/master/neural_tangents/stax.py ) is completely new in our library, was not released, was not used, and was not necessary for Lee et al. 2019. It was not \\u201cmore developed\\u201d, but designed and implemented essentially from scratch.\\n\\nIn this light, and again appreciating the time and thought you have given to our work, we would ask you to reconsider your score again. Thank you!\"}", "{\"title\": \"Reply\", \"comment\": \">>> Edit: post rebuttal, I'm bumping my score to a weak reject but would have minimal qualms if this paper were to\\n>>> be accepted. I find the further experiments performed by the authors of very good quality overall, but I'm still not \\n>>> particularly satisfied by their `argument that the codebase itself is distinct enough from separate related work. It's \\n>>> unfortunately a bit hard, from a machine learning researcher side, to review the quality of a codebase in and of itself. 
\\n\\nWe appreciate that it is difficult to review a codebase. To help with this, we\\u2019d like to discuss the development of \\u201cneural_tangents.stax\\u201d, which is the main focus of this work and was developed entirely after May. There has been a significant amount of work over the past few years on computing NNGP and NT kernels for a growing set of architectures. In addition to noting that components ought to arbitrarily compose with one another (which I believe was not widely understood prior to this), development of \\\"neural_tangents.stax\\\" required arriving at efficient implementations for FC [1,2,3], CNN [4, 5, 6, 7], and pooling [5,7] kernels, which were known in the literature, along with FanOut, FanInSum, LayerNorm, parallel, and serial, which were not explicitly known. As far as we are aware, we are also the first to implement convolutions with arbitrary padding, shapes, and strides. Moreover, we wrote code to automatically parallelize this over large datasets. Finally, we note that our contribution also includes the neural tangent cookbook notebook which, as far as we are aware, includes the first computation of the mean and (now, thanks to reviewer 2) variance posterior prediction of the MSE loss. We believe that this represents a substantial research contribution.\\n\\n[1] Exponential expressivity in deep neural networks through transient chaos\\nPoole et al.; NeurIPS 2016\\n[2] Deep Neural Networks as Gaussian Processes\\nLee et al.; ICLR 2018\\n[3] Neural Tangent Kernel: Convergence and Generalization in Neural Networks\\nJacot et al.; NeurIPS 2018\\n[4] Dynamical Isometry and a Mean Field Theory of CNNs\\nXiao et al. 
; ICML 2018\\n[5] Bayesian Deep Convolutional Networks with Many Channels are Gaussian Processes\\nNovak et al.; ICLR 2019\\n[6] Deep Convolutional Networks as shallow Gaussian Processes\\nGarriga-Alonso et al.; ICLR 2019\\n[7] On Exact Computation with an Infinitely Wide Neural Net\\nArora et al.; NeurIPS 2019\"}", "{\"title\": \"Great questions!\", \"comment\": \">> A couple of follow-up questions arise though that may be useful for further questions\\n>> 1) What depths of networks were typically found throughout the different datasets? \\n\\nGreat question! We looked at it for FC Relu and Resnet + Fixup NTKs. While we do not think it really belongs in the paper, for your interest here is the plot: https://github.com/neural-tangents/neural-tangents/blob/master/iclr_figures/optimal_depth.pdf. We see that mostly single layer networks were selected for both architectures. It would be interesting to do future research, in Neural Tangents, to see if we can understand this!\\n\\n>> 2) Is there any suggestion as to why fixup initialization schemes ought to perform better?\\n \\nResidual networks cause gradients to explode which translates to poor conditioning of the NTK. It\\u2019s likely that fixup-style initialization schemes improve this conditioning. 
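A toy simulation of the effect just described. The sketch below tracks forward-signal norms only, uses a made-up 1/sqrt(depth) branch scaling as a stand-in for fixup, and is not an NTK computation:

```python
import numpy as np

rng = np.random.default_rng(0)
width, depth = 256, 50

def forward_norms(branch_scale):
    """Norms of activations through a stack of residual ReLU blocks."""
    x = rng.standard_normal(width) / np.sqrt(width)
    norms = []
    for _ in range(depth):
        W = rng.standard_normal((width, width)) / np.sqrt(width)
        x = x + branch_scale * np.maximum(W @ x, 0.0)  # residual ReLU block
        norms.append(float(np.linalg.norm(x)))
    return norms

plain = forward_norms(1.0)                    # vanilla residual branches
scaled = forward_norms(1.0 / np.sqrt(depth))  # fixup-like depth-aware rescaling

# Plain residual stacks blow up exponentially with depth; this kind of forward
# (and backward) explosion is what can translate into poor NTK conditioning.
assert plain[-1] > 100 * scaled[-1]
```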
Again, this would be interesting research to perform using Neural Tangents.\"}", "{\"title\": \"Thank you for the revisions\", \"comment\": \"Will number these for clarity:\\n\\n1) Thanks for the updates on the comparison to the fully bayesian setting, I appreciate this comparison.\\n\\n2) Thanks for specifically spelling out what's being done here - it makes the paper considerably more legible.\\n\\n3) I had thought that might be the case - thank you for being specific as to how these matrices are being inverted.\"}", "{\"title\": \"Thank you for the further experiment\", \"comment\": \"Thanks for providing the experiment with the UCI datasets, I find it to be a quite interesting demonstration of what your library can do, and as such I've bumped my score to weak reject.\\n\\nA couple of follow-up questions arise though that may be useful for further questions\\n1) What depths of networks were typically found throughout the different datasets? \\n2) Is there any suggestion as to why fixup initialization schemes ought to perform better?\"}", "{\"title\": \"Thank you for the clarification\", \"comment\": \"Thank you for the clarification here - interesting that Hayou et al, 2019 cite the library as its own paper by late May (in my understanding).\\n\\nI agree that the current version of the codebase is considerably more developed than it was in early May but am still not particularly convinced that it's a distinct work given that it is fundamentally based in a similar language/functionality.\\n\\nApologies for the terse reply.\"}", "{\"title\": \"[1/4] Addressing Major Comments\", \"comment\": \"Thanks for your careful review of our work. We\\u2019re happy that you enjoyed the paper, found the library easy to use, and that you might use it in the future! 
We hope that this fact alone helps to convince you that researchers in the community might benefit from learning about Neural Tangents at ICLR.\\n\\n----------------------------------------------------------------------------------------------------\\n>>> While I really enjoyed reading the paper and believe that this library could be extremely practically useful, I vote to reject this paper because I do not feel that it has sufficient novelty to be a paper on its own in light of Lee et al, 2019. [...]\\n\\n>>> However, my primary concern with this paper is that it\\u2019s not sufficiently distinct from the previous work of Lee et al, 2019. After all, most of the experiments in that paper would have required the type of implementation that is described in greater detail in this paper.\\n\\nWe would like to clarify the relationship between this work and Lee et al. 2019. We have added a summary of this discussion to section A in the revised version of the manuscript. \\n\\nTL;DR \\n\\n1) Neural Tangents (NT) is emphatically different from the code of Lee et al, 2019 at the time of their paper submission (larger by thousands of lines of code (LOC)), and \\n\\n2) The features in NT extend far beyond what was open-sourced with Lee et al, 2019 at the time of submission _and_ what could have been necessary for that paper.\", \"specifically\": [\"1) Raw code difference\", \"Lee et al, 2019 at the time of submission: https://github.com/google/neural-tangents/tree/d42cc0f0281001d5885ed3969b61d69c8ccf4a15\", \"Our codebase at HEAD: https://github.com/neural-tangents/neural-tangents\", \"Most importantly, the +9,500/-2,500 LOC diff: https://github.com/neural-tangents/neural-tangents/compare/Lee_et_al_2019..master (we imported their code into a separate branch of our repo to show the difference)\", \"2) Feature difference: at the time of the submission of Lee et al. 
2019, their open-sourced code only had the following features (see github link above):\", \"Linearization (equivalent of \\\"nt.linearize\\\"),\", \"Single-sample estimate of the empirical ntk (equivalent of \\\"nt.empirical_ntk_fn\\\"),\", \"Finite-time output mean evolution (equivalent of \\\"nt.predict.gradient_descent_mse\\\", \\\"nt.predict.gradient_descent\\\", \\\"nt.predict.momentum\\\").\", \"One can easily check that the vast majority of experiments in Lee et al. only used this functionality of Neural Tangents. Only a few experiments used very simple fully-connected ReLU kernels, which were not produced with Neural Tangents but an internal Tensorflow implementation (known via personal correspondence with the authors). These kernels are not unto themselves specific to Lee et al 2019; for example the arccosine kernel dates back to Cho and Saul in 2009.\", \"Any post-submission developments in their repository that were neither used nor necessary for their paper should not be considered published results but rather work concurrent to ours. [P.S. on an unrelated note, one could argue that even treating the original Lee et al, 2019 codebase as published is debatable, since they themselves, along with at least one other paper (https://arxiv.org/pdf/1905.13654v2.pdf) cited the code as a separate unpublished work]\", \"NT has all the features of Lee at al 2019 (with authors\\u2019 permission) at the time of submission, and:\", \"Most notably, a high-level modular library \\\"nt.stax\\\" to specify and do inference with infinite NTK/NNGPs analytically for many NN layers. This is the highlight of the paper and was not used in / released with / necessary for Lee et. 
al, 2019.\", \"Multi-device GPU/TPU support.\", \"Parallelizable Monte-Carlo sampling of NTK and NNGP kernels.\", \"Taylor series function expansion.\", \"A richer suite of prediction functions including finite/infinite time NTK/NNGP mean/covariance prediction.\", \"Unification of all of these features to work together seamlessly.\", \"We hope this, together with the updates in the text, helps clarify the contributions of our paper.\"]}", "{\"title\": \"[2/4] Addressing Major Comments\", \"comment\": \"----------------------------------------------------------------------------------------------------\\n>>> To be able to vote to accept this paper, I will have to see an experiment that is practically performed with the current library in order to distinguish it from previous work (specifically Lee et al, 2019). In recent work, Arora et al, 2019 (Note: I do not consider this reference in my review other than to be mentioned as an example of an experiment that could be run with your library) run neural tangent kernels on tabular data using kernel SVMs. One other potential example would be a kernel SVM in this manner on CIFAR-10. An alternative example would be to exploit the Gaussian process representation and test out both NTKs and NNGPs in comparison to standard kernels for GPs and NNs on UCI regression tasks.\\n\\nPlease note that _almost all_ the experiments in our paper used features specific to our library only. Precisely:\\n\\n- Figures 2, 3 require computing exact kernels for the infinitely WideResNet. \\n- Figure 5 demonstrates scalability to multi-gpu machines.\\n- All code listings demonstrate the main feature of our library which is seamless definition of any neural network architecture of both finite and infinite widths at no extra mental/typing cost.\\n- Figure 6 (new revision) uses higher-order taylor expansion of a neural network.\\n\\nWe stress that _none_ of the above were open-sourced or used in / necessary for Lee et al, 2019 [P.S. 
on a minor note, Figure 1 used the analytic Erf kernel, derived but not released / used in Lee et al, 2019]\\n\\nAll this having been said, we agree that it would be nice to have a practical demonstration of the convenience provided by Neural Tangents on the example of Arora et al.\\u2019s results on the UCI dataset, as you suggested. Arora et al. provided clean code to reproduce their experiments. We seamlessly substituted the NTK implementation of Arora et al. with Neural Tangents. As a result, we were able to consider a wider range of architectures, finding that by selecting models on a per-experiment basis we were able to provide a marginal improvement of the Arora et al. result from 81.95% to 82.03%. We include a discussion of this experiment in Appendix C.\\n\\n\\n----------------------------------------------------------------------------------------------------\\n[...] >>> Again, I am very concerned with originality in comparison to Lee et al, 2019. Even checking out the link to their codebase provides a github repo that is quite similar to this one. Given that ICLR is a venue of similar domain to NeurIPS, it\\u2019s not clear to me why this paper ought to be anything other than a separate supporting tech report. If this paper had been submitted to something like SysML (edit: or JMLR MLOSS), I would see the distinctness instead.\\n\\nPlease see replies above. TL;DR code released with the submission of Lee et al. 2019 had a tiny fraction of the functionality our library offers, and (from personal correspondence) their paper neither used nor had to use the main features of our library (flexible, general, and efficient specifications and evaluation of exact NNGP/NTK kernels). 
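For concreteness, this is what "exact NNGP/NTK kernels" look like in the simplest case: an infinite-width ReLU MLP, assuming no biases, unit weight variance, and the NTK parameterization. The pure-NumPy sketch below uses the degree-1 arc-cosine kernel of Cho & Saul for the layer map and the standard NTK recursion; it is an illustration, not the library's implementation:

```python
import numpy as np

def relu_nngp_ntk(x1, x2, depth=3):
    """Analytic NNGP and NTK for an infinite-width, bias-free ReLU MLP."""
    d = x1.shape[0]
    k11, k22, k12 = x1 @ x1 / d, x2 @ x2 / d, x1 @ x2 / d
    ntk = k12  # input layer
    for _ in range(depth):
        c = np.clip(k12 / np.sqrt(k11 * k22), -1.0, 1.0)
        theta = np.arccos(c)
        # Degree-1 arc-cosine kernel (Cho & Saul, 2009) for the ReLU layer map.
        k12_new = np.sqrt(k11 * k22) * (np.sin(theta) + (np.pi - theta) * c) / (2 * np.pi)
        kdot = (np.pi - theta) / (2 * np.pi)  # E[relu'(u) relu'(v)]
        ntk = k12_new + ntk * kdot
        k11, k22, k12 = k11 / 2, k22 / 2, k12_new
    return k12, ntk

# For identical inputs, the NNGP variance halves per layer and the NTK follows.
nngp, ntk = relu_nngp_ntk(np.ones(4), np.ones(4), depth=3)
assert abs(nngp - 0.125) < 1e-12 and abs(ntk - 0.5) < 1e-12
```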
We again stress that this can be verified by the diff link mentioned above (https://github.com/neural-tangents/neural-tangents/compare/Lee_et_al_2019..master), and we are happy to clarify any other questions regarding the overlap.\\n\\n\\n----------------------------------------------------------------------------------------------------\\n>>> Clarity: I find the paper to be extremely well-written and easy to follow. The addition of code snippets throughout is very well done, even if it\\u2019s a bit overkill. I don\\u2019t know what adding a half page long description of an infinitely wide WideResNet adds to the paper when that space could be better used by another experiment.\", \"we_have_decided_to_highlight_the_wideresnet_snippet_since_it\": \"1) Presents exactly the use-case that would be extremely tedious / not practical at all to implement without our library. Our library handles all the topology, striding, padding, performance optimizations etc, while allowing to specify the infinite networks simultaneously with the finite model, at _absolutely no_ extra mental effort.\\n\\n2) Gives the reader a non-trivial, practical example of using our library for complex models.\\n\\nNonetheless, we agree that results on the UCI dataset might be useful as well and so we have added them in section C in the new revision!\"}", "{\"title\": \"[3/4] Addressing Minor Comments\", \"comment\": \"----------------------------------------------------------------------------------------------------\\n>>> In Figure 1 on the right, I would have liked to have seen the posterior predictive for a NNGP with the same kernel as well. \\n\\nGreat idea, done in the new revision (Figure 1, left in red).\\n\\n\\n----------------------------------------------------------------------------------------------------\\n>>> In Figure 2, why is the NNGP slower to converge to the analytic values here? 
Obviously, the rates of convergence are the same, but the constants seem different.\\n\\nCurrently we are not aware of any rigorous results explaining the respective rates or constants. A very naive take is that empirical NTK, as an outer product of Jacobians, sums over a larger number of [admittedly dependent] random entries (\\\"O(N^2 * d)\\\", where \\\"N\\\" is width and \\\"d\\\" is depth) than NNGP, which is an outer product of the activations (\\\"O(N)\\\"). However, since the same random variables are involved in the computation of the NTK and NNGP, we are not certain the observed effect is not architecture / dataset dependent.\\n\\n\\n----------------------------------------------------------------------------------------------------\\n>>> In Figure 3 (and throughout the experiments), does \\u201cfull Bayesian inference for CIFAR10\\u201d mean that you treated the classification labels as regression targets? If so, how was classification error measured.\\n\\nYou are correct, the targets were converted to mean-zero vectors like \\\"[-0.1, \\u2026, 0.9, \\u2026, -0.1]\\\", where 0.9 is assigned to the correct class index, and -0.1 to all others. The error was computed as the \\\"1-accuracy\\\" where accuracy is the fraction of samples where the argmax of the model output is at the correct class index. We have updated the description of Figure 3 in the new revision.\\n\\n\\n----------------------------------------------------------------------------------------------------\\n>>> In Section 3.1, you mention that the library \\u201cleverages block-diagonal structure\\u201d to invert the CIFAR10 covariance matrix for each class (still 50k x 50k). Possibly this is because I haven\\u2019t had the chance to use TPUs, but I\\u2019m currently struggling to see how one could form and invert (via Choleskys) matrices of this size (50k x 50k) on a standard GPU (or CPU). 
Could the authors please clarify how they did this (whether through iterative methods, another structure exploiting trick, lots of memory, etc.)?\\n\\nThank you for the question.\\n\\n1) A 32-bit 50k x 50k matrix has a size of ~10 GB, which (together with auxiliary variables like targets and the train-test kernel matrix) is pushing the limit of many modern GPUs/TPUs, and inference is indeed not feasible on these accelerators. \\n\\n2) However, calling \\\"[jax.]scipy.linalg.solve(..., sym_pos=True)\\\" is perfectly doable on a CPU, and runs in about 3 minutes on a laptop with a 2.9 GHz Intel Core i9 (6 cores) and 32 GB of RAM for a 50k x 50k training set kernel matrix and 50k x 10 training targets. [P.S. due to a technical bug in JAX/XLA (https://github.com/google/jax/issues/1644) at the time of writing \\\"jax.scipy.linalg.solve\\\" fails for matrices larger than 46,340 x 46,340, but the issue is unrelated to compute/memory and the original \\\"scipy.linalg.solve\\\" works fine for 50k.]\\n\\n3) Our library makes it easy to leverage both fast GPUs and the typically larger amount of CPU RAM by computing the kernel on [multiple] GPUs and performing inference on the CPU. For this, the user only needs to pass \\\"store_on_device=False\\\" to the \\\"batch\\\" decorator (https://github.com/neural-tangents/neural-tangents/blob/408c07d938458bbe80da3e66e420eb1fb84cbe33/neural_tangents/utils/batch.py#L395), i.e. 
computing the kernel in batches on [multiple] GPUs and collecting it into a single matrix for further inference in the CPU RAM.\\n\\nWe have expanded the discussion in the new revision (section B.3) to mention the above.\"}", "{\"title\": \"[4/4] Summary Diff Table\", \"comment\": \"For your convenience, we provide in this comment the diff https://github.com/neural-tangents/neural-tangents/compare/Lee_et_al_2019..master\\n\\nand the brief table of differences between the code released by Lee et al., 2019 at the time of their submission, and our work.\\n\\nThank you again for the careful review. We hope that, having addressed your concerns regarding the differences between this work and Lee et al., you will consider increasing your score.\\n\\n
| Codebase | Released with Lee et al., 2019 | Ours |
|---|---|---|
| Lines of code | 1400+ | 6600+ |
| Empirical kernel | NTK | NTK/NNGP |
| Weight space linearization | Yes | Yes |
| Higher-order Taylor series expansion | No | Yes |
| Monte-Carlo sampling for empirical NTK/NNGP | No | Yes |
| Multi-device parallelization | No | Yes |
| Dense layer | No | Yes |
| Nonlinearities | No | ReLU, Erf, Abs, LeakyRelu, ABReLU |
| Convolution | No | Any paddings, strides, filter shapes |
| Average pooling | No | Global, local, any strides / shapes |
| Flattening | No | Yes |
| LayerNorm | No | Yes |
| Skip-connection | No | Yes |
| Global self-attention | No | Yes |
| Finite-time inference of the posterior | NTK, Mean | Mean, covariance, NNGP/NTK |
| Infinite-time inference of the posterior | No | Mean, covariance, NNGP/NTK |
| Dropout | No | Coming soon |
| Standard (non-NTK) parameterization | No | Coming soon |
\"}", "{\"title\": \"AnonReviewer2 Rebuttal\", \"comment\": \"Thank you for the careful review; we\\u2019re happy you found our library useful and beneficial to the ML community. 
Below, we believe we have addressed your concerns, and we hope you can increase your score as a result.\\n\\n\\n----------------------------------------------------------------------------------------------------\\n>>> 1. The theory and formulae of NTKs and NNGPs were well developed. This work mostly consists of implementing and modularizing them. The research contribution is relatively low.\\n\\nICLR explicitly calls for \\u201cimplementation issues, parallelization, software platforms, hardware\\u201d (https://iclr.cc/, bottom). We believe that Neural Tangents will unlock qualitatively new avenues for research by making computations on infinite networks tractable for non-experts and orders of magnitude easier for theoretical practitioners. This is increasingly true as work on infinite networks continues to attract interest from the community.\\n\\nOn a separate note, significant intellectual effort went into designing and efficiently implementing the library while keeping it scalable, flexible, and easy to use (see section 3.2 [new revision number]). By way of analogy, there is an immense gap between knowing the mathematical formulae of a convolutional layer and having a general and user-/accelerator-friendly implementation. We believe gaps of this kind should not be underestimated, and the novelty of our approach is itself a research contribution.\\n\\n\\n----------------------------------------------------------------------------------------------------\\n>>> 2. As commented in the paper and if I understand correctly, the current library cannot scale to large datasets for CNNs with pooling. This would make the computation much more expensive (and probably infeasible without additional techniques and huge computing power) as mentioned in [Novak et al. 2019] and [Arora et al. 2019]. However pooling seems extremely useful for NTKs and NNGPs on image datasets. 
I think this makes this work somewhat less exciting than it may sound.\\n\\nYou are correct that CNN-GPs/NTKs with pooling are _very_ compute-hungry. However, we would like to highlight that:\\n\\n1) We did successfully run experiments on 8K CIFAR10 subsets for a WideResNet with pooling in Figure 3, and we have further run pooling experiments on the 45K CIFAR10 training set and achieved a slight improvement over the prior state of the art in [Arora et al. 2019] with our library (see the table below; GAP = global average pooling, best values marked in **bold**).\\n\\n
| Model | NNGP acc | NTK acc | NNGP loss | NTK loss |
|---|---|---|---|---|
| WResNet-LayerNorm-depth_28 | 73.7 | 72.8 | 0.0501 | 0.0501 |
| CNN-GAP-Relu-depth_10 | 78.84 | **77.84** | 0.0454 | **0.0462** |
| CNN-GAP-Relu-depth_20 | **79.38** | 76.98 | **0.0447** | **0.0462** |
| CNN-GAP-Erf-depth_10 | 71.32 | 71.3 | 0.0538 | 0.054 |
| Arora et al. 2019 [GAP] | - | 77.43 | - | - |
| Li et al. 2019 [GAP] | 78.49 | 77.63 | - | - |
\\n[Arora et al. 2019] https://arxiv.org/pdf/1904.11955v2.pdf\\n[Li et al. 2019] https://arxiv.org/pdf/1911.00809v1.pdf\\n\\n\\n2) Our library provides the [parallelizable] \\\"nt.monte_carlo_fn\\\" method to Monte-Carlo estimate compute-heavy kernels, and we have established reasonable convergence for a WideResNet in Figure 2. The question of how good a tradeoff between accuracy and time / memory the MC method provides admittedly remains open and is left for future work.\\n\\n3) Pooling CNN kernels are arguably an emerging field of study, and we believe that as groups with large computing power demonstrate their good performance, studying these kernels (e.g. on small datasets) and developing novel approximation / mimicking / \\u201cinspired-by\\u201d techniques will attract a lot of research attention. We believe our library will facilitate such research greatly, and serve as a platform to deliver new results to the users.\"}", "{\"title\": \"Suggestions implemented!\", \"comment\": \"Thank you for the careful review and great suggestions! 
We believe we have addressed all your comments, and hope that you can increase your score as a result.\\n\\n\\n--------------------------------------------------------------------------------------------------\\n>>> I would like to see an additional metric for performance comparison of probabilistic models, which is often used in the GP literature: mean negative log probability.\\n\\nThank you for the suggestion; we have added negative log likelihood (NLL) measurements in the updated version. Using the model\\u2019s marginal likelihood on the training set, we performed model selection across different depths and plotted accuracy / mean squared error / marginal NLL. In the appendix (Figure 7), we included test NLLs for fully connected and convolutional models. Since the predictive covariance of the WideResNet kernel has a high condition number (due to the pooling layers, see https://openreview.net/pdf?id=Bkx1mxSKvB section C), obtaining numerically stable NLL measures was more challenging.\\n\\n\\n--------------------------------------------------------------------------------------------------\\n>>> It would also be interesting to see how the posterior variance (e.g., Fig. 1 right) evolves over the entire space during training. \\n\\nThank you for the suggestion, done in the new revision!\\n\\n\\n--------------------------------------------------------------------------------------------------\\n>>> I would have preferred a more detailed discussion about the implementation on transforming tensor ops to kernel ops in Section 3.\\n\\nAgreed: in the new revision, we have expanded the text with section 3.1 demonstrating the tensor-to-kernel ops translation.\\n\\n\\n--------------------------------------------------------------------------------------------------\\n>>> For the summary of contributions, can you give the corresponding section number to refer to when you demonstrate each feature? 
For example, is the 4th feature (i.e., exploring weight space perspective) demonstrated in the paper?\\n\\nGreat suggestion, done in the new revision. We have also added an experiment demonstrating linearization / taylor expansion (4th feature, section B.6, Figure 6). Please also see the existing example in `examples/weight_space.py` and https://github.com/neural-tangents/neural-tangents#weight-space.\\n\\n\\n--------------------------------------------------------------------------------------------------\\n>>> Can the authors elaborate on the ease of expanding their library for the new developments in this field?\\n\\nThank you for the question, we have elaborated on the process of extending the library to new layers in the new revision in section B.7 (see also new section 3.1 for the mathematical aspect of deriving new NTK/NNGP results). In general, we believe the process to be fairly straightforward, apart from the cases of:\\n\\n- Certain nonlinearities: to derive the layer kernel propagation expression, the user has to compute the covariance of the nonlinearity (and its derivative, for NTK) applied to correlated Gaussian variables. As discussed in section E (new revision), some such nonlinearities may not have known exact expressions for these covariances, and either \\\"nt.empirical_kernel_fn\\\" or other specialized approximations need to be employed.\\n\\n- Weight sharing between different layers in the network is not currently supported and may require some nontrivial work, but it is on our radar.\\n\\nFinally, once we de-anonymize the repository, we will be using the Github issue and project tracker to inform and engage the community in the library development and planning, and provide support for users and developers!\\n\\n\\n-------------------------------------------------------------------------------------------------\\n>>> Minor issues:\\n\\n>>> Page 1: Gaussian Procesesses?\\n>>> Page 4: it\\u2019s infinite?\\n>>> Fig. 
4: I would have preferred the indices to be placed as subscripts instead of superscripts.\\n>>> Page 8: it\\u2019s order of dimensions?\\n\\nThank you, all fixed in the new revision except for the Figure indices: we stick to superscript usage to follow an established tradition in prior work [1-4, ...] of using superscripts for layer numbers and subscripts for hidden units / channels.\\n\\n[1] Jaehoon Lee, Yasaman Bahri, Roman Novak, Sam Schoenholz, Jeffrey Pennington, and Jascha Sohl-Dickstein. Deep neural networks as Gaussian processes. https://arxiv.org/pdf/1711.00165.pdf\\n\\n[2] Alexander G. de G. Matthews, Jiri Hron, Mark Rowland, Richard E. Turner, and Zoubin Ghahramani. Gaussian process behaviour in wide deep neural networks. https://arxiv.org/pdf/1804.11271.pdf\\n\\n[3] Roman Novak, Lechao Xiao, Jaehoon Lee, Yasaman Bahri, Greg Yang, Jiri Hron, Daniel A. Abolafia, Jeffrey Pennington, and Jascha Sohl-Dickstein. Bayesian deep convolutional networks with many channels are Gaussian processes. https://arxiv.org/pdf/1810.05148\\n\\n[4] Adri\\u00e0 Garriga-Alonso, Carl Edward Rasmussen, and Laurence Aitchison. Deep convolutional networks as shallow Gaussian processes. https://arxiv.org/pdf/1808.05587.pdf\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"Summary: A JAX-based neural tangents kernel library is introduced, with native GPU, TPU, and XLA support. Due to the correspondences with infinite neural network kernels (NNGPs), these kernels can also be computed for (essentially) free. Layers which do not admit an analytical form (e.g. Tanh or Softplus) can be implemented using Monte Carlo estimates. 
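To make that idea concrete (a minimal sketch, not the library's actual implementation — the toy network, scalings, and function name here are assumptions for illustration only):

```python
import numpy as np

def ntk_mc_estimate(x1, x2, width=512, n_samples=100, seed=0):
    """Monte-Carlo estimate of the NTK of a toy one-hidden-layer tanh network
    f(x) = v . tanh(W x) / sqrt(width), averaged over random draws of (W, v)."""
    rng = np.random.default_rng(seed)

    def jacobian(x, W, v):
        # Hand-written Jacobian of the scalar output w.r.t. all parameters.
        a = np.tanh(W @ x)
        df_dv = a / np.sqrt(width)                                # d f / d v
        df_dW = np.outer(v * (1.0 - a ** 2), x) / np.sqrt(width)  # d f / d W
        return np.concatenate([df_dv, df_dW.ravel()])

    total = 0.0
    for _ in range(n_samples):
        W = rng.standard_normal((width, x1.shape[0]))
        v = rng.standard_normal(width)
        total += jacobian(x1, W, v) @ jacobian(x2, W, v)  # <J f(x1), J f(x2)>
    return total / n_samples
```

As the width and number of samples grow, such an estimate concentrates around the analytic infinite-width NTK; per the authors' rebuttal, the library automates this for arbitrary architectures.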
Several engineering-based experiments are performed demonstrating the potential scalability of their library.\\n\\nWhile I really enjoyed reading the paper and believe that this library could be extremely useful in practice, I vote to reject this paper because I do not feel that it has sufficient novelty to be a paper on its own in light of Lee et al., 2019.\", \"edit\": \"post rebuttal, I'm bumping my score to a weak reject but would have minimal qualms if this paper were to be accepted. I find the further experiments performed by the authors to be of very good quality overall, but I'm still not particularly satisfied by their argument that the codebase itself is distinct enough from the separate related work. It's unfortunately a bit hard, from a machine learning researcher's side, to review the quality of a codebase in and of itself.\", \"significance\": \"Having played around with the code a bit, I find that the library itself is of very high quality and is pretty straightforward to use. I could definitely see myself using this library in the future for research work.\\n\\nHowever, my primary concern with this paper is that it\\u2019s not sufficiently distinct from the previous work of Lee et al., 2019. After all, most of the experiments in that paper would have required the type of implementation that is described in greater detail in this paper. \\n\\nTo be able to vote to accept this paper, I will have to see an experiment that is practically performed with the current library in order to distinguish it from previous work (specifically Lee et al., 2019). In recent work, Arora et al., 2019 (Note: I do not consider this reference in my review other than to be mentioned as an example of an experiment that could be run with your library) run neural tangent kernels on tabular data using kernel SVMs. One other potential example would be a kernel SVM in this manner on CIFAR-10. 
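For concreteness, that style of experiment can be sketched even without the library, using the standard closed-form (arc-cosine) NTK of a one-hidden-layer ReLU network and, to keep the sketch dependency-free, kernel ridge regression on +/-1 targets in place of an SVM; all names and data below are illustrative:

```python
import numpy as np

def relu_ntk(X, Z):
    """Closed-form (up to scaling) NTK of a one-hidden-layer ReLU network:
    Theta = Sigma1 + Sigma0 * Sigma1_dot, with arc-cosine kernel Sigma1."""
    S0 = X @ Z.T
    nx = np.linalg.norm(X, axis=1)[:, None]
    nz = np.linalg.norm(Z, axis=1)[None, :]
    cos = np.clip(S0 / (nx * nz), -1.0, 1.0)
    theta = np.arccos(cos)
    S1 = nx * nz / (2 * np.pi) * (np.sin(theta) + (np.pi - theta) * cos)
    S1_dot = (np.pi - theta) / (2 * np.pi)
    return S1 + S0 * S1_dot

# Toy stand-in for a tabular task: regress +/-1 targets with kernel ridge,
# then read the class out as the sign of the prediction.
rng = np.random.default_rng(0)
X_tr, X_te = rng.standard_normal((80, 5)), rng.standard_normal((20, 5))
y_tr = np.sign(X_tr[:, 0] + X_tr[:, 1])
y_te = np.sign(X_te[:, 0] + X_te[:, 1])
alpha = np.linalg.solve(relu_ntk(X_tr, X_tr) + 1e-6 * np.eye(80), y_tr)
accuracy = np.mean(np.sign(relu_ntk(X_te, X_tr) @ alpha) == y_te)
```

With the Gram matrix precomputed this way, swapping in an actual SVM (e.g. scikit-learn's `SVC(kernel="precomputed")`) is a one-line change.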
An alternative example would be to exploit the Gaussian process representation and test out both NTKs and NNGPs in comparison to standard kernels for GPs and NNs on UCI regression tasks.\", \"originality\": \"Again, a very efficient and easy to use implementation of neural tangent kernels would be a great boost to the community. This is doubly so as Jax is easy and pretty straightforward to use and is quite numpy like.\\n\\nAgain, I am very concerned with originality in comparison to Lee et al, 2019. Even checking out the link to their codebase provides a github repo that is quite similar to this one. Given that ICLR is a venue of similar domain to NeurIPS, it\\u2019s not clear to me why this paper ought to be anything other than a separate supporting tech report. If this paper had been submitted to something like SysML (edit: or JMLR MLOSS), I would see the distinctness instead.\", \"clarity\": \"I find the paper to be extremely well-written and easy to follow. The addition of code snippets throughout is very well done, even if it\\u2019s a bit overkill. I don\\u2019t know what adding a half page long description of an infinitely wide WideResNet adds to the paper when that space could be better used by another experiment.\", \"quality\": \"I find the experiments performed to be very well constructed. Below are a few mostly minor comments on the experiments:\\n\\nIn Figure 1 on the right, I would have liked to have seen the posterior predictive for a NNGP with the same kernel as well. \\n\\nIn Figure 2, why is the NNGP slower to converge to the analytic values here? Obviously, the rates of convergence are the same, but the constants seem different.\\n\\nIn Figure 3 (and throughout the experiments), does \\u201cfull Bayesian inference for CIFAR10\\u201d mean that you treated the classification labels as regression targets? 
If so, how was classification error measured?\\n\\nIn Section 3.1, you mention that the library \\u201cleverages block-diagonal structure\\u201d to invert the CIFAR10 covariance matrix for each class (still 50k x 50k). Possibly this is because I haven\\u2019t had the chance to use TPUs, but I\\u2019m currently struggling to see how one could form and invert (via Choleskys) matrices of this size (50k x 50k) on a standard GPU (or CPU). Could the authors please clarify how they did this (whether through iterative methods, another structure-exploiting trick, lots of memory, etc.)?\", \"second_edit\": \"I was also unable to respond to the final comment about the UCI experiments in its own comment, but thank you for providing the estimated depths. These results definitely show the potential software promise of the codebase and open some interesting new research questions as a result.\", \"references\": \"Arora, S., et al., Harnessing the Power of Infinitely Wide Deep Nets on Small-data Tasks, https://arxiv.org/abs/1910.01663\\n\\nLee, J., et al., Wide Neural Networks of any Depth Evolve as Linear Models Under Gradient Descent, NeurIPS, 2019, https://arxiv.org/abs/1902.06720\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"POST-REBUTTAL COMMENTS\\n\\nI appreciate the response from the authors. \\n\\nI particularly like the comparison table in the response to the other reviewer, which ought to be highlighted in the paper.\\n\\nIf I were to start this line of research, I would be inclined to expand on the codebase. The contribution is significant. 
Hence, I am bumping up my score to accept.\\n\\n\\nPRIOR FEEDBACK\\n\\nThe contribution of this work lies in providing a library for working with the existing variants of infinite-width neural networks, avoiding the need to derive the NNGP and NT kernels for each architecture by hand. The authors first show performance comparisons between inference with finite vs. infinitely wide neural networks. The authors then go into some implementation details of their library. The authors have provided the code and cookbook in the links provided in the abstract. Overall, I like this effort, which is timely.\", \"some_additional_suggestions_below\": \"I would like to see an additional metric for performance comparison of probabilistic models, which is often used in the GP literature: mean negative log probability.\\n\\nIt would also be interesting to see how the posterior variance (e.g., Fig. 1 right) evolves over the entire space during training. \\n\\nI would have preferred a more detailed discussion about the implementation on transforming tensor ops to kernel ops in Section 3.\\n\\nFor the summary of contributions, can you give the corresponding section number to refer to when you demonstrate each feature? For example, is the 4th feature (i.e., exploring the weight space perspective) demonstrated in the paper?\\n\\nCan the authors elaborate on the ease of expanding their library for the new developments in this field?\", \"minor_issues\": \"\", \"page_1\": \"Gaussian Procesesses?\", \"page_4\": \"it\\u2019s infinite?\\nFig. 
4: I would have preferred the indices to be placed as subscripts instead of superscripts.\", \"page_8\": \"it\\u2019s order of dimensions?\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This work develops a library for working with a class of infinitely wide neural networks, in particular those corresponding to neural tangent kernels (NTKs) and neural network Gaussian processes (NNGPs). The theory for these two kernels was well developed in a series of recent papers, and this library provides an automatic way to transform any appropriate neural net architecture into its corresponding NTK and NNGP.\\n\\nInfinitely wide neural networks have been a popular subject of theoretical research and have been observed to have highly nontrivial performance on a variety of tasks (e.g. CIFAR-10 classification). It's really nice to see the development of such a library, which I believe could benefit the deep learning community a lot, especially for theoretical research on NTK.\\n\\nI appreciate this work a lot. Currently I can only give weak accept instead of accept for a couple of reasons:\\n1. The theory and formulae of NTKs and NNGPs were well developed. This work mostly consists of implementing and modularizing them. The research contribution is relatively low.\\n2. As commented in the paper and if I understand correctly, the current library cannot scale to large datasets for CNNs with pooling. This would make the computation much more expensive (and probably infeasible without additional techniques and huge computing power) as mentioned in [Novak et al. 2019] and [Arora et al. 2019]. However pooling seems extremely useful for NTKs and NNGPs on image datasets. 
I think this makes this work somewhat less exciting than it may sound.\"}" ] }
SJgw51HFDr
Sparse Weight Activation Training
[ "Md Aamir Raihan", "Tor M. Aamodt" ]
Training convolutional neural networks (CNNs) is time consuming. Prior work has explored how to reduce the computational demands of training by eliminating gradients with relatively small magnitude. We show that eliminating small magnitude components has limited impact on the direction of high-dimensional vectors. However, in the context of training a CNN, we find that eliminating small magnitude components of weight and activation vectors allows us to train deeper networks on more complex datasets versus eliminating small magnitude components of gradients. We propose Sparse Weight Activation Training (SWAT), an algorithm that embodies these observations. SWAT reduces computations by 50% to 80% with better accuracy at a given level of sparsity versus the Dynamic Sparse Graph algorithm. SWAT also reduces memory footprint by 23% to 37% for activations and 50% to 80% for weights.
[ "Sparsity", "Training", "Acceleration", "Pruning", "Compression" ]
Reject
https://openreview.net/pdf?id=SJgw51HFDr
https://openreview.net/forum?id=SJgw51HFDr
ICLR.cc/2020/Conference
2020
{ "note_id": [ "qhsMxvSKZR", "rylmGRcnjB", "SkekD9c2oS", "S1euzFc2iB", "BJlFU6ucjr", "ryxlrvZqjB", "SylCZRFtor", "Hyx9DUqmsH", "rklaf8q7jB", "Bye0erq7jr", "SylcTNqXsH", "ryxGyqqk9H", "S1xw9KbyKH" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review" ], "note_created": [ 1576798734928, 1573854731262, 1573853782967, 1573853456404, 1573715280761, 1573685048257, 1573654021695, 1573262946070, 1573262869384, 1573262581785, 1573262530421, 1571953113536, 1570867599090 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1881/Authors" ], [ "ICLR.cc/2020/Conference/Paper1881/Authors" ], [ "ICLR.cc/2020/Conference/Paper1881/Authors" ], [ "ICLR.cc/2020/Conference/Paper1881/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper1881/Authors" ], [ "ICLR.cc/2020/Conference/Paper1881/AnonReviewer4" ], [ "ICLR.cc/2020/Conference/Paper1881/Authors" ], [ "ICLR.cc/2020/Conference/Paper1881/Authors" ], [ "ICLR.cc/2020/Conference/Paper1881/Authors" ], [ "ICLR.cc/2020/Conference/Paper1881/Authors" ], [ "ICLR.cc/2020/Conference/Paper1881/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper1881/AnonReviewer3" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"The paper is proposed a rejection based on majority reviews.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Thank you for your review\", \"comment\": \"We have added the comparison with the lottery ticket hypothesis in the appendix. Also, we have added a detailed performance estimation on the sparse accelerator in Appendix C.\"}", "{\"title\": \"Thank you for your review\", \"comment\": \"We have resolved all of your mentioned issues and have added most of the rebuttal discussions either in the appendix or clarified in the paper. 
As requested by you and Reviewer 3, we have added a detailed performance estimation on the sparse accelerator in Appendix C.\"}", "{\"title\": \"Thank you for your response\", \"comment\": \"We have added most of the rebuttal discussions either in the appendix or clarified in the paper. We have added a detailed performance estimation on the sparse accelerator in Appendix C.\", \"summary\": \"\", \"storage_overhead\": \"Under the assumption of uniformly distributed sparsity, the storage overhead of the index array grows logarithmically with the sparsity. The exact growth factor is Log2(S/(1-S)), where S is the sparsity.\", \"computation_overhead\": \"The ratio of index computation overhead to the sparse computation is inversely proportional to the output channel size. The proportionality constant is a property of the accelerator and is independent of the weight and input activation sizes. Therefore, the overhead is small compared to the benefit obtained by sparse computations. The full derivation is in Appendix C.\"}", "{\"title\": \"Thanks for addressing my comments\", \"comment\": \"Dear authors,\\n\\nThanks for the efforts in resolving my comments.\\n\\nI think the discussions in the rebuttal will be very helpful for readers to better understand the contribution. And I would suggest adding the discussions to the appendix.\", \"i_will_keep_the_current_rating_and_i_will_raise_the_score_to_7_if_the_following_results_can_be_presented\": \"The authors mentioned the indexing overhead can be small in the emerging architectures for sparse accelerators. Could the authors use some of the efficiency reports in the referred papers to give us a more grounded estimate of efficiency? My only concern here is that FLOPS is not the most trustworthy metric in many scenarios [1], and domain readers might be more convinced if a better evaluation metric like (estimated) latency were reported.\\n\\n[1] ProxylessNAS: Direct Neural Architecture Search on Target Task and Hardware. 
Cai et al.\"}", "{\"title\": \"Response to Reviewer 4\", \"comment\": \"Thank you for your feedback and questions.\\n\\nFirst, we would like to clarify that the objective of the paper is not to prune the model but rather to train the network faster (with fewer computations and less memory bandwidth). We just happen to do that by sparsifying computation. SWAT does indeed have the side effect of pruning the network, but it does not do so nearly as well as papers that set out with pruning as their primary objective which was never the goal of our work. To the best of our knowledge, the large body of literature on pruning increases training time since those works either employ an expensive 3-stage pruning pipeline or introduce additional computations for computing parameter saliency. There is a large body of work that tries to accelerate training by reducing the number of iterations to reach convergence (e.g., Batch Normalization, ADAGRAD, ADAM, RMSPROP). Our work differs in that it focuses on how to reduce the computation per iteration. Prior work we are aware of that tackles the problem of accelerating training by reducing computation per iteration are meProp [a] and DSG [b] which we compared against in the paper.\\n\\nSecond, SWAT sparsifies the entire training process i.e. sparsifies the forward as well as the backward pass.\", \"novelty\": \"Our novelty is showing that the network can be trained with high sparsity without loss in accuracy and the algorithm generalizes well even to complex architectures such as deeper and wider networks on large datasets. \\n\\nWhile not stated explicitly in the current draft, we believe the following is an important and novel aspect of our submission. At ICLR 2019 the Lottery Ticket Hypothesis [c] (best paper winner) showed the difficulty of training with a sparse architecture and that sparse training is very sensitive to initial conditions. 
The Lottery Ticket paper showed that if one picks the right initial conditions for the weights, one can train a sparse network. SWAT is interesting in that it does train a sparse network without the need for oracle information about initialization values. \\n\\nWe believe the crucial difference that enables SWAT to work despite the observation in the Lottery Ticket Hypothesis paper is the following. SWAT updates which weights are part of the sparse network rather than attempting to train a single unchanging sparse network. SWAT may work because it dynamically searches for the sparse architecture that will work with a given set of initial conditions.\\n\\nWhy the Top-K function?\\nIn the appendix we have shown that the Top-K function is the sparsifying function which causes the minimum deviation in cosine distance between the original vector and its sparsified instance. Therefore, we have used the Top-K function for sparsification. There are works such as [d] and [e] which have shown that cosine similarity is a useful metric for measuring convergence.\", \"comparison_to_other_state_of_the_art_pruning_models\": \"Again, our objective is NOT to prune the model (at any training cost) but rather to accelerate the training of a given network. Pruning was just an interesting by-product of our algorithm. We believe that if one is concerned with model compression, one could take the resulting network achieved by using SWAT (more quickly trained) and then apply one of the many pruning algorithms to it. 
In other words, the works are orthogonal.\\n\\n[a] meprop: Sparsified back propagation for accelerated deep learning with reduced overfitting.\\n[b] Dynamic Sparse Graph for Efficient Deep Learning.\\n[c] The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks.\\n[d] Scalable Methods for 8-bit Training of Neural Networks.\\n[e] The High-Dimensional Geometry of Binary Neural Networks.\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #4\", \"review\": \"I am the emergency reviewer. Sorry for the late.\", \"this_paper_studies_a_very_interesting_topic\": \"eliminating small magnitude components of weight and activation vector instead of eliminating small magnitude components of gradients. A clear interpretation and definition towards the forward and backward propagation is presented. The difference between meProp versus SWAT is also plain. Based on some experiment results shown in Figure2, authors announced that accuracy is extremely sensitive to sparsification of output gradients. Thus algorithms SWAT and SAW are proposed to prune the model, which are respectively training with sparse weights and activations, and SWAT only sparsifies the backward pass. Top-K selection is implemented to select which components are set to zero during sparsification.\", \"strengths\": \"1. The writing logic ascends step by step.\\n2. Authors showed the harmfulness of the sparsity of gradients by experiment results. Also the comparison between the sparsity of weights and activations are meaningful.\\n3. Sufficient experiments are done to generalize SWAT to different models, and the results are fascinating on ImageNet.\", \"weaknesses\": \"It's a borderline paper. \\n1. lack of novelty. 
The paper has shown a lot of experimental results on basic models, but the proposed Top-K algorithm is not novel. Why is it Top-K and not other metrics for selecting zero components? In this view, the paper is likely to be a project summary.\\n2. Limited comparison to other basic pruning models. More experiments should be done to compare SWAT with other state-of-the-art pruning models. Then the results will be convincing.\"}", "{\"title\": \"Response to Reviewer 3\", \"comment\": \"Thank you for your feedback and questions. We provide answers below.\\n\\n1. The main point of our paper is to reduce the computation required to train. The improved accuracy we observed was only seen for small datasets (like CIFAR 10). The results in our submission are based upon three runs. We have run more experiments and believe the observed effect may simply have arisen due to random variation (due to different random seeds). As the improvements that were shown were small and not the main point of our paper, we will remove the claim about improved accuracy from the revised paper and include revised figures averaging over more runs. That said, to answer the question: the training accuracy for these cases was uniformly close to 100% for dense and SWAT up to 40% sparsity.\\n\\n2. The metric we are using for measuring \\u201cdegree of convergence\\u201d is the number of epochs it takes to reach the saturation accuracy. So our observation that the rate of convergence is not impacted follows from our observation that the change in accuracy from one epoch to the next for validation accuracy saturates around the same epoch for both SWAT and dense training. As shown in Figure 5, when the learning rate is 0.1 (i.e. between epochs 0 and 30) the SWAT algorithm reaches the saturation accuracy around the 15th epoch, approximately the same epoch at which the baseline algorithm also reaches saturation. Similarly, when the learning rate is 0.01 (i.e. 
between epochs 0 and 40) both SWAT and the baseline saturate at epoch 35.\\n\\nSecond, we want to clarify that we do not use any early stopping criteria but rather run a fixed number of epochs.\\n\\n\\n3. We have empirically confirmed that this approach works, as follows. We define \\u201cTop-K period\\u201d as the number of iterations between computing the threshold for Top-K. The table below shows top-1 validation accuracy from single runs for CIFAR 100 on ResNet-18 with different Top-K periods (i.e., Top-K is computed after every 10, 20, 50 and 100 iterations respectively). This data suggests the converged accuracy is indeed not impacted significantly (if at all) when employing our proposed efficient Top-K implementation.\\n\\nTop-K period\\tSparsity 70%\\tSparsity 90%\\n1 iteration\\t76.41\\t73.81\\n10 iterations\\t76.59\\t73.64\\n20 iterations\\t76.03\\t73.45\\n50 iterations\\t76.06\\t74.09\\n100 iterations\\t76.52\\t73.29\"}", "{\"title\": \"Response to Reviewer 3\", \"comment\": \"4. Figure 7 includes all the operations for the convolutional layer, batch-normalization layer, linear layer and the Top-K operation, assuming Top-K is implemented using the BFRT+thresholding operation. The GFLOPs calculation doesn\\u2019t consider how these operations would be implemented in sparse format.\\n\\nWe believe SWAT would be well suited for emerging sparse accelerator hardware designs that contain dedicated hardware (e.g., for indexing). There have been many recent publications describing such sparse accelerators [1-3] and they show significant performance and energy improvement even in the presence of irregular sparsity. We have left evaluation of the runtime of SWAT on such accelerators as future work.\\n\\n[1] SCNN: An Accelerator for Compressed-sparse Convolutional Neural Networks\\n[2] Cambricon-X: An accelerator for sparse neural networks\\n[3] Cnvlutin: Ineffectual-neuron-free deep neural network computing\\n\\n5. 
Figure 8(a) shows the memory access reduction for weights only and Figure 8(b) shows the reduction for activations only, both only for the backward pass. Depending on the batch size, weights and activations represent 50-70% of total memory traffic in the backward pass. SWAT does not reduce the overhead of activation gradients, but these are immediately consumed (when evaluating the prior layer during backprop) and could therefore potentially (depending on batch size) even be stored in on-chip memory. \\n\\n6. Yes, activations and weights of BN layers are indeed not sparsified in SWAT. Empirically, we found that sparsifying the BN weights and activations is harmful to convergence. This is because the weight (gamma) of a BN layer is a scaling factor for an entire output channel; therefore, making even a single BN weight (gamma) zero makes the entire output channel zero. Similarly, dropping activations affects the mean and variance computed by BN. Empirically we found that the BN layer is extremely sensitive to changes in the per-channel mean and variance. For example, when ResNet18 is trained on CIFAR 100 using SWAT with 70% sparsity and we sparsify the BN layer activations, accuracy is degraded by 4.9% compared to training with SWAT without sparsifying the BN layers. Therefore, the activations of the batch-normalization layer are not sparsified. \\n\\nThe parameters in a BN layer constitute less than 1.01% of the total parameters in the network and the total computation in the BN layer is less than 0.8% of the total computation in one forward and backward pass. Therefore, not sparsifying batch-normalization layers only affects the activation overhead in the backward pass. Currently we are working on reducing the activation overhead for the batch-normalization layer.\"}", "{\"title\": \"Response to Reviewer 1 (Minor Things)\", \"comment\": \"Thank you for your feedback and questions. 
We provide answers below.\\nMinor Comments\\n1- We will summarize the algorithm in section 2.2 in the final version.\\n\\n2- We wanted to decrease the training time for running many experiments. Experimentally, we found that the network learns the most in the first learning regime (epoch 0 to 30) and therefore the network should be trained for the same number of epochs in the first learning regime. In the second (epoch 30 to 60) and the third learning regime (epoch 60 to 90), we realized that the accuracy attained after 10 epochs is close to the saturation accuracy attained in that learning regime. Therefore, we decrease the duration of the second and third learning regimes to only 10 epochs instead of 30 epochs each, thereby reducing the total number of epochs by 40, which reduces the training time by 44%. If needed, we will include results for training for 90 epochs for some architectures.\\n\\n3- Could you please elaborate and let us know which section of the paper you are referring to?\\n\\n4- We will cite and add necessary comments for any of the missing references.\"}", "{\"title\": \"Response to Reviewer 1\", \"comment\": \"Thank you for your feedback and questions. We provide answers below.\\n\\nSome Comments\\n1- First, we want to clarify that Top-K weights and activations are used for computing gradients but dense gradients are used for updating the weights. Specifically, the convolution between the sparse input activation and the dense output activation gradient will create a dense weight gradient. Thus, updates are not limited to only the Top-K components. Even parameters that have been dropped during the forward and backward pass get updated, i.e., the entire weight gradient is used to update the parameters and there is no masking of the weight gradient. 
\\n\\nThus, SWAT doesn\\u2019t perform hard elimination of the parameter and it allows the algorithm to capture the dynamic sparsity present in the model, since a parameter eliminated at some early iteration may come back in a later training iteration because of the updates. \\n\\nThe SWAT algorithm is trying to capture the dynamic sparsity in the model; therefore, the Top-K operation should be performed periodically, but the period can be increased as the training proceeds because the chosen Top-K parameters are unlikely to change in later training iterations. To demonstrate this, we perform a new experiment. We train ResNet-18 on CIFAR 100 for 150 epochs (learning rate decayed at epochs 50 and 100) using a version of SWAT which computes Top-K periodically but with an increasing Top-K period for later epochs. The Top-K schedule used during training is shown below:\\n\\nEpoch 0 to 50: three times per epoch\\nEpoch 50 to 100: once per epoch\\nEpoch 100 to 150: once per 5 epochs\\n\\nThe table below shows top-1 accuracy:\\nTop-K period\\tSparsity 70%\\tSparsity 90%\\nAbove Top-K schedule\\t76.36\\t73.63\\n1 iteration\\t76.41\\t73.81\\n\\nNote: 1 epoch has 392 iterations.\\n\\n2- Yes, the paper reports only the theoretical compute and memory bandwidth reduction. The exact performance and energy benefit would be proportional to the reported savings but will depend on the underlying hardware. Note: achieving practical speed-up on GPUs would be difficult because of the unstructured sparsity, but the speed-up could be obtained on CPUs and sparse accelerators. 
There are many works which have proposed sparse accelerators for exploiting unstructured sparsity such as [1-3].\\n\\n[1] SCNN: An Accelerator for Compressed-sparse Convolutional Neural Networks\\n[2] Cambricon-X: An accelerator for sparse neural networks\\n[3] Cnvlutin: Ineffectual-neuron-free deep neural network computing\\n\\n3- All of the regularization based pruning work focused on reducing the network weights by using regularizer during training for promoting weight sparsity. Sparse weights will only accelerate one part of backward pass i.e. input activation gradient computation will be fast but not the weight gradient computation. The weight gradient computation will still involve convolution between dense activations and dense output activation gradients. The computation in both parts of the backward pass is roughly of the same order as shown below\\n\\n \\t\\t\\t WeightGradientComputation InputActivationGradientComputation\\n\\tResNet-18\\t\\t 3.6 GFLOP\\t\\t\\t5.5 GFLOP\\n\\tVGG-16\\t\\t 30.7 GFLOP\\t\\t\\t30.7GFLOP\\n\\tDenseNet-121\\t\\t5.7 GFLOP\\t\\t\\t6.4 GFLOP \\nTherefore, speedup for early regularization based pruning would be limited by the weight gradient computation. Moreover, such works generally use more expensive optimization methods such as proximal gradient descent compared to standard gradient descent. \\n\\n4- Yes, for example, instead of sorting individual values, channel-based sorting method could be used. The importance of channel could be measured or learned during the training itself. One other sorting approach would be to consider the parameter as well as its gradient during sorting. These are interesting future directions and should be investigated but it is beyond the scope of our current work. 
\\n\\nSecond, these methods do not lose accuracy since their compression ratio is small, for example only 27% compression in compression-aware training for ImageNet dataset; whereas, SWAT achieves much higher compression rate such as 50% with only 0.26-1.1% loss in accuracy for ImageNet. Note: SWAT training does not employ any fine-tuning so users may perform fine-tuning later to further reduce the gap.\"}", "{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper proposes SWAT as a training algorithm for sparse networks on different architectures. The paper claims being able to reach a level of sparsity with no drop in accuracy. The goal is to minimize the computations during training time. To this end, SWAT sets to zero the vectors where necessary. Different from other approaches, SWAT uses sparse computation in the forward and backward passes. The intuition behind is that eliminating small components does not have an impact on the training process but can be used to minimize the computation required.\", \"some_comments\": [\"The paper is a bit on the empirical side with a decent number of experiments to demonstrate the effectiveness of the proposal. I am on the border between accepting and rejecting.\", \"The top-K implementation is interesting. Page 7 suggests the top-K do not change during training which is reasonable as the update is limited to those components. Would it be possible to avoid completely that compute and quickly select K early in the training process? I would find that an interesting future direction.\", \"In the experimental section, I missed actual numbers. At the moment, if I understand correctly, the paper is based on theoretical compute savings. 
How feasible is this considering the sparsity of the operation (assuming unstructured sparsity)?\", \"In the case of structured sparsity, how does this differ from the early pruning process of regularization-based pruning algorithms? For instance, in the first reference (compression-aware training), the authors claim the model can be compressed early in training. If that is the case, how different is SWAT from those types of methods? In those related works, the accuracy does not drop. Implementation-wise, those algorithms do make the backward pass also sparse (setting the gradients to 0).\", \"At the moment, the algorithm is using magnitude-based sorting. Would it be possible to have other sorting approaches?\"], \"minor_things\": [\"for clarity, I would summarize the algorithm in section 2.2 rather than in the appendix.\", \"I am surprised by the ImageNet training setting. Why train for only 50 epochs? The standard training process is 90 epochs, changing the learning rate at the 30th and 60th epochs.\", \"I guess the S% sparsity contribution can be improved (rephrased). If the training algorithm sets N parameters to zero, it seems to me obvious that there will be no drop in accuracy compared to that training process. What is the drop in accuracy referring to?\", \"check the references. While the list is quite comprehensive, some of them are not referred to in the text. Please add comments where appropriate.\"]}", "{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper studies training neural networks with sparse weights and sparse activations (SWAT training). 
By using sparse weights in forward passes as well as sparse weights and activations in backward passes, SWAT can reduce the computation overhead and also reduce the training memory footprint. The primary contributions of the paper are threefold: 1) The authors empirically compare the impact of (activation) gradient sparsity and weight + activation sparsity on the model performance---the comparison shows that the weight + activation sparsity has less influence on the model accuracy; 2) Across different models on the CIFAR and ImageNet datasets, SWAT can reduce the training flops by 50% to 80% while using a roughly 2 to 3x smaller training memory footprint (weights + activations); 3) The authors empirically study why training using top-K based sparsification can attain strong model accuracy---the magnitude-based top-K approach can roughly preserve the directions of the vectors.\\n\\nI think the claimed contributions are well-validated in general. The design decisions of the approach are well supported by empirical observations and the components of the approach (different top-K methods) are studied properly. Additionally, I like the authors' synthetic-data studies to shed light on why top-K based sparsity can work well. Given the above reasons, I give weak accept and I am willing to raise the score if the following questions / concerns can be resolved in the rebuttal / future draft:\\n\\n1. In results such as in Figure 4, we observe that using intermediate levels of sparsity can actually demonstrate better generalization performance than the dense baseline training approach. I was wondering if this is because the default hyperparameter produces better training loss in sparse training than in dense training, and consequently the sparse training test performance is also improved over dense training. 
Without showing this, it is not fully convincing that intermediate sparsity helps prevent overfitting and generalizes better (as the authors discussed in the text).\\n\\n2. For \\\"Impact on Convergence\\\" in section 3.2, it is not clear to me what the authors are using as a metric for the degree of convergence. Thus I cannot evaluate the claims here.\\n\\n3. For \\\"Efficient Top K implementation\\\" in section 3.2, the authors suggest only computing the K-th largest elements periodically to further improve efficiency. However, empirical evidence of whether this approach will significantly degrade the model performance at the end of training is not provided.\\n\\n4. For the GFLOPS comparison in Figure 7, could the authors elaborate on what operations are included in the count? As sparse operations require additional indexing operations for computation, I was wondering whether the GFLOPS can realistically reflect the real latency / energy efficiency of the SWAT approach.\\n\\n5. How is the memory access count calculated at the end of page 7? Is it counting the number of floating-point values (activations, activation gradients, weights) that need to be fetched for the forward and backward pass?\\n\\n6. In the first paragraph on page 8 (last paragraph above section 4), do the authors imply that the activations of BN layers are not sparsified? Could the authors provide a bit more evidence on how (and why) sparsification of BN activations impacts the model performance?\"}" ] }
B1xwcyHFDr
Learning Robust Representations via Multi-View Information Bottleneck
[ "Marco Federici", "Anjan Dutta", "Patrick Forré", "Nate Kushman", "Zeynep Akata" ]
The information bottleneck principle provides an information-theoretic method for representation learning, by training an encoder to retain all information which is relevant for predicting the label while minimizing the amount of other, excess information in the representation. The original formulation, however, requires labeled data to identify the superfluous information. In this work, we extend this ability to the multi-view unsupervised setting, where two views of the same underlying entity are provided but the label is unknown. This enables us to identify superfluous information as that not shared by both views. A theoretical analysis leads to the definition of a new multi-view model that produces state-of-the-art results on the Sketchy dataset and label-limited versions of the MIR-Flickr dataset. We also extend our theory to the single-view setting by taking advantage of standard data augmentation techniques, empirically showing better generalization capabilities when compared to common unsupervised approaches for representation learning.
[ "Information Bottleneck", "Multi-View Learning", "Representation Learning", "Information Theory" ]
Accept (Poster)
https://openreview.net/pdf?id=B1xwcyHFDr
https://openreview.net/forum?id=B1xwcyHFDr
ICLR.cc/2020/Conference
2020
{ "note_id": [ "pComHFNCL", "Sximmm9ox-", "Pr4eFU2vJ", "SJgPFlDnoH", "HJgPS1v2oH", "HygAlJDhsH", "rJgX9T8njB", "rJgolpUhsS", "BJxd8x7rqH", "SkglAno0YS", "HJx75ajTtr" ], "note_type": [ "official_comment", "comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1583167741176, 1582774298036, 1576798734899, 1573838975285, 1573838655335, 1573838581533, 1573838218780, 1573838066653, 1572315216470, 1571892424025, 1571827082769 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Paper1880/Authors" ], [ "~Xuefeng_Du1" ], [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1880/Authors" ], [ "ICLR.cc/2020/Conference/Paper1880/Authors" ], [ "ICLR.cc/2020/Conference/Paper1880/Authors" ], [ "ICLR.cc/2020/Conference/Paper1880/Authors" ], [ "ICLR.cc/2020/Conference/Paper1880/Authors" ], [ "ICLR.cc/2020/Conference/Paper1880/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper1880/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper1880/AnonReviewer2" ] ], "structured_content_str": [ "{\"title\": \"Obtaining paired input data\", \"comment\": \"Hi, thank you for expressing interest in our research and for the insightful question.\\nThe unsupervised multi-view setting approached in this paper refers to scenarios in which two redundant sources of information are accessible, while the target label is not.\\n\\nExamples of datasets that fit into this framework include pictures obtained with multi-lens cameras, audio signals from different microphones in the same room, consecutive frames in temporally consistent videos and multi-lingual corpora amongst many others, which have been described in previous multi-view learning literature [Xu 2013, Li 2018]\\n\\nEven when redundant sources of information are not available, it is possible to exploit known properties of the down-stream task and apply independent data augmentations to 
produce a multi-view dataset without having access to any label information (as visualized in Figure 1 and described in section 3.3 and 5.2).\\n\\nFor this reason, we believe the unsupervised multi-view setting described in this work does not impose any fundamental restriction on the standard unsupervised settings, even if defining meaningful data augmentation strategies or identifying redundant sources of information can be challenging depending on the end-goal task and specific data-generating process.\"}", "{\"title\": \"Questions with respect to the paired input data\", \"comment\": \"Hi, recently I read this paper and I found the idea really appealing but I have some concerns with respect to the \\\"unsupervised setting\\\" proposed in the paper. It seems like we have to input two images within the same class to the model (claimed in the beginning of section 3) but in that we are in the \\\"unsupervised\\\" setting, how can we obtain the paired labeled data without knowing the labels, especially when we are dealing with large-scale dataset.\\n\\nAny suggestions will be appreciated. \\n\\nThanks,\"}", "{\"decision\": \"Accept (Poster)\", \"comment\": \"This paper extends the information bottleneck method to the unsupervised representation learning under the multi-view assumption. The work couples the multi-view InfoMax principle with the information bottleneck principle to derive an objective which encourages the representations to contain only the information shared by both views and thus eliminate the effect of independent factors of variations. Recent advances in estimating lower-bounds on mutual information are applied to perform approximate optimisation in practice. The authors empirically validate the proposed approach in two standard multi-view settings.\\nOverall, the reviewers found the presentation clear, and the paper well written and well motivated. 
The issues raised by the reviewers were addressed in the rebuttal and we feel that the work is well suited for ICLR. We ask the authors to carefully integrate the detailed comments from the reviewers into the manuscript. Finally, the work should investigate and briefly establish a connection to [1].\\n\\n[1] Wang et al. \\\"Deep Multi-view Information Bottleneck\\\". International Conference on Data Mining 2019 (https://epubs.siam.org/doi/pdf/10.1137/1.9781611975673.5)\", \"title\": \"Paper Decision\"}", "{\"title\": \"Response to Reviewer 1\", \"comment\": \"1) How limiting is the multi-view assumption? Are there well-known cases where it doesn't hold? I feel it would be hard to use, say, with text. Has this been discussed in the literature? Some pointers or discussion would be interesting.\\n\\nWe answered this above as part of shared question (1)\\n\\n2) Sketchy dataset: Could the DSH algorithm (one of the best prior results) be penalized by not using the same feature extractor you used?\\n\\nThe DSH algorithm may be limited by their use of an AlexNet instead of a VGG network, but they also fine-tune the AlexNet as part of their model, while our model directly uses the features unmodified. So it's unclear how these two changes affect the performance on the balance. We were unable to produce an updated version of their results with a VGG network because they do not provide code or hyper-parameter settings for training their models, or details on how they did the pretraining. In order to facilitate future comparison, we will release the preprocessed dataset with the 5 splits used for our experiments upon paper acceptance.\\n\\n\\n\\n3) Sketchy dataset: Can a reference for the {Siamese,Triplet}-AlexNet results be provided? 
For reproducibility, what is the selected \\\\beta?\\n\\nA reference for {Siamese,Triplet}-AlexNet results has been added to Table 1 together with the selected value of beta for MIB.\\n\\n4) I find it very hard to believe that the accuracy stays constant no matter the number of examples per label used. How can an encoder be trained on 10 images? Did I misunderstand the meaning of this number? Can this be clarified?\", \"the_mnist_experiments_are_performed_with_the_following_procedure\": \"i) Using the whole training set without any labels (i.e. unsupervised), we train the encoder network which generates a representation for each input picture. We then freeze the weights of the encoder network.\\n\\nii) Then the randomly chosen set of training labels is used to train a linear classifier to map from encoded samples to the labels. Note here that in our experiments, the chosen number of labels does indeed range from one example per label, up to and including all labels.\\n \\niii) Each classifier is evaluated on a disjoint test set. This is done by first encoding the unseen test set using the encoder trained in i) and plugging the representations into the linear classifier trained in ii).\\nIn this setting, the encoder has access to the whole (unlabeled) training set, but the classifier is trained using different numbers of examples.\\nThe MIB model produces a representation that contains approximately 2.3 nats (Figure 4) which corresponds roughly to the amount of information associated with a categorical distribution on 10 classes with uniform probability (ln10 ~2.3 nats). Further investigating this interesting property, we visualized the representation produced by our model by projecting the representation of the test set on the 2 principal components (Figure 7 added in Appendix G.4.1). 
The representation for the test digits roughly consists of 10 linearly separable clusters, which explains why 10 examples are sufficient to align cluster centroids and labels.\\n\\n5) Again for reproducibility, listing the raw numbers for the MNIST experiments would be nice.\\n\\nWe added appendix G.4.1 in which we report accuracy for different numbers of examples and mutual information estimation corresponding to the comparison reported in Figure 4.\\n\\n6) If I understood the experiments correctly, \\\"scarce label regime\\\" is used for both the MIR-Flickr and MNIST datasets, meaning two different things (number of labels per example vs number of examples per label), which is slightly confusing. \\n\\nBy \\u201cscarce label regime\\u201d, we mean a reduced number of labeled examples.\\nThis translates into slightly different settings for single-label (MNIST) and multi-label (Flickr) classification problems.\\nIn both cases, we simulate the lack of labels by picking a subset of labeled examples with the same distribution p(x,y) as the original training set.\\nSince the label distribution on MNIST is uniform, we generate training subsets by picking the same number of examples for each class.\\nThe same procedure is not possible for the Flickr experiments as the label distribution is uneven and each example has multiple labels. 
For this reason, we uniformly subsample the original training set without replacement.\\nThe x-axes in Figures 3 and 4 have been updated to consistently report the number of labeled examples used for training a classifier on top of the specified representation.\\n\\n7) Typos\\n\\nWe thank the reviewer for identifying the mistakes, which have been fixed in the current version of the paper.\"}", "{\"title\": \"Response to Reviewer 2 (Part 2)\", \"comment\": \"5.a) In Figure 4, it seems that VAE (with beta=4) outperforms MV-InfoMax.\\ni) Why the \\\"pseudo-second view\\\" does not help MV-InfoMax in this scenario?\\n\\nAt inference time both the VAE and MV-InfoMax models have access to only one view.\\nFor this reason, MV-InfoMax can exploit the second view only to identify which features of the input are kept and which ones are discarded at training time. As mentioned in Section 4, MV-InfoMax has no incentive to discard any information since any representation that contains at least the information that the two views have in common is equally optimal according to its training objective. Empirically, the representation obtained with MV-InfoMax contains a lot of superfluous information (~11 nats for MV-InfoMax and ~14 nats for InfoMax), which can justify their reduced performance.\\n\\nThe Variational Autoencoder model, on the other hand, makes use of a compression term that is regulated by the hyper-parameter beta. The compression is completely agnostic of the label, and we have no guarantees that the label information is retained, but, in practice, for any beta<=4 most of the label information is kept (~2 nats). A VAE trained with beta=4 produces a representation which contains most of the label information but is less influenced by other variations in the input (~7 nats) when compared to InfoMax and MV-InfoMax. 
We hypothesize that the slightly better performance of the VAE model can be explained by the interplay between the amount of label and superfluous information in the representations.\\nNumerical measurements of accuracy and mutual information have been added to Table 2 in the appendix to facilitate the comparison between the different models. \\n\\nii) Why VAE is clearly better than Infomax?\\n\\nThe training objectives of VAE (beta=0) and InfoMax are equivalent in their goal of maximizing the amount of information included in the representation. We believe that the difference between their performance is due to the use of different strategies to maximize the same quantity. As reported in [McAllester & Stratos (2018)] estimators based on lower-bounds of a Kullback-Leibler divergence (as the Jensen-Shannon or InfoNCE estimator used for the InfoMax models) are generally worse than the ones based on difference of entropies (as VAE with beta=0) when estimating high values of Mutual Information (i.e. the mutual information between the input and the representation). The same estimator works better for the MIB model since the amount of information to estimate and maximize is lower (i.e. the shared information across the two views).\\n\\n5.b) In Figure 3, you might also tune beta for VCCA and its variants, like what you did for VAE/VIB in a single view. \\n\\nThe models based on VCCA have multiple parameters which play the same role as beta (weights for each of the two cross-modal and uni-modal reconstruction terms). 
The interplay between these parameters is complicated and so we chose not to explore this space since we weren't sure what insight it would provide, and we didn't expect competitive results given that the beta=1 version of our model when trained with only 2% of the labels was already outperforming the best version of VCCA when it was trained with all of the data.\\n\\n6) Do you think your approach can be extended to more than two views easily? For me, it seems the extension is not trivial, as it requires O(n^2) terms in your loss for n views.\\n\\nWe addressed this above as part of shared question (2)\"}", "{\"title\": \"Response to Reviewer 2 (Part 1)\", \"comment\": \"1) In the paper, the authors said the original formulation of IB is only applicable to supervised learning. That is true, but the variational information bottleneck paper [Alexander A. Alemi et al. 2017] already showed the connection of unsupervised VIB to VAE in the appendix.\\n\\nIn supervised settings, VIB allows one to create a representation that discards irrelevant input information. The unsupervised extension of VIB mentioned in [Alemi et al. (2017)] is equivalent to the beta-VAE model, in which the beta hyper-parameter regulates the trade-off between distortion and rate [Alemi et al. (2018)]. In this setting, however, we have no guarantees that the information discarded by the model is irrelevant for the task. Our model, on the other hand, makes use of a source of redundant information (unsupervised multi-view setting) to create a representation that discards only irrelevant information.\\nWe updated our claims in the introduction to clarify the distinction between the three models. \\n\\n2) I would not consider the data augmentation used to extend single-view data to \\u201cpseudo-multiview\\u201d as a contribution. This has been done before (e.g. 
in the multiview MNIST experiment part of the paper \\\"On Deep Multi-View Representation Learning\\\").\\n\\nWe updated claim (3) in the introduction to clarify that our contribution does not consist in the definition of \\u201cpseudo-multiview\\u201d but rather in connecting the well-known data augmentation procedure to the mutual redundancy condition introduced in this work.\\nWe believe that this connection could be useful as it defines some constraints on the function class used for augmentation, which can be expressed and described using information-theoretic quantities.\\n\\n3) Which MV-InfoMax do you really compare to? You listed a few of them: (Ji et al., 2019; H\\u00e9naff et al., 2019; Tian et al., 2019; Bachman et al., 2019) in the related work section.\\n\\nThe version of MV-InfoMax reported in the experiments is based on Tian et al. (2019), with the only difference that we used an alternative mutual information estimator. The InfoNCE estimator used in [Tian et al. (2019)] allows for faster computation but usually results in slightly worse estimations than the Jensen-Shannon estimator [Poole 2018]. For this reason, we decided to consistently use the Jensen-Shannon estimator for the rest of the experiments as it results in better performance for all three models.\\n\\nWe added appendix G.4.1 to report the performance and mutual information estimation obtained by using both InfoNCE and Jensen-Shannon estimators for InfoMax, MV-InfoMax, and MIB. We also added a few sentences in the experimental section to clarify our mutual information estimation choice.\\n\\n4) I think the authors should also make a more careful claim on their results in MIR-Flickr. I\\u2019d rather not say MIB generally outperforms MV-InfoMax on MIR-Flickr, as MIB does not (clearly) outperform MV-InfoMax when enough labeled data is available for training downstream recognizers. 
But MIB does clearly outperform MV-InfoMax when scaling down the percentage of labeled samples used.\\n\\nWe updated our claim in the abstract of the paper to clarify this point by specifically noting that we provide state-of-the-art results only in the label-limited regime.\"}", "{\"title\": \"Response to Reviewer 3\", \"comment\": \"1) The proposed method only provides two views of the same underlying entity, what about 3 or more views?\\n\\nWe addressed this above as part of shared question (2)\\n\\n2) Can this method be used for multi-modality case?\\n\\nWe addressed this above as part of shared question (1)\\n\\n3) What about the time efficiency of the proposed method?\\n\\nThe proposed MIB architecture involves training 2 encoders and one auxiliary architecture as in [Tian et al. (2019), Bachman et al. (2019)]. Such models typically train faster than the ones that involve the use of decoders (e.g. VAE, MVAE, VCCA), or involve more complicated architectures consisting of multiple modules (e.g. GDH, DSH). On the other hand, since the beta hyper-parameter has to be slowly increased during training to ensure stability, the total number of training steps required for convergence is slightly bigger than InfoMax and MV-InfoMax. Both training time per epoch and the total number of training steps for convergence mostly depend on the specific choice of mutual information estimation and corresponding auxiliary neural network architecture and is a current subject of exploration in recent work [Poole et al. (2019), Belghazi et al. (2018)].\"}", "{\"title\": \"Shared Response to Reviewers\", \"comment\": \"We thank the reviewers for their useful feedback and comments. 
The following addresses questions asked by multiple reviewers:\", \"shared_question_1\": \"Applicability of Multi-View Information Bottleneck and mutual redundancy assumption\\n(Addresses Reviewer 1's first question and Reviewer 3's second question)\\n\\nThe multi-view literature has explored and discussed the applicability of the assumption that each view is sufficient for correct classification [Zhao (2017)]. They find that this assumption holds in a wide variety of circumstances resulting in a large community exploring this space. Our mutual redundancy assumption is weaker than the standard multi-view assumption, requiring only that the two views are \\\"equally certain\\\" about the label allowing it to be applied in an even wider set of circumstances. Specifically, it can be applied in multi-modal settings, such as the MIR-Flickr and SBIR (Sketchy dataset) settings shown in our experiments, as well as text-based tasks such as translation and paraphrasing where both text views contain the same relevant (semantic) information. Two additional points are also worth noting: \\n\\n1) Our method can be applied even if the mutual redundancy assumption holds only approximately (I(v_1;y|v_2)+I(v_2;y|v_1)<epsilon), by using a lower value of beta, to reduce the pressure to remove all information which is not shared. We can see this empirically with the Flickr dataset where the mutual redundancy constraint is clearly violated since some of the tags are not as predictive of the labels as the full image (see Figure 6 in appendix G.3 as an example). In this setting, the best accuracy is obtained with a smaller beta, while higher values result in less informative but more robust representations (Figure 3). \\n\\n2) If the two views are complementary, containing mostly unrelated information, then there's probably little to be gained by treating them as separate views. 
In this scenario, we can just as well concatenate them into a single view and treat them accordingly, by, for example, applying the single-view data augmentation version of our method to capture known invariances or symmetries of the specific task.\\nWe clarified this by adding a paragraph in our concluding discussion.\", \"shared_question_2\": \"Multi-View Information Bottleneck with more than 2 views\\n(Addresses Reviewer 2's 6th question and Reviewer 3's first question)\\n\\nWe believe our method can be extended to handle more than two views, but such an extension cannot be done trivially and so we have left it to future work. Specifically, the mutual redundancy condition introduced in this work is not generally transitive (example in Appendix D). As a consequence, the sufficiency guarantee implied by Theorem B.2 does not easily generalize to an arbitrary number of views, even when the mutual redundancy condition is respected by each pair of them. Thus extending to more views requires considering more restrictive assumptions that take into account higher-order interactions between the different views and the label.\"}", "{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper extends the information bottleneck method of Tishby et al. (2000) to the unsupervised setting. By taking advantage of multi-view data, they provide two views of the same underlying entity. Experimental results on two standard multi-view datasets validate the efficacy of the proposed method.\\nI have three questions about this work.\\n1. The proposed method only provides two views of the same underlying entity, what about 3 or more views?\\n2. Can this method be used for the multi-modality case?\\n3. 
What about the time efficiency of the proposed method?\"}", "{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": [\"In this paper, the authors extend the Information Bottleneck method (to build robust representations by removing information unrelated to the target labels) to the unsupervised setting. Since label information is not available in this setting, the authors leverage multi-view information (e.g., using two images of the same object), which requires assuming that both views contain all necessary information for the subsequent label prediction task. The representation should then focus on capturing the information shared by both views and discarding the rest. A loss function for learning such representations is proposed. The effectiveness of the proposed technique is confirmed on two datasets. It is also shown to work when doing data augmentation with a single view.\", \"Overall the paper is well motivated, well placed in the literature and well written. Mathematical derivations are provided. The experimental methodology follows the existing literature and seems reasonable, and the results are convincing. I do not have major negative comments for the authors. This is however not my research area and I have only limited knowledge of the existing body of work.\", \"Comments/Questions:\", \"How limiting is the multi-view assumption? Are there well-known cases where it doesn't hold? I feel it would be hard to use, say, with text. Has this been discussed in the literature? 
Some pointers or discussion would be interesting.\", \"Sketchy dataset: Could the DSH algorithm (one of the best prior results) be penalized by not using the same feature extractor you used?\", \"Sketchy dataset: Can a reference for the {Siamese,Triplet}-AlexNet results be provided?\", \"Sketchy dataset: for reproducibility, what is the selected \\\\beta?\", \"I find it very hard to believe that the accuracy stays constant no matter the number of examples per label used. How can an encoder be trained on 10 images? Did I misunderstand the meaning of this number? Can this be clarified?\", \"Again for reproducibility, listing the raw numbers for the MNIST experiments would be nice.\", \"If I understood the experiments correctly, \\\"scarce label regime\\\" is used for both the MIR-Flickr and MNIST datasets, meaning two different things (number of labels per example vs number of examples per label), which is slightly confusing.\"], \"typos\": \"\", \"page_1\": \"it's -> its\", \"page_6\": \"the the -> the\", \"page_7\": \"classifer -> classifier\", \"page_8\": \"independently -> independent\"}", "{\"rating\": \"8: Accept\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"This is a good multiview representation learning paper with new insights. The authors propose to learn variables z_1 and z_2, which are consistent, contain view-invariant information but discard as much view-specific information as possible.\\nThe paper relies on mutual information estimation and is reconstruction-free. It is mentioned in some previous works (e.g. Aaron van den Oord et al. 
2018), that reconstruction loss can introduce bias that has a negative effect on the learned representation.\\nCompared to existing multiview representation learning approaches that try to maximize the mutual information between the learned representation and the view(s), this paper clearly defines the superfluous information that we should try to throw away and shows how to obtain a sufficient learned representation for the output. The authors also draw clear connections between a few existing (multiview) representation learning methods and their proposed approaches.\\nThe experimental results on the right side of Figure 3 deliver a very interesting conclusion. In the low-resource case, a robust feature (obtained by using the larger beta, discarding more superfluous information) is crucial for achieving good performance, while when the amount of labeled data is sufficient, the reverse holds.\", \"here_are_my_major_concerns\": \"1.\\tIn the paper, the authors said the original formulation of IB is only applicable to supervised learning. That is true, but the variational information bottleneck paper [Alexander A. Alemi et al. 2017] already showed the connection of unsupervised VIB to VAE in the appendix.\\n2.\\tI would not consider the data augmentation used to extend single-view data to \\u201cpseudo-multiview\\u201d as a contribution. This has been done before (e.g. in the multiview MNIST experiment part of the paper \\\"On Deep Multi-View Representation Learning\\\").\\n3.\\tWhich MV-InfoMax do you really compare to? You listed a few of them: (Ji et al., 2019; H\\u00e9naff et al., 2019; Tian et al., 2019; Bachman et al., 2019) in the related work section.\\n4.\\tI think the authors should also make a more careful claim on their results in MIR-Flickr. \\nI\\u2019d rather not say MIB generally outperforms MV-InfoMax on MIR-Flickr, as MIB does not (clearly) outperform MV-InfoMax when enough labeled data is available for training downstream recognizers. 
But MIB does clearly outperform MV-InfoMax when scaling down the percentage of labeled samples used.\\n5.\\tRegarding baselines/experiments\\na.\\tIn Figure 4, it seems that VAE (with beta=4) outperforms MV-InfoMax. Why the \\\"pseudo-second view\\\" does not help MV-InfoMax in this scenario? Why VAE is clearly better than Infomax?\\nb.\\tIn Figure 3, you might also tune beta for VCCA and its variants, like what you did for VAE/VIB in a single view. \\n6.\\tDo you think your approach can be extended to more than two views easily? \\nFor me, it seems the extension is not trivial, as it requires O(n^2) terms in your loss for n views.\\nBut this is minor.\"}
Bke89JBtvB
Batch-shaping for learning conditional channel gated networks
[ "Babak Ehteshami Bejnordi", "Tijmen Blankevoort", "Max Welling" ]
We present a method that trains large capacity neural networks with significantly improved accuracy and lower dynamic computational cost. This is achieved by gating the deep-learning architecture on a fine-grained-level. Individual convolutional maps are turned on/off conditionally on features in the network. To achieve this, we introduce a new residual block architecture that gates convolutional channels in a fine-grained manner. We also introduce a generally applicable tool batch-shaping that matches the marginal aggregate posteriors of features in a neural network to a pre-specified prior distribution. We use this novel technique to force gates to be more conditional on the data. We present results on CIFAR-10 and ImageNet datasets for image classification, and Cityscapes for semantic segmentation. Our results show that our method can slim down large architectures conditionally, such that the average computational cost on the data is on par with a smaller architecture, but with higher accuracy. In particular, on ImageNet, our ResNet50 and ResNet34 gated networks obtain 74.60% and 72.55% top-1 accuracy compared to the 69.76% accuracy of the baseline ResNet18 model, for similar complexity. We also show that the resulting networks automatically learn to use more features for difficult examples and fewer features for simple examples.
[ "Conditional computation", "channel gated networks", "gating", "Batch-shaping", "distribution matching", "image classification", "semantic segmentation" ]
Accept (Poster)
https://openreview.net/pdf?id=Bke89JBtvB
https://openreview.net/forum?id=Bke89JBtvB
ICLR.cc/2020/Conference
2020
{ "note_id": [ "3oNYtwtzz", "rkeAl96dir", "Bkl6sIs_sH", "rkgAVHjOjr", "SygoXqOR5r", "rJe55cygqH", "SJepfyO6Fr", "H1l-C03WYB", "Bkekd9Q0dr" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review", "official_comment", "comment" ], "note_created": [ 1576798734870, 1573603830215, 1573594789291, 1573594421826, 1572928035144, 1571973778284, 1571811092755, 1571045065349, 1570810471169 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1879/Authors" ], [ "ICLR.cc/2020/Conference/Paper1879/Authors" ], [ "ICLR.cc/2020/Conference/Paper1879/Authors" ], [ "ICLR.cc/2020/Conference/Paper1879/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper1879/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper1879/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper1879/Authors" ], [ "~Shanghua_Gao1" ] ], "structured_content_str": [ "{\"decision\": \"Accept (Poster)\", \"comment\": \"The paper describes a method to train a convolutional network with large capacity, where channel-gating (input conditioned) is implemented - thus, only parts of the network are used at inference time. The paper builds over previous work, with the main contribution being a \\\"batch-shaping\\\" technique that regularizes the channel gating to follow a beta distribution, combined with L0 regularization. The paper shows that ResNet trained with this technique can achieve higher accuracy with lower theoretical MACs. Weakness of the paper is that more engineering would be required to convert the theoretical MACs into actual running time - which would further validate the practicality of the approach.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Response to Reviewer #3\", \"comment\": [\"We thank the reviewer for their thoughtful review and constructive suggestions. Responses are included inline:\", \"We moved Figure 7 to the main paper as suggested. 
A regular neural network does show similar patterns. However, the interpretability of the model behavior could potentially become easier with gated networks. Training neural networks with gates helps to better align the filters with the actual features, and dynamic allocation of filters encourages the relevance to be distributed only to a limited set of units. Analyzing the execution patterns of the gates can potentially make interpretation easier. For example, gates that are mostly active refer to feature extractors that are general and not task/class dependent. Gates which are very selective and rarely execute are more specialized and refer to features that appear only for specific tasks/classes.\", \"The gathering of active channels doesn't incur much overhead, as you can see in our CPU tables. The GPU may not be the only target device, and it is possible to make a non-gathering implementation of this. We would like to refer the reviewer to our first point in our response to reviewer 1. We tend to disagree that a 2X reduction in MACs is not impressive (e.g., in the MobileNet V3 paper, the improvements are below a 2X decrease in latency at the same accuracy as MobileNet V2).\", \"We added ResNet14 and ResNet26 to create more points for the baseline. The models which are 10X and 20X wider have very large MAC usage, and including more points would push the rest of the curves to a small portion of the figure. We still increased the margin so the results of the wider networks are more visible. Please see the revised version in the PDF (Fig3c). We are providing the full version in this link: https://ibb.co/xJPCwrV\", \"We agree with the reviewer and grouped the curves by giving them similar color tones so they are better matched. We also placed the ablation studies in a separate sub-figure to make the original figures less busy. Please find the improved version in the revised version of our paper. 
Any suggestions from the reviewer on how to improve the graphs are welcome.\", \"We expect the results for the ResNet50-L0 models to follow the same trend. We started the experiments, but generating multiple points to show the accuracy/MAC trade-off will unfortunately not be ready by the rebuttal deadline.\", \"The L1 loss suggested by the reviewer is actually a loss we tried first. While the performance of the models was worse than training with the L0 loss, we found that setting a target rate as in $|E(x) - 0.5|$ causes an undesirable property. The output distribution of a large number of gates becomes unimodal and centered at 0.5. Such gates are not conditional and act as random dropout gates. By adding the Gumbel noise and taking the argmax, the gate is on half the time and off half the time, and by this the loss objective is easily minimized. Minimizing this loss, therefore, will not result in the formation of conditional gates. We compared the results achieved by this loss, the L0 loss, and the batch-shaping loss for the CIFAR-10 experiments, and they can be found in this link: https://ibb.co/r6jMBqq\", \"Great care was put into the set-up, and there are good reasons for each step. We found that introducing the L0 loss too early in training can permanently deactivate gates (always off) and hinder the learning of useful features. It can basically reduce the effective network capacity very early in the training and potentially hurt performance. The L0 loss is, however, required if we want to save more computation and is closer to the actual objective we're interested in: trading off as much compute as possible, conditionally, for performance. The conditionality itself is a means to an end, and the L0 loss better reflects the actual practical trade-off. For semantic segmentation, we primarily wanted to explore if channel gated networks are amenable to the semantic segmentation task and did not focus on saving compute. 
The experimental setup for CIFAR-10 is, however, similar to ImageNet, and L0 was used during training with the same scheme.\", \"Thank you for pointing us to these references. We added them to the related work section!\"]}", "{\"title\": \"Response to Reviewer #2\", \"comment\": \"We thank the reviewer for their review and positive assessment of our work.\"}", "{\"title\": \"Response to Reviewer #1\", \"comment\": [ \"We would like to thank Reviewer 1 for their review and constructive comments. Our responses inline:\", \"There can be two possible ways to implement this. An inefficient implementation for GPU would do the gather op as in the CPU case (details of the CPU implementation below). However, this would lead to a lot of in-GPU memory movement. An efficient implementation could take the boolean mask in the kernel, and only do computations for the output channels that are active. If the convolutional computation is tiled properly over the output dimension, this should result in an output-channel-sparse convolution with negligible overhead. This can definitely be done, but the actual implementation for GPUs is beyond the scope of this paper. We would also like to mention that there are a lot of other devices/cores that run neural networks, and we feel our results do not hinge solely on the availability of a GPU kernel. Many networks on mobile devices are still run on the CPU, where our on-device results hold perfectly. Other devices, like Qualcomm's Snapdragon HVX, can easily use the conditionality, since the amount of parallel computation that is done over many cores is lower, and the memory access more sequential. For many such mobile use-cases, the kernel implementation of our work has almost no overhead.\", \"For the GPU measurements, we first recorded the gating patterns of all images in the validation set. For each input image, a sparse model (with fewer convolution kernels in each layer) was defined based on the gating pattern. 
The computation time was then reported for the sparse model. The overhead caused by the gating modules is included in the wall-time calculation.\", \"CPU implementation: As stated in the paper, the results in the table are for an actual CPU implementation and not simulated. In our experimental setting, the actual latency is shown in the table, compared to the theoretical FLOP reduction.\", \"Consider $W_{1} \\\\in R^{c_{1}^{in} \\\\times {c_{1}}^{out} \\\\times k \\\\times k}$ and $W_{2} \\\\in R^{c_{2}^{in} \\\\times {c_{2}}^{out} \\\\times k \\\\times k}$ representing the weight tensors of the first and second layers in a ResNet block, where ${c_{1}}^{out} = c_{2}^{in}$. For each ResNet block, we first use the output of the gates to generate a mask. Using this mask, we slice the original weight tensor of the first layer in the block and apply conv2d on the input featuremap using the sliced weight tensor $W_{1} \\\\in R^{c_{1}^{in} \\\\times c^{slice} \\\\times k \\\\times k}$. We next apply the mask to the first batch normalization layer. The input to the second layer is a featuremap with a lower number of channels. Using the same mask, we slice the weight tensor of the second layer, $W_{2} \\\\in R^{c^{slice} \\\\times {c_{2}}^{out} \\\\times k \\\\times k}$, and apply the conv2d layer using this tensor.\", \"We added the results and timings of ResNet50 to Table 1.\", \"Model | GPU (ms) | CPU (ms) | Params | MACs | Top-1 Acc\", \"ResNet50 | 1.75\\u00b13.0e-5 | 184.05\\u00b11.8e-4 | 25.55M | 4.09G | 0.761\", \"We also added details of our actual CPU implementation and the simulation for GPU timing to the appendix.\"]}", "{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper's focus is on conditional channel-gated networks. 
Conventional ConvNets process images by computing all the filters, which can be redundant since not all the filters are necessary for a given image. To eliminate this redundancy, this work aims at computing a channel gating on-the-fly, to determine what filters can be turned off. The core contribution of the paper is to propose a \\\"batch-shaping\\\" technique that regularizes the channel gating to follow a beta distribution. Such regularization forces channel gates to either switch on or off. Combined with l_0 regularization, the proposed training technique improves the performance of channel gating: ResNet trained with this technique can achieve higher accuracy with lower theoretical MACs.\\n\\nOverall, the paper proposes a simple yet effective trick for training gated networks. The paper is well written, and experiments are sufficient in demonstrating the effectiveness of the method. \\n\\nThe main concern for the paper is whether such granular control on the convolution filters can be practically useful. For conventional ConvNets, whose computation is fixed regardless of the input, scheduling the computation on the hardware is static and can therefore be easily optimized. When it comes to dynamic networks, especially at such a granular level, it is not clear whether the theoretical complexity reduction can directly translate to actual efficiency (such as latency) improvement. In section 5.2, the author mentions \\\" We simulated this for the GPU in the same table.\\\". Can you elaborate on how you \\\"simulated\\\" the GPU time? How is the simulation done? How well does it predict the actual implementation? Can you implement an efficient kernel for this and show the actual speedup? For the CPU runtime, can you explain in more detail the experimental setting? Can you report the actual latency improvement against theoretical FLOP reduction? 
For the result in Table 1, why is the result of the original ResNet50 not reported?\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #3\", \"review\": \"Summary: This paper studies conditional channel gated networks. The network is designed to disable certain channels depending on the inputs. This can be used to save computation. The idea is built on top of \\u201cConvolutional Networks with Adaptive Inference Graphs.\\u201d The authors propose the technique of batch shaping to encourage the marginal statistics of the gating to be selective to different inputs. With similar inference time, the gated network can achieve better accuracy since it can afford to add more layers to the network.\", \"detailed_comments\": [ \"The results show quite a significant difference compared to ConvNet-AIG, which demonstrates that batch-shaping is very helpful. Table 1 shows a 3% increase in accuracy compared to ResNet-18 at similar inference time. It is good that the paper reports wall-clock time measurements.\", \"It\\u2019s good to see some visualizations in the paper, including the image samples and gate locations. I recommend moving Figure 7 to the main paper. A regular neural network can also be used to visualize the sensitivity of patterns of specific neurons. What would be the qualitative differences?\", \"1-2x reduction in MAC is not super impressive, especially taking into consideration the overhead for gathering the active channels for convolution.\", \"Figure 3a) plot is cut off on the right. The baselines only have a single point in the plot; I guess it is also valid to simply add/remove layers in the baseline models to generate a curve in the plot.\", \"ResNet-50-L0 is missing in Figure 3b). It would be better if the plots could be grouped better. 
Currently there are too many lines and it is hard to understand the differences.\", \"It would be good to see comparisons to some other alternatives to batch shaping. For example, one can penalize so that the average value is around 0.5 by using an L1 loss |E(x) \\u2013 0.5|.\", \"The ImageNet experiment has a very complicated set-up, where the L0 loss is applied in the middle of the training. Is this necessary? How important is this step? What would happen if the L0 loss is not applied in ImageNet? And what would happen if the L0 loss is applied from the beginning? Why is the L0 loss not applied in other experiments (e.g. CIFAR or Cityscapes)? Would the L0 loss be beneficial on these benchmarks as well?\", \"There are a number of related works on adaptive spatial attention for faster inference, which can be included in the related work section.\", \"1) M. Figurnov, M. D. Collins, Y. Zhu, L. Zhang, J. Huang, D. P. Vetrov, and R. Salakhutdinov. Spatially adaptive computation time for residual networks. CVPR, 2017.\", \"2) X. Li, Z. Liu, P. Luo, C. C. Loy, and X. Tang. Not all pixels are equal: Difficulty-aware semantic segmentation via deep layer cascade. CVPR, 2017.\", \"3) M. Ren, A. Pokrovsky, B. Yang, R. Urtasun. SBNet: Sparse Blocks Network for Fast Inference. CVPR, 2018.\"], \"conclusion\": \"The batch shaping technique introduced in this paper brings significant improvements to networks that exploit conditional inference. Further understanding of the effect of the L0 loss and other alternative loss functions is recommended. My overall rating is weak accept.\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper describes a method to train a network with large capacity, only parts of which are used at inference time in an input-dependent manner. 
This leads to accuracy gains without an increase in inference cost. Fine-grained conditional selection is done, using gating of individual convolutional feature maps. A new method termed \\u201cbatch shaping\\u201d, which regularizes the network to encourage that the features are used conditionally, is introduced and combined with an additional regularizer adapted from prior work.\\n\\nThere has been a large body of work along the same research direction. Few of the prior works have focused on fine-grained selection of features, and the ones that have, such as Gao et al, have used a fixed number of features (top-k) across examples instead of dedicating more computation to more difficult examples. In addition, the current work outperforms related prior work through the use of the new regularization technique (batch shaping).\\n\\nThe paper contains a thorough comparison to related prior works on three datasets. It also ablates the contribution of the separate aspects of the method -- the fine-grained gating, the batch shaping regularizer, and the L0 penalty. The results demonstrate that all of these aspects contribute to improvements over prior work and result in good accuracy/efficiency trade-offs.\\n\\nAlthough this research is not a large departure from prior work, the novelty of the batch shaping regularizer, the thorough empirical study, the experimental gains, and the clarity of the paper make this a solid contribution.\"}", "{\"comment\": \"Thank you for your comment and for catching this notation error. The output of the fully connected layer can indeed take negative values and should be denoted as $\\\\hat{\\\\pi_{k}}$ rather than $\\\\pi_k$. We define the logits $\\\\hat{\\\\pi_{k}}=ln({\\\\pi_k})$. 
We will correct the notation in the revised version.\", \"title\": \"Response to question about Eq.5\"}", "{\"comment\": \"Thanks for your excellent work.\\nThe \\u03c0i is the output of the second fc layer, according to the statement \\\"The second fully connected layer linearly projects the features to unnormalized probabilities \\u03c0k\\\". It seems that \\u03c0i might be negative, and ln() with a negative number causes NaN.\\nHow can you avoid this problem?\", \"title\": \"question about ln(\\u03c0i) in Eq.5\"}" ] }
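The mask-based weight slicing that the authors describe in their CPU-implementation response above (slicing the first layer's output channels and the second layer's input channels with the same gate mask) can be sketched in a few lines. This is a hypothetical NumPy illustration, not the authors' code: it uses 1x1 convolutions so each layer reduces to a per-pixel matrix multiply, and the names `conv1x1` and `gate` are invented for the sketch.

```python
import numpy as np

# Hypothetical sketch of mask-based weight slicing for a gated block,
# using 1x1 convolutions so each layer is a per-pixel matrix multiply.
def conv1x1(x, w):
    # x: (c_in, h, w), weight w: (c_out, c_in) -> output: (c_out, h, w)
    c_in, h, wd = x.shape
    return (w @ x.reshape(c_in, -1)).reshape(w.shape[0], h, wd)

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 5, 5))      # input featuremap
w1 = rng.standard_normal((8, 4))        # first layer: 8 output channels
w2 = rng.standard_normal((3, 8))        # second layer consumes those 8

gate = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=bool)  # gate decisions

# Dense path: compute every channel, then zero out the gated-off ones.
dense_mid = conv1x1(x, w1)
dense_mid[~gate] = 0.0
dense_out = conv1x1(dense_mid, w2)

# Sliced path: keep only the active rows of w1 and the matching columns
# of w2, so the gated-off channels are never computed at all.
sliced_out = conv1x1(conv1x1(x, w1[gate]), w2[:, gate])

assert np.allclose(dense_out, sliced_out)
```

The final check confirms that slicing is numerically equivalent to computing all channels and zeroing the gated-off ones, which is why only the active channels need to be materialized at inference time.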
rJg851rYwH
Making the Shoe Fit: Architectures, Initializations, and Tuning for Learning with Privacy
[ "Nicolas Papernot", "Steve Chien", "Shuang Song", "Abhradeep Thakurta", "Ulfar Erlingsson" ]
Because learning sometimes involves sensitive data, standard machine-learning algorithms have been extended to offer strong privacy guarantees for training data. However, in practice, this has been mostly an afterthought, with privacy-preserving models obtained by re-running training with a different optimizer, but using the same model architecture that performed well in a non-privacy-preserving setting. This approach leads to less than ideal privacy/utility tradeoffs, as we show here. Instead, we propose that model architectures and initializations are chosen and hyperparameter tuning is performed, ab initio, explicitly for privacy-preserving training. Using this paradigm, we achieve new state-of-the-art accuracy on MNIST, FashionMNIST, and CIFAR10 without any modification of the fundamental learning procedures or differential-privacy analysis.
[ "differential privacy", "deep learning" ]
Reject
https://openreview.net/pdf?id=rJg851rYwH
https://openreview.net/forum?id=rJg851rYwH
ICLR.cc/2020/Conference
2020
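The paper above trains with DP-SGD, the differentially-private optimizer discussed throughout the review thread that follows. As a rough illustration of its core primitive, per-example gradient clipping followed by Gaussian noise, here is a hypothetical NumPy sketch; it is not the authors' code, and `clip_norm` and `noise_mult` stand in for the usual DP-SGD clipping bound and noise multiplier.

```python
import numpy as np

# Hypothetical sketch of one DP-SGD step: clip each per-example gradient
# to norm at most clip_norm, average, then add Gaussian noise. The noise
# standard deviation is noise_mult * clip_norm, divided by the batch size
# because here it is added to the averaged (not summed) gradient.
def dp_sgd_step(per_example_grads, clip_norm, noise_mult, rng):
    clipped = [g / max(1.0, np.linalg.norm(g) / clip_norm)
               for g in per_example_grads]
    mean = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_mult * clip_norm / len(clipped),
                       size=mean.shape)
    return mean + noise

rng = np.random.default_rng(0)
grads = [rng.standard_normal(3) * s for s in (0.1, 5.0, 50.0)]

# With the noise disabled, the averaged clipped gradient has norm
# bounded by clip_norm, regardless of how large raw gradients are.
step = dp_sgd_step(grads, clip_norm=1.0, noise_mult=0.0, rng=rng)
assert np.linalg.norm(step) <= 1.0 + 1e-9
```

With `noise_mult = 0` the update is just the average of clipped gradients, whose norm is bounded by `clip_norm`; the privacy guarantee then comes from the scale of the added noise relative to that bound.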
{ "note_id": [ "vtSBjN2fdJ", "HkgRvJUhor", "H1xIuTmhoB", "SJgELT72sH", "ryeAJTm2sB", "SyePp2XnsB", "HylXFnQ2oH", "BkeOIhXniB", "Skx5Yj8pcr", "B1gcmtH15B", "SJxqstEyqB" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798734838, 1573834598186, 1573825901757, 1573825868002, 1573825766056, 1573825727325, 1573825658870, 1573825615562, 1572854658122, 1571932449696, 1571928482226 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1878/Authors" ], [ "ICLR.cc/2020/Conference/Paper1878/Authors" ], [ "ICLR.cc/2020/Conference/Paper1878/Authors" ], [ "ICLR.cc/2020/Conference/Paper1878/Authors" ], [ "ICLR.cc/2020/Conference/Paper1878/Authors" ], [ "ICLR.cc/2020/Conference/Paper1878/Authors" ], [ "ICLR.cc/2020/Conference/Paper1878/Authors" ], [ "ICLR.cc/2020/Conference/Paper1878/AnonReviewer4" ], [ "ICLR.cc/2020/Conference/Paper1878/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper1878/AnonReviewer1" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"This paper presents experimental evidence that learning with privacy requires optimization of the model settings (architectures and initializations) that are not identical to those used when learning without privacy. While acknowledging potential usefulness of this work for practitioners, the reviewers expressed several important concerns such as (1) lack of SOTA baseline comparisons, (2) lack of clarity of the empirical evaluation protocols, (3) large models (that are widely used in practice) have not been studied in the paper, (4) low technical novelty. The authors have successfully addressed some of the concerns regarding (1) and (2). 
However (3) and (4) make it difficult to assess the benefits of the proposed approach for the community and were viewed by AC as critical issues. We hope the detailed reviews are useful for improving and revising the paper.\", \"title\": \"Paper Decision\"}", "{\"title\": \"New weight-scaling experiments\", \"comment\": \"Following the reviewer's advice to make it easy to understand the worst-case potential privacy implications of weight scaling, we performed new weight-scaling experiments, now shown in Figure 4. These experiments all transfer the mean and variance from seed models previously-trained with differential privacy using DP-SGD with the same clipping and noise settings. The experiments exhibit the same type of improvements as shown in the original submission where the transfer was performed from a non-private model.\\n\\nBy their construction, the worst-case privacy loss is at most doubled in these experiments. Since we are post-processing an already-DP model, this transfer does not require any complexities, such as training multiple models, doing PATE-style PTR, etc., although we still discuss this as a means of extracting statistics. (Of course, since only six floating-point values---three mean and variance pairs---are extracted from a model trained with privacy, and used to bias a random initialization process, the real-world privacy loss is likely to increase by less than a factor of two.)\"}", "{\"title\": \"Second part of the comment.\", \"comment\": \"> 4.2 Initialization by Weight Scaling proposes that judiciously scaling initial weights can improve model privacy/utility. This scaling is done by \\u201ctransfer from one differentially private model to another\\u201d, where \\u201cDP-SGD can be applied to train a model with high utility, but less than ideal privacy\\u201d and then extracting the relevant information from there in order to initialize a new differentially private model that will be trained with strong privacy guarantees. 
It is claimed that \\u201cthis extraction can be done in a differentially-private manner, e.g., as in Papernot et al. (2018), although the privacy\\nrisk of summary statistics that drive random initialization should be vanishing\\u201d. It is unclear to me how this extraction of summary statistics should be done in such a way that doesn\\u2019t consume a significant portion of the privacy budget. If there is such a way, it should be clearly stated and its effect on the privacy budget should be explicitly incorporated into this paper\\u2019s results.\\n\\nWe\\u2019ve clarified in the paper how extracting these summary statistics should be done without consuming a significant portion of the privacy budget. Following [Y, Z], one can use the formal framework of sub-sample and aggregate in conjunction with Propose-Test-Release (PTR) for this selection. Given privacy parameters (\\\\epsilon, \\\\delta), the algorithm first splits the training data into disjoint subsets, and trains models independently on each of the splits. Using these trained models, the parameter is chosen via consensus voting with differential privacy. To ensure privacy, a consensus of majority + (1/\\\\epsilon) * \\\\log(1/\\\\delta) is needed. Notice that if the training data set is large, and there is a strong consensus, then the cost towards privacy is very low.\\n\\n[Y] Bassily, Raef, Om Thakkar, and Abhradeep Guha Thakurta. \\\"Model-agnostic private learning.\\\" Advances in Neural Information Processing Systems. 2018.\\n\\n[Z] Papernot, Nicolas, et al. \\\"Scalable private learning with pate.\\\" ICLR (2018).\\n\\n> Minor: The statement that \\u201cSuch accuracy loss may sometimes be inevitable\\u201d on page 1 should include a reference; e.g., Feldman\\u2019s \\u201cDoes Learning Require Memorization? A Short Tale about a Long Tail\\u201d paper (https://arxiv.org/abs/1906.05271).\\n\\nFeldman\\u2019s paper provides excellent support for our statement. 
We added a citation to it, as well as to https://arxiv.org/abs/1905.12101 which makes related empirical observations.\"}", "{\"title\": \"Thank you for the review.\", \"comment\": \"> My major concern with this paper lies in the experimental methodology. Specifically, most experiments are based on varying a single component while leaving all other components the same. While this is certainly the scientifically-valid way to demonstrate the component\\u2019s influence on the entire system given the other fixed components, it doesn\\u2019t convincingly demonstrate that the component has this influence across all (or at least most) reasonable configurations of the other components.\\nThis can be made concrete using many experiments in the paper, but let\\u2019s take the activation functions experiment of 3.2 as an example. Here, it is shown that after fixing the privacy guarantee, model structure, training procedure, and hyperparameters -- the tanh activation performs better than the ReLU activation. However, suppose instead that we fix all of these components except the hyperparameters; it may then be the case that the ReLU activation is capable of outperforming the tanh activation when its hyperparameters are chosen carefully. In other words, to validly compare the two activations and reach a convincing conclusion, they should be compared against each other in their own individually-best settings (e.g., the results induced by the optimal hyperparameters for ReLU versus the results induced by the optimal hyperparameters for tanh). This is similar to the problem addressed in Avent et al.\\u2019s \\u201cAutomatic Discovery of Privacy\\u2013Utility Pareto Fronts\\u201d paper (https://arxiv.org/abs/1905.10862). \\n\\nWe thank the reviewer for their careful review and suggestions on methodology and exposition. 
Due to poor presentation of our experimental methodology, it was unclear in our original submission that we had simultaneously finetuned all components, explored individually in the body of the paper, to produce the summary table included in the conclusion. We\\u2019ve updated the language in the conclusion to explain how this summary table for instance presents experimental results that compare ReLU with tanh in their own individually-best setting, by first fixing the activation function and then fine-tuning all other hyperparameters (model structure, training procedure, and hyperparameters). It shows that tanh consistently outperforms ReLU: e.g., with 98.1% test accuracy instead of 96.6% test accuracy on MNIST for the same privacy guarantee, even in their own individually-best settings. We also added a citation to Avent et al. in the main body of the paper.\\n\\n> The specific technical details on some experiments were either difficult to find or were lacking. Given that this is fundamentally an experimental paper, having these details clearly listed somewhere for reference is important, even if relegated to an Appendix. Although this applies more broadly to most of the experiments, we can use Section 3 as an example again: the details on the experiment in 3.1 were found in the caption of Figure 1, whereas I would have expected them either in the main body or clearly listed in their own table; the details on the experiment in 3.2 specify that everything is identical between the tests of the two activation functions, however it is never specified exactly what is being altered (and by how much) to vary the \\\\epsilon value.\\n\\nWe added missing details of the experimental setup in an Appendix.\"}", "{\"title\": \"Second part of comment.\", \"comment\": \"> The baselines are not enough. Of course, Abadi et al.\\u2019s work is outstanding in handling the privacy learning of deep networks. It has been further developed by the following researchers. 
For example, [B] and [C]. Does the conclusion still hold for these algorithms?\\n\\nWe regret not having provided more up-to-date baselines than those in Abadi et al., and thank the reviewer for pointing this out. \\n\\nFor MNIST, the previous SotA in terms of privacy and utility is---as far as we know---that shown in the TensorFlow Privacy GitHub repo: 95% accuracy at epsilon 1.19; 96.6% at epsilon 3.01; and 97% accuracy at epsilon 7.10 (see https://github.com/tensorflow/privacy/tree/master/tutorials).\\n\\nFor CIFAR10, the previous SotA is harder to nail down, because of the number of incomparable assumptions and privacy definitions in previous work. However, for the setup that we use, which was also used in Abadi et al. (CIFAR-10 training with transfer learning from CIFAR-100), we know of no result that has improved on the initial results of Abadi et al.\\n\\nBoth [B] and [C] are based on standard DP-SGD; therefore, all of our techniques should be directly applicable. With regard to baselines, the MNIST accuracy is less than 94% in [B] and less than 60% in [C, Figure 5], and the CIFAR10 accuracy is less than 45% in both papers, for all privacy levels that the papers investigate (which seem along the same lines as in our work).\\n\\nBelow are further, recent papers that we can discuss in the camera ready, if the reviewers feel this is appropriate.\\n\\nAs another baseline, we considered discussing [D], as it reports very high accuracy and privacy for MNIST. However, we decided against it, because over the last year-and-a-half we have been unable to replicate the paper\\u2019s results. Even when using code provided by the authors, we have only trained models with privacy/accuracy tradeoffs much worse than those reported elsewhere (e.g., in TensorFlow Privacy). 
As part of writing this ICLR response, we contacted the authors again, and they told us that they are now themselves unable to reproduce their results.\\n\\nOne of several recent papers based on distributed/federated means of learning is [E], which reports MNIST accuracy of 97% and CIFAR10 accuracy of 94% for a 3-layer CNN, both at epsilon 4.65. However, [E] does not fully explain how those results can have come about or, in particular, how the leader/follower architecture may have strengthened the per-example signal and changed the meaning of epsilon upper bounds. Distributed learning can radically change privacy definitions and transmit information in unexpected ways (e.g., see [G]); therefore, we decided against comparing against [E] until we have a better understanding of the work.\\n\\nFinally, [F] describes another distributed mechanism for learning with privacy that reports excellent privacy/utility tradeoffs for both CIFAR10 and MNIST (although the MNIST results are worse than in our work). However, the results in [F] do not seem to be using the same definition of DP epsilons as is standard, and as are used in our work. This can be seen in Figure 10, where model accuracy remains unchanged despite epsilon increasing from 1 to 10. This can also be seen in Table II, where there is a multi-percentage gap between training accuracy and test accuracy---even though a defining characteristic of DPML with significant epsilons is that it should eliminate any such gap, and does do so in practice.\\n\\n[B] Yu et al. Differentially Private Model Publishing for Deep Learning. IEEE S&P, 2019 (version at https://arxiv.org/abs/1904.02200)\\n\\n[C] Phan et al. Heterogeneous Gaussian Mechanism: Preserving Differential Privacy in Deep Learning with Provable Robustness. Joint Conf on AI, 2019 https://arxiv.org/abs/1906.01444\\n\\n[D] Li et al. 
On Connecting Stochastic Gradient MCMC and Differential Privacy, PMLR 2019 (http://proceedings.mlr.press/v89/li19a.html; Arxiv version is incorrect, according to authors.)\\n\\n[E] Cheng et al. LEASGD: an Efficient and Privacy-Preserving Decentralized Algorithm for Distributed Learning, PPML 2018, at https://arxiv.org/pdf/1811.11124.pdf \\n\\n[F] Arachchige et al. Local Differential Privacy for Deep Learning, Internet of Things Journal, 2019, at https://arxiv.org/pdf/1908.02997.pdf\\n\\n[G] Melis et al. Exploiting Unintended Feature Leakage in Collaborative Learning, IEEE S&P 2019, https://arxiv.org/pdf/1805.04049.pdf\"}", "{\"title\": \"Thank you for the review.\", \"comment\": \"> As far as empirical research, the compared techniques are too few. What if we use those less popular techniques, for example, the RMSprop optimization method?\\n\\nWe believe the techniques considered are sufficiently broad in scope because they enable us to improve the state-of-the-art significantly. In particular, we focused on SGD and Adam because they are the most popular optimizers. While we did not report results for other optimizers, our preliminary experiments show similar conclusions for other optimizers such as SGD with momentum or RMSprop. Recall that training with differential privacy is slow (in terms of wall-clock training time) because one needs to compute per-example gradients of the loss rather than gradients of the average loss across a batch of examples. This limited our ability to repeat all of our experiments with other optimizers within the scope of the rebuttal process.\\n\\n> The model capacity of neural networks, especially deep networks, has some non-trivial relation to the number of filters or the number of parameters. It is important to quantify such a relation. A good reference might be [A]. 
Briefly, the generalization performance may not be monotonic against the number of parameters.\\n\\nThank you for the pointer, we added a reference to [A] in our revised manuscript. As shown in Figure 4 of [A], the number of parameters is a good proxy for the model\\u2019s capacity in our setting. However, we acknowledge that the relationship between generalization performance and the number of parameters is not always monotonic. In fact, we believe that a future study of how different measures of capacity can inform the design of model architectures for private learning would be fruitful. We\\u2019ve updated the paper to reflect insights provided by your comment.\\n\\n[A] Neyshabur, B., Bhojanapalli, S., Mcallester, D., & Srebro, N. (2017). Exploring Generalization in Deep Learning. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, & R. Garnett (Eds.), Advances in Neural Information Processing Systems 30 (pp. 5947\\u20135956).\"}", "{\"title\": \"Second part of comment\", \"comment\": \"> Conclusions relative Adam vs SGD seem to repeat what's already known or been discussed about these methods outside of the DP topic. May be worth highlighting that when one knows how to set learning rates for SGD (may be via learning rate scheduler, not discussed in the paper but practically relevant) then SGD may be as good or slightly better than Adam. However note, adaptive optimizers are often preferred for their ease of use as no tweaking and searching for an optimal learning rate is required. Would not this problem be detrimental for SGD optimization affecting the privacy budget?\\n\\nThe reviewer correctly points out that our Adam and SGD observations confirm what has been discussed before outside of differential privacy (we had tried to make this connection explicit through the reference to Wilson et al. at the end of Section 5.1 but are open to suggestions on how to make this more clear). 
Those observations deserve to be revisited in the context of DPML, where noise and clipping are confounding factors. For DPML, we wanted to raise awareness of how---particularly in later epochs---SGD may outperform Adam due to the accumulation of noise injected to preserve privacy (see Figure 5-right), and how some privacy budget could be allocated to fine-tuning the learning rate with privacy [X].\\n\\n[X] Liu, Jingcheng, and Kunal Talwar. \\\"Private selection from private candidates.\\\" Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing. ACM, 2019.\\n\\n> Please add wall-clock time column to Table 5 to support the statement about 4 times gain.\\n\\nWe\\u2019ve added wall-clock time to Table 5 to support this statement.\\n\\n> I think it's more accurate to change \\\"This confirms that earlier theoretical analysis (Talwar et al., 2014) also holds in the non-convex setting.\\\" to \\\"This suggests that earlier theoretical analysis (Talwar et al., 2014) also holds in the non-convex setting.\\\"\\n\\nThank you for the suggestion, we fixed the language around the citation to Talwar et al., 2014.\"}", "{\"title\": \"Thank you for the review.\", \"comment\": \"> The example models used in demonstrations are quite small (3 hidden layers 26,000 parameters, when, for example, a standard segmentation CNN model U-net can typically have 26,000,000, AlexNet has about 60,000,000 and so on). The results would be much more convincing if these or other models widely used in practice were used as running examples.\\n\\nWe agree that larger datasets and larger models would be more convincing (e.g., ImageNet with AlexNet or a more modern architecture). However, research into differentially-private ML (DPML) has simply not progressed to be able to offer strong privacy/utility tradeoffs for such models and tasks. 
A quick literature survey (or Google Scholar search) shows that DPML work often considers only simple tasks (e.g., logistic regression on the UCI Adult dataset), even in 2019. For DPML work with strong privacy guarantees and high utility, the most challenging datasets considered are still those of MNIST, FashionMNIST, and CIFAR-10 (e.g., see the recent survey in Jayaraman & Evans, https://arxiv.org/pdf/1902.08874.pdf). The DPML research community has been focused on improving the accuracy of those models, without sacrificing the DP privacy guarantees (i.e., without increasing the DP epsilon), as we do in our work, reaching a new state-of-the-art. (This said, we strongly agree that DPML research should move onto more complex tasks; we feel that the results in our current paper are a good step in that direction.)\\n\\nWe chose relatively simple and small models for our MNIST experiments because no further complexity or capacity was needed to achieve good accuracy without privacy. Because DPSGD training becomes increasingly more challenging with increased dimensionality (see Section 3.1), a large number of superfluous model parameters can only hinder DPML training to high accuracy. Choosing larger models would have handicapped our experiments, unnecessarily.\\n\\n> In Figure 1 the multitude of point on the plot makes it unclear whether they represent result variability per number of filters or simply reflect variability as the number of filters grows. If it is the latter, it seems appropriate to perform a cross validation analysis and report standard deviations. Especially in the MNIST plot the values for SGD and DP-SGD are so close that they may in fact be statistically indistinguishable. Hard to tell by looking at a point estimate. 
The same request holds for Figure 2, where the difference may be immaterial, but as the figure currently stands it is unclear.\\n\\nOriginally, Figure 1 plotted the outcome of a fine-tuning strategy optimizing the number of filters to maximize accuracy, which explained the multitude of points in the regions with the largest accuracy (high number of filters for SGD and low number of filters for DP-SGD). We updated Figure 1 to instead report the mean accuracy for each number of filters along with the standard deviation. This updated figure clearly indicates that there is an inflection point for DP-SGD, which does not exist for SGD, on both datasets.\\n\\n> Section 3.2 reports some numbers for test accuracy but the uncertainty of these numbers with respect to the test set changes (cross validation) is not reported and the numbers are quite close to each other. Furthermore, the dataset is not described and it is unclear what was the size of the training and the test sets.\\n\\nWe\\u2019ve added standard deviations across 10 runs for the results presented in Figure 2 within Section 3.2, demonstrating that the improvement is meaningful. To put the increase in accuracy into context, we note that on MNIST at comparable privacy guarantees, the state-of-the-art accuracy went up from 95% in 2016 (with PCA dimensionality reduction) to 96.6% in late 2018 (without any dimensionality reduction), which our results then improved to 98.1% (again without any dimensionality reduction). 
We\\u2019ve improved the writing to more clearly state that MNIST (Figure 2 - left) and FashionMNIST (Figure 2 - right) refer to their standard learning tasks, and their datasets of 60,000 training examples and 10,000 test examples, as is standard.\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #4\", \"review\": \"This paper presents experimental evidence that learning with privacy requires approaches that are not identical to those used when learning without privacy. These approaches include re-considering different model choices (i.e., its structure and activation functions), its initialization, and its optimization procedure. With these changes, they show that it is possible to obtain state-of-the-art results for some canonical learning tasks.\", \"strengths\": \"This paper questions nearly every component in the training pipeline, including choices about the model structure, initialization strategies, and optimization procedures. For each component, they show that judiciously choosing the components (which go against the standard choices in the non-private learning setting) enables training higher-utility models than in previous works without sacrificing privacy. Moreover, in addition to the experimental evidence alone, most of the components considered in the paper were accompanied by reasonable justification/hypotheses for why the choices enable such improvements.\\nThis paper helps push differentially private learning to a more practically-useful realm. First, the suggested changes here are easy for a practitioner to understand and easy to implement. 
With only these simple changes, the concrete results then show that it is possible to achieve utility close to the analogous non-private model while still maintaining reasonable utility (\\\\epsilon less than 3 with \\\\delta of 10^-5).\", \"weaknesses\": \"My major concern with this paper lies in the experimental methodology. Specifically, most experiments are based on varying a single component while leaving all other components the same. While this is certainly the scientifically-valid way to demonstrate the component\\u2019s influence on the entire system given the other fixed components, it doesn\\u2019t convincingly demonstrate that the component has this influence across all (or at least most) reasonable configurations of the other components.\\nThis can be made concrete using many experiments in the paper, but let\\u2019s take the activation functions experiment of 3.2 as an example. Here, it is shown that after fixing the privacy guarantee, model structure, training procedure, and hyperparameters -- the tanh activation performs better than the ReLU activation. However, suppose instead that we fix all of these components except the hyperparameters; it may then be the case that the ReLU activation is capable of outperforming the tanh activation when its hyperparameters are chosen carefully. In other words, to validly compare the two activations and reach a convincing conclusion, they should be compared against each other in their own individually-best settings (e.g., the results induced by the optimal hyperparameters for ReLU versus the results induced by the optimal hyperparameters for tanh).\\nThis is similar to the problem addressed in Avent et al.\\u2019s \\u201cAutomatic Discovery of Privacy\\u2013Utility Pareto Fronts\\u201d paper (https://arxiv.org/abs/1905.10862). \\nThe specific technical details on some experiments were either difficult to find or were lacking. 
Given that this is fundamentally an experimental paper, having these details clearly listed somewhere for reference is important, even if relegated to an Appendix. Although this applies more broadly to most of the experiments, we can use Section 3 as an example again: the details on the experiment in 3.1 were found in the caption of Figure 1, whereas I would have expected them either in the main body or clearly listed in their own table; the details on the experiment in 3.2 specify that everything is identical between the tests of the two activation functions, however it is never specified exactly what is being altered (and by how much) to vary the \\\\epsilon value.\\n4.2 Initialization by Weight Scaling proposes that judiciously scaling initial weights can improve model privacy/utility. This scaling is done by \\u201ctransfer from one differentially private model to another\\u201d, where \\u201cDP-SGD can be applied to train a model with high utility, but less than ideal privacy\\u201d and then extracting the relevant information from there in order to initialize a new differentially private model that will be trained with strong privacy guarantees. It is claimed that \\u201cthis extraction can be done in a differentially-private manner, e.g., as in Papernot et al. (2018), although the privacy\\nrisk of summary statistics that drive random initialization should be vanishing\\u201d. It is unclear to me how this extraction of summary statistics should be done in such a way that doesn\\u2019t consume a significant portion of the privacy budget. If there is such a way, it should be clearly stated and its effect on the privacy budget should be explicitly incorporated into this paper\\u2019s results.\", \"minor\": \"The statement that \\u201cSuch accuracy loss may sometimes be inevitable\\u201d on page 1 should include a reference; e.g., Feldman\\u2019s \\u201cDoes Learning Require Memorization? 
A Short Tale about a Long Tail\\u201d paper (https://arxiv.org/abs/1906.05271).\\n\\n\\nOverall, this work provides good practical guidance to practitioners and researchers who wish to do differentially private machine learning. However, given the lack of theoretical novelty, the experimental methodology needs to be improved in order to significantly strengthen the results (assuming they continue to hold).\\n\\n\\n----------------------------------------------------\", \"update\": \"Due to the authors' writing clarifications and experimental additions, in conjunction with the concrete and realistically-applicable insights from the paper, I've modified my rating to a Weak Accept.\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"Overall, this work empirically evaluates different techniques used in privacy learning and suggest useful methods to stabilize or improve performance.\", \"detail_comments\": \"\", \"strength\": \"Despite the progress of privacy-preserving learning in theory, there are few works providing learning details for better training. Especially, considering the instability in perturbation-based private algorithms, e.g., most DP ones, the work could be valuable in the sense of practice.\", \"weakness\": \"As far as empirical research, the compared techniques are too few. What if we use those less popular techniques, for example, RMSprop optimization method?\\n\\nThe model capacity of neural networks, especially deep networks, has some non-trivial relation to the number of filters or the number parameters. It is important to quantify such relation. A good reference might be [A]. Briefly, the generalization performance may not be monotonic against the number of parameters.\\n\\nThe baselines are not enough. 
Of course, Abadi et al.\\u2019s work is outstanding in handling the privacy learning of deep networks. It has been further developed by the following researchers. For example, [B] and [C]. Does the conclusion still hold for these algorithms?\\n\\n[A] Neyshabur, B., Bhojanapalli, S., Mcallester, D., & Srebro, N. (2017). Exploring Generalization in Deep Learning. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, & R. Garnett (Eds.), Advances in Neural Information Processing Systems 30 (pp. 5947\\u20135956). \\n[B] Yu, L., Liu, L., Pu, C., Gursoy, M. E., & Truex, S. (2019). Differentially Private Model Publishing for Deep Learning. Proceedings of 40th IEEE Symposium on Security and Privacy. \\n[C] Phan, N., Vu, M. N., Liu, Y., Jin, R., Dou, D., Wu, X., & Thai, M. T. (2019). Heterogeneous Gaussian Mechanism: Preserving Differential Privacy in Deep Learning with Provable Robustness. Proceedings of the Twenty-Eighth International Joint Conference on Artificial\"}", "{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper methodically analyses the settings and choices used when training neural networks (specifically CNNs) via the DP-SGD algorithm and suggests changes to the standard procedures that empirically lead to higher accuracies despite the added noise. The main statement of the paper is quite simple: optimize hyperparameters for the model that you're training (DP-SGD) rather than the model it is inspired by. 
Yet, the findings and recommendations may be useful for practitioners.\\n\\nNevertheless, to be more practically relevant the paper needs some modifications:\\n\\nThe example models used in demonstrations are quite small (3 hidden layers 26,000 parameters, when, for example, a standard segmentation CNN model U-net can typically have 26,000,000, AlexNet has about 60,000,000 and so on). The results would be much more convincing if these or other models widely used in practice were used as running examples.\\n\\nIn Figure 1 the multitude of points on the plot makes it unclear whether they represent result variability per number of filters or simply reflect variability as the number of filters grows. If it is the latter, it seems appropriate to perform a cross validation analysis and report standard deviations. Especially in the MNIST plot the values for SGD and DP-SGD are so close that they may in fact be statistically indistinguishable. Hard to tell by looking at a point estimate. The same request holds for Figure 2, where the difference may be immaterial, but as the figure currently stands it is unclear.\\n\\nSection 3.2 reports some numbers for test accuracy but the uncertainty of these numbers with respect to the test set changes (cross validation) is not reported and the numbers are quite close to each other. Furthermore, the dataset is not described and it is unclear what was the size of the training and the test sets.\\n\\nConclusions relative to Adam vs SGD seem to repeat what's already known or been discussed about these methods outside of the DP topic. May be worth highlighting that when one knows how to set learning rates for SGD (may be via learning rate scheduler, not discussed in the paper but practically relevant) then SGD may be as good or slightly better than Adam. However note, adaptive optimizers are often preferred for their ease of use as no tweaking and searching for an optimal learning rate is required. 
Would not this problem be detrimental for SGD optimization affecting the privacy budget?\\n\\nPlease add wall-clock time column to Table 5 to support the statement about 4 times gain.\\n\\nI think it's more accurate to change \\\"This confirms that earlier theoretical analysis (Talwar et al., 2014) also holds in the non-convex setting.\\\" to \\\"This suggests that earlier theoretical analysis (Talwar et al., 2014) also holds in the non-convex setting.\\\"\"}"
] }
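The rebuttals above repeatedly refer to the mechanics that make DP-SGD training slow and dimension-sensitive: per-example gradients, per-example clipping, and Gaussian noise addition. As a rough illustration only, here is a minimal NumPy sketch of a generic DP-SGD update in the style of Abadi et al.; the function and parameter names are made up for this sketch and it is not the authors' implementation.

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm, noise_multiplier, lr, params):
    """One DP-SGD update: clip each per-example gradient to L2 norm
    clip_norm, average the clipped gradients, then add Gaussian noise
    with std noise_multiplier * clip_norm / batch_size."""
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    avg = np.mean(clipped, axis=0)
    sigma = noise_multiplier * clip_norm / len(per_example_grads)
    noise = np.random.normal(0.0, sigma, size=avg.shape)
    return params - lr * (avg + noise)
```

With `noise_multiplier = 0` this reduces to ordinary clipped SGD; the privacy accounting that maps the noise multiplier and the number of steps to an (epsilon, delta) guarantee is a separate component not shown here.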
HygS91rYvH
Universal Adversarial Attack Using Very Few Test Examples
[ "Amit Deshpande", "Sandesh Kamath", "K V Subrahmanyam" ]
Adversarial attacks such as Gradient-based attacks, Fast Gradient Sign Method (FGSM) by Goodfellow et al.(2015) and DeepFool by Moosavi-Dezfooli et al. (2016) are input-dependent, small pixel-wise perturbations of images which fool state of the art neural networks into misclassifying images but are unlikely to fool any human. On the other hand a universal adversarial attack is an input-agnostic perturbation. The same perturbation is applied to all inputs and yet the neural network is fooled on a large fraction of the inputs. In this paper, we show that multiple known input-dependent pixel-wise perturbations share a common spectral property. Using this spectral property, we show that the top singular vector of input-dependent adversarial attack directions can be used as a very simple universal adversarial attack on neural networks. We evaluate the error rates and fooling rates of three universal attacks, SVD-Gradient, SVD-DeepFool and SVD-FGSM, on state of the art neural networks. We show that these universal attack vectors can be computed using a small sample of test inputs. We establish our results both theoretically and empirically. On VGG19 and VGG16, the fooling rate of SVD-DeepFool and SVD-Gradient perturbations constructed from observing less than 0.2% of the validation set of ImageNet is as good as the universal attack of Moosavi-Dezfooli et al. (2017a). To prove our theoretical results, we use matrix concentration inequalities and spectral perturbation bounds. For completeness, we also discuss another recent approach to universal adversarial perturbations based on (p, q)-singular vectors, proposed independently by Khrulkov & Oseledets (2018), and point out the simplicity and efficiency of our universal attack as the key difference.
[ "universal", "adversarial", "SVD" ]
Reject
https://openreview.net/pdf?id=HygS91rYvH
https://openreview.net/forum?id=HygS91rYvH
ICLR.cc/2020/Conference
2020
{ "note_id": [ "BXtAz0FOce", "SyliF_gxcH", "ByxmcCp6tB", "HJxMVJT3KS" ], "note_type": [ "decision", "official_review", "official_review", "official_review" ], "note_created": [ 1576798734808, 1571977347334, 1571835531033, 1571766058262 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1877/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper1877/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper1877/AnonReviewer1" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"The paper proposes to get universal adversarial examples using few test samples. The approach is very close to the Khrulkov & Oseledets, and the abstract for some reason claims that it was proposed independently, which looks like a very strange claim. Overall, all reviewers recommend rejection, and I agree with them.\", \"title\": \"Paper Decision\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1877\", \"review\": \"This paper presents an observation that one can use the top singular vector of a matrix consisting of adversarial perturbation (Gradient attack, FGSM attack, or DeepFool) vectors of a subset of data as a universal attack (applying the same perturbation to all inputs and fools a large fraction of inputs). 
The paper gives a theoretical justification of their method using matrix concentration inequalities and spectral perturbation bounds.\", \"strengths\": [\"A simple and effective technique to fool a large fraction of examples leveraging the observation that only a small number of dominant principal components exist for input-dependent attack directions.\", \"Clean theoretical justification of the performance of the proposed methodology.\", \"I also like the observation and the generality, simplicity, and theoretical proof of the proposed universal attack algorithm SVD-Universal.\"], \"weaknesses\": [\"Performance seems to be inferior to previous methods e.g. Khrulkov & Oseledets 2018. The paper does not give a comparison between SVD-Universal and (p,q)-SVD.\", \"Although the author gives a justification of why they do not compare with (p,q)-SVD, I still like to see a comparison between the two methods such that we can have a better idea about what is the potential performance loss by using the SVD-Universal when compared with (p,q)-SVD.\", \"It is not clear to me how the authors build the matrix corresponding to the universal invariant perturbations in sec 6.\"]}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper presents a universal adversarial attack, which firstly conducts existing gradient-based attacks on the sample images and then applies SVD on the perturbations from those attacks. The universal attacks are the right singular vectors of the SVD. The experiments are conducted on attacking VGG and ResNet. In addition, theoretical analysis is also provided in the paper.\\n\\nCompared with instance-wise attacks, universal attacks are relatively rare. 
The idea of this paper is intuitive but I feel that it is highly related to the one in Khrulkov & Oseledets (2018). The latter finds singular vectors with the gradients of the hidden layers of the targeted classifier. In general, the instance-wise attacks such as FGSM and Gradient are essentially based on gradients of the classifiers, as well. Therefore, given Khrulkov & Oseledets (2018), I would consider the novelty of this paper is not large enough, although I can see that the proposed may be more efficient.\\n\\nIn addition to attacking raw classifiers, I would also expect the comparisons with defence methods against universal attacks, such as the one in [1].\", \"minors\": \"It is a bit hard to compare the performance across different methods in Figure 1. I would suggest using tables to give a clearer comparison.\\n\\nOverall, I think the paper stands on the borderline. \\n\\n[1] Akhtar, Naveed, Jian Liu, and Ajmal Mian. \\\"Defense against universal adversarial perturbations.\\\" Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018.\"}", "{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": [\"This paper studied the problem of universal adversarial attack which is an input-agnostic perturbation. The authors proposed to use the top singular vector of input-dependent adversarial attack directions to perform universal adversarial attacks. The authors evaluated the error rates and fooling rates for three attacks on standard benchmark datasets.\", \"The paper is generally well-written and easy to follow. My main concern towards this paper is about the experiments part from several aspects. 
First, the proposed method needs quite large L2 norm (50 on ImageNet) to work, while common adversarial attack experiments on ImageNet are usually conducted with L2 perturbation strength of 5 or less. I totally understand that performing universal attack would be much more difficult, yet having such loose L2 norm constraint still seems impractical. Second, the authors did not compare with any other baselines such as (Moosavi-Dezfooli et al. 2017a) arguing that their universal attack is different for different perturbation strength and pixels are normalized. I do not think normalized pixel will be a problem as you can simply scale the perturbation strength accordingly. And because (Moosavi-Dezfooli et al. 2017a) uses different attack vectors for different perturbation strength, some comparison between these two types of universal attacks should be presented in order to mark the difference and demonstrate your advantages. I would suggest the authors to compare with several mentioned baselines in the paper to show the superiority of the proposed method.\", \"Theorem 1 seems interesting, yet it needs a special assumption. The authors argue that this is a reasonable assumption in a small neighborhood of x. I wonder if the authors could conduct some demonstrative experiments to verify this? Because the definition of S_x depends on the attack function, does it mean that the assumption need to be held for any attack function? Also regarding the choice of \\\\delta, it seems that \\\\delta is different for different x? If so, since u is also depend on \\\\delta, this attack vector seems not universal?\"], \"detailed_comments\": \"- In proof of Theorem 1, all S should be G?\\n- In proof of Theorem 2, how to get \\\\|v - \\\\hat v\\\\|_2 \\\\leq \\\\epsilon/(\\\\gamma - \\\\epsilon)? 
Directly applying the Theorems seems to get \\\\epsilon / (\\\\gamma) only?\\n\\nDepending on whether the authors can address my concerns, I may change the final rating.\\n\\n\\n======================\\nafter the rebuttal\\n\\nI thank the authors for their response but I still feel that the assumption is not well-justified and there is still a lot to improve in terms of experiments. Therefore I decided to keep my score unchanged.\"}" ] }
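The SVD-Universal construction discussed in the abstract and reviews above (the top singular vector of a matrix of stacked input-dependent attack directions) can be sketched in a few lines. This is a hedged illustration based only on the description here: the per-perturbation normalization and the sign handling are assumptions, and `svd_universal` is a hypothetical name rather than the paper's code.

```python
import numpy as np

def svd_universal(perturbations, norm=1.0):
    """Universal attack direction from input-dependent perturbations:
    stack normalized attack directions as rows, take the top right
    singular vector, and rescale to the desired L2 norm."""
    M = np.stack([p.ravel() / (np.linalg.norm(p) + 1e-12)
                  for p in perturbations])
    _, _, vt = np.linalg.svd(M, full_matrices=False)
    v = vt[0]  # top right singular vector (dominant shared direction)
    return norm * v / np.linalg.norm(v)
```

Since the sign of a singular vector is arbitrary, in practice both `+v` and `-v` would be evaluated as candidate universal perturbations and the one with the higher fooling rate kept.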
rklr9kHFDB
Rotation-invariant clustering of neuronal responses in primary visual cortex
[ "Ivan Ustyuzhaninov", "Santiago A. Cadena", "Emmanouil Froudarakis", "Paul G. Fahey", "Edgar Y. Walker", "Erick Cobos", "Jacob Reimer", "Fabian H. Sinz", "Andreas S. Tolias", "Matthias Bethge", "Alexander S. Ecker" ]
Similar to a convolutional neural network (CNN), the mammalian retina encodes visual information into several dozen nonlinear feature maps, each formed by one ganglion cell type that tiles the visual space in an approximately shift-equivariant manner. Whether such organization into distinct cell types is maintained at the level of cortical image processing is an open question. Predictive models building upon convolutional features have been shown to provide state-of-the-art performance, and have recently been extended to include rotation equivariance in order to account for the orientation selectivity of V1 neurons. However, generally no direct correspondence between CNN feature maps and groups of individual neurons emerges in these models, thus rendering it an open question whether V1 neurons form distinct functional clusters. Here we build upon the rotation-equivariant representation of a CNN-based V1 model and propose a methodology for clustering the representations of neurons in this model to find functional cell types independent of preferred orientations of the neurons. We apply this method to a dataset of 6000 neurons and visualize the preferred stimuli of the resulting clusters. Our results highlight the range of non-linear computations in mouse V1.
[ "computational neuroscience", "neural system identification", "functional cell types", "deep learning", "rotational equivariance" ]
Accept (Talk)
https://openreview.net/pdf?id=rklr9kHFDB
https://openreview.net/forum?id=rklr9kHFDB
ICLR.cc/2020/Conference
2020
{ "note_id": [ "vzvObgEQ7V", "SJej92NvsH", "B1x7Z1Gwsr", "HkliVN_UiH", "ByggA5pEiS", "B1lA8c6VjH", "rkxj6UTNjr", "H1xjrBTViS", "S1eKwBtIcr", "B1lNHjJJ5B", "r1gCIKb0KB" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798734778, 1573502098594, 1573490426544, 1573450803231, 1573341896266, 1573341781938, 1573340867027, 1573340483032, 1572406624638, 1571908411983, 1571850582220 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1876/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper1876/Authors" ], [ "ICLR.cc/2020/Conference/Paper1876/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper1876/Authors" ], [ "ICLR.cc/2020/Conference/Paper1876/Authors" ], [ "ICLR.cc/2020/Conference/Paper1876/Authors" ], [ "ICLR.cc/2020/Conference/Paper1876/Authors" ], [ "ICLR.cc/2020/Conference/Paper1876/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper1876/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper1876/AnonReviewer2" ] ], "structured_content_str": [ "{\"decision\": \"Accept (Talk)\", \"comment\": \"This paper is enthusiastically supported by all three reviewers. Thus an accept is recommended.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Changing recommendation to Accept\", \"comment\": \"Thank you for the swift reply. 
With these interesting additional controls and the softened claims, I now think that this article is a valuable contribution that should be published at ICLR.\"}", "{\"title\": \"Title change and additional control\", \"comment\": \"Thank you very much for a quick reply and additional suggestions.\", \"in_the_latest_revision_of_the_manuscript_we\": [\"Replaced \\u201ccell types\\u201d with \\u201cneuronal responses\\u201d in the title;\", \"Updated the abstract to remove claims about the existence of cell types;\", \"Following your suggestion, added Fig. A2 showing the t-SNE embedding of aligned readouts with the features randomly permuted across the neurons. Similarly to Fig. A1, the plot is substantially less clustered than the one in Fig. 6. The examples of MEIs within clusters also look less consistent in comparison to Fig. 6. In Fig. A3 we show the plot from Fig. 6 as well the two controls side by side for an easier comparison.\"]}", "{\"title\": \"Thank you for careful rebuttal - additional requests\", \"comment\": \"Thank you for the careful rebuttal and interesting complementary analyses. In light of these additional results and arguments, I am ready to reconsider my evaluation if the following conditions are met:\\n\\n1) removal of all ambiguous wording suggesting that functional cell types were identified in V1. In particular, the title and abstract sentence reproduced below are misleading:\\n(title) \\\"Rotation-invariant clustering of functional cell types in primary visual cortex\\\"\", \"for_example\": \"cell types => cell responses\\n(abstract) \\\"We [...] provide evidence that discrete functional cell types may exist in V1.\\\"\\n=> unnecessarily ambiguous claim\\n\\n2) Additional control: Similarly to A1, do a 2D t-SNE embedding of the aligned readouts R \\u0303 with feature weights randomly permuted. 
However, instead of permuting feature weights for each of the neurons, permute feature weights *across* neurons, so as to keep the marginal feature pooling statistics the same.\"}", "{\"title\": \"A reply to Reviewer 2\", \"comment\": \"Thank you for the review and the comments! Below we answer your questions.\\n\\n\\n> 1. In Figure 2, what does 1 x feature + 2 x another_feature mean?\\n\\nFeature 1 (low frequency Gabor) and feature 2 (high frequency Gabor) are cartoon representations of the two features computed by the hypothetical rotation-equivariant CNN (CNN features are not simple Gabors of course, it is just a cartoon). The neurons are assumed to implement the linear combinations of the CNN outputs corresponding to those two features with coefficients 1 and 2. Our goal was to illustrate how different orientations of the neurons and the relative orientations of the CNN features influence the readout weights (i.e. linear combination weights) if the CNN is rotation-equivariant. We are sorry if the figure confused more than clarified this mechanism, and we are happy to answer any other questions about this mechanism.\\n\\n\\n> 2. In Equation 3, why was the \\u2018square\\u2019 of error differences not used?\\n\\nWe haven\\u2019t tried using squared difference. However, we don\\u2019t expect it to result in any substantial differences. Squared differences are more convenient to differentiate, but in times of automatic differentiation this benefit is not really relevant.\\n\\n\\n> 3. In the clustering approach, how is the number of mixtures set for the GMM? How stable is the model to different number of mixtures?\\n\\nWe evaluated the test likelihood on a held-out set of neurons for different numbers of clusters (Fig. 5), and chose 100 clusters as it is roughly where the likelihood curve started to saturate.\\n\\nWe group neurons into groups performing similar computations by examining the MEIs and the confusion matrices (Fig. 7). 
We find that the number of such groups is much smaller than the number of GMM mixtures (17 vs. 100). This means that as long as the GMM uses sufficiently many clusters, they would be merged into larger groups during post-processing, therefore, the exact number of GMM mixtures is not important.\\n\\n\\n> 4. In Figure 6: are Blocks 5 and 13 the same clusters (since they are of the same color) or is it that the colourmap use did not have 100 colors?\\n\\nThe colormap doesn\\u2019t have 100 colors. We tried using a colormap with 100 different colors, however, many of them are very similar and hard to visually distinguish, so they don\\u2019t provide much additional information. Therefore we chose to use a colormap with clearly distinct colors and show the correspondence between the block and the cluster in the scatter plot by the color of the border in the MEIs subplot.\\n\\n\\n> 5. In the \\u2018network learned redundant features\\u2019, Sentence 1: why do the authors say \\u2018similar MEIs\\u2019. The 16 neurons rendered in both blocks look different. \\n\\nThe 16 MEIs shown in blocks 9 and 13 look visually similar to us. Could you elaborate on the differences you observed?\"}", "{\"title\": \"A reply to Reviewer 2 (continuation)\", \"comment\": \"> 6. It will be informative to know how the number of clusters vary based on the correlation threshold used to collapse 100 clusters to a lower number. Are the clusters still functionally distinct for varying thresholds? Further why is MEI confusion matrix only shown for 13 groups?\\n\\nThe MEI confusion matrix is shown for a subset of the well predicted neurons (with test correlation >= 0.7), and some of the 17 groups shown in the cluster confusion matrix don\\u2019t contain such neurons, resulting in only 13 groups present in the MEI matrix. 
The motivation for that is that the MEIs are most informative for well-predicted neurons.\\n\\nThe cluster confusion matrix shows all 100 GMM clusters, and the merging procedure with the 0.5 threshold is a heuristic to rearrange the rows and columns to highlight the block structure in the matrix. We tried different threshold values and chose the value of 0.5 by visual inspection of the resulting block structure in the matrix and the corresponding MEIs.\\n\\nTo provide a better intuition of how the threshold value affects the matrix, we have updated the paper to include Figures C1 and C2. In Figure C1 we show the cluster confusion matrices obtained by starting from the matrix in Figure 7 and merging the next three pairs of blocks with the highest correlations. We can see that such merges result in bigger blocks, which are not homogeneous (e.g. block 5 after the third merge clearly exhibits a checkerboard pattern suggesting it contains sufficiently different GMM clusters, which we want to avoid within the same block).\\n\\nIn Figure C2 we show the sequential splits of the three blocks (which is equivalent to performing the initial merging procedure backwards starting from the matrix in Figure 7). The split blocks shown are not the last ones merged before achieving the correlation threshold of 0.5 (those are blocks 15 and 16, each of which contains only two GMM clusters), but rather the ones merged before them, which we think are more illustrative for this figure. The examples of the MEIs of the split blocks are also shown. The split blocks seem to be correlated with the MEIs being sufficiently similar (perhaps with the exception of blocks 9 and 10).\\n\\nGenerally speaking, it is hard to find an exact threshold value which works best. 
However, grouping the cluster confusion matrix allows us to control the granularity of the functional blocks of neurons to highlight the range of computations implemented by the neurons and produce hypotheses of functional cell types which can be tested by further analysis based on other biological evidence (e.g. morphology or genetics).\"}", "{\"title\": \"A reply to Reviewer 1\", \"comment\": \"Thank you for the positive assessment of our submission.\\n\\nWe agree that a comparison to ground truth would be great. Unfortunately for pyramidal cells (> 80% of cells in cortex) it is unknown whether they\\u2019re further subdivided. There is evidence for genetic differences, but as far as we know there is no combined functional + genetic data available for pyramidal cells \\u2013 certainly not to us.\\n\\nRegarding your question what would happen if there was no input from the retina, we could only speculate. We do not think there is reason to believe that there is a biological mechanism enforcing equivariance. It\\u2019s more likely to be a consequence of the statistics of the visual input. Also note that the fact that a rotation-equivariant representation works well to describe the data does not mean that the brain\\u2019s representation is actually equivariant.\"}", "{\"title\": \"A reply to Reviewer 3\", \"comment\": \"Thank you for the thorough review and the useful comments. Your main point of contention appears to be a perceived lack of evidence for functional cell types. While we agree that we do not provide undeniable proof, we do believe that the clusters revealed in Fig. 6 and the clear block-diagonal structure of the confusion matrices in Fig. 7 constitute important pieces of evidence that at least suggest that functional cell types may exist.\\n\\nAssessing whether discrete clusters exist in a high-dimensional space is a notoriously difficult problem, for which no commonly accepted solution exists \\u2013 or at least we are not aware of one. 
A statistical comparison of the Gaussian Mixture Model (GMM) against alternative density models representing a continuous structure could be useful to refute the hypothesis of functional cell types if good candidates for alternative models existed and such models yielded a higher likelihood. However, such comparisons would never be strong evidence in favor of discrete clusters, because one would have to test against all alternative models. In addition, it is not clear to us what a good alternative model would be.\\n\\nThus, the question will necessarily have to be answered qualitatively to some extent, and \\u2013 as you also acknowledge \\u2013 verified by experiments. As these additional experiments are technically very challenging, they first require a very clear hypothesis, which is what the present paper provides. The experimental verification, however, is clearly beyond the scope of a conference paper.\\n\\nHaving said that, we followed your suggestion of applying our method to the readouts with randomly shuffled features (Figure A1 in the updated manuscript). The resulting t-SNE plot looks substantially less clustered than the one in Figure 6 and the MEIs within clusters also look less consistent. We believe this analysis is another piece of evidence for the functional cell types.\\n\\nWe also investigated whether there is spurious structure in the clustering due to the alignment procedure. As you suggested, we analyzed a synthetic example (Fig. B1 in the updated manuscript). We show examples of raw and aligned data as well as the t-SNE embeddings colored according to the GMM clustering. As the amount of noise increases, the t-SNE plots for the raw and the aligned data become increasingly similar. That suggests that despite some overfitting to noise (as shown in Fig. 
4), the alignment procedure does not significantly affect the GMM clustering or the t-SNE embeddings of the unstructured data.\\n\\nWe would be very interested in any additional analyses we could do to convince you that we provide evidence for the hypothesis that the neurons form discrete clusters.\\n\\n\\n[ML method is too specific]\\n\\nWe agree that the presented alignment method is specific to the analysis of rotation-equivariant feature spaces. However, we think it might still be interesting to the community as a tool for analysis of equivariant feature spaces, since the same approach can be adapted to other symmetry groups (not only rotations) by replacing the cyclic shifts with an appropriate transformation for the other symmetry groups. There are multiple submissions to this conference (e.g. [1], [2], [3]) discussing equivariant (to rotations and other symmetries) neural networks, so we believe that additional tools for the analysis of such networks will be useful for future work in this direction.\\n\\n[1] https://openreview.net/forum?id=r1g6ogrtDr\\n[2] https://openreview.net/forum?id=B1xtd1HtPS\\n[3] https://openreview.net/forum?id=HJeYSxHFDS\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1876\", \"review\": \"The paper proposes an original approach to predict the function of groups of neurons in the V1 cortex based on their invariance to well designed rotation invariant CNN filters. The design of these features is funded by the observation that specific ganglion cell types have rotation and scale invariant responses to visual stimuli.\\nThe method is very clearly explained and the evaluation on an publicly available dataset looks promising. The clustering Figure 6 in particular is very insightful. 
\\nThe paper could have been more impactful if a comparison with a ground truth was built. The issue is clearly that ground truth is hard to establish for this type of problems but biological observations and annotations of cell types can be available (unfortunately not public as far as I know). \\nI would also be curious to know how such a method can be applied to a blind patient whose retina does not react to visual stimuli. Is there a biological function that will still preserve such invariance properties which allow to find structure in the data?\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"In this study, the authors develop a method to cluster cells in primary visual cortex (V1) based on the cells' responses to natural images. The method consists in three steps:\\n- fit a rotation-equivariant convolutional neural network model to V1 cells (previously described in Ecker et al. 2019)\\n- align all cells by choosing the rotation for each cell that minimizes overall distance between cells in feature space, so that the clustering is mostly blind to the orientation of the filters.\\n- cluster the cells using a Gaussian mixture model (GMM).\\n\\nAlthough I find this article mostly well-written and the topic important, I cannot recommend acceptance because (1) the study does not make a significant contribution to our understanding of V1, (2) the main innovation in ML presented (alignment method) is quite specific and will thus not likely be of interest for the general audience of ICLR:\\n\\n(1) An important question in visual neuroscience is whether V1 cells form discrete functional clusters as opposed to a continuum. 
Another related question is whether these functional clusters correspond to distinct cell types characterized by specific wiring patterns, gene expression and/or morphology.\", \"the_analyses_performed_do_not_answer_any_of_these_two_questions\": [\"the clustering model (GMM model) is not compared statistically to other models that would assume a continuous structure in the data (e.g. cells form a sparse continuous manifold in feature space). Although clusters do appear in the t-SNE visualization, this visualization does not provide statistical evidence that cells indeed form distinct clusters.\", \"The correspondence of the proposed clusters to cell types with specific wiring patterns, gene expression and/or morphology is not established. To establish this correspondence would require further experiments, as acknowledged by the authors: \\\"To systematically classify the V1 functional cell types, these proposals need to be subsequently examined based on a variety of biological criteria reflecting the different properties of the neurons and the prior knowledge about the experiment\\\".\", \"(2) The alignment method, which consists in rotating the cells in feature space so that orientation is not a factor for subsequent clustering, is quite specific to the problem studied and likely not of interest for the general ICLR audience.\"], \"additional_feedback\": [\"Title: ROTATION-INVARIANT CLUSTERING OF FUNCTIONAL CELL TYPES IN PRIMARY VISUAL CORTEX\", \"=> \\\"functional cell types\\\" is not adequate here, since the article does not establish the existence of functional cell types. 
Could be replaced with \\\"cell responses\\\".\", \"Abstract: We apply this method to a dataset of 6000 neurons and provide evidence that discrete functional cell types may exist in V1.\", \"=> this sentence is misleading, since no evidence for functional clusters is provided.\", \"\\\"Thus, the network has learned an internal representation that allows constructing very similar functions in multiple ways\\\"\", \"=> To avoid the caveat of redundant features, the authors could try to add a dimensionality bottleneck on feature space before readout.\", \"\\\"Small values of \\u03b2 incur a small cost for poor reconstructions resulting in small optimised values of T and over-smoothed aligned readouts.\\\"\", \"=> A simulated annealing procedure (progressive increase of T during learning) could potentially allow the use of larger \\u03b2 values here (i.e. less distortion of the filter).\", \"The alignment procedure could lead to the emergence of spurious structure in the clustering. It would be important to control for this potential artifact by running the procedure on an unstructured synthetic dataset.\", \"It is possible that the MEIs within clusters look more similar than they actually are, since the cells are fitted from the same common bank of features. It would be useful but maybe difficult to control for this.\", \"It would be interesting to test the clustering procedure on a shuffled version of the readout weights (shuffle across features and V1 cells), so as to keep sparsity but not any other structure. Does the t-SNE map look less clustered? Is the GMM fit qualitatively different?\", \"Fig1(2): add legend/caption. 
what are the ellipses?\"]}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The authors present a rotation-invariant representation of a CNN modeling the V1 neurons and a pipeline to cluster these neurons to find cell types that are rotation-invariant. Experimental validation is performed on a 6K neuron dataset with promising results.\\nThe paper is well postulated.\", \"below_are_comments_about_the_work\": \"1. In Figure 2, what does 1 x feature + 2 x another_feature mean?\\n2. In Equation 3, why was the \\u2018square\\u2019 of error differences not used? \\n3. In the clustering approach, how is the number of mixtures set for the GMM? How stable is the model to different number of mixtures?\\n4. In Figure 6: are Blocks 5 and 13 the same clusters (since they are of the same color) or is it that the colourmap use did not have 100 colors? \\n5. In the \\u2018network learned redundant features\\u2019, Sentence 1: why do the authors say \\u2018similar MEIs\\u2019. The 16 neurons rendered in both blocks look different. \\n6. It will be informative to know how the number of clusters vary based on the correlation threshold used to collapse 100 clusters to a lower number. Are the clusters still functionally distinct for varying thresholds? Further why is MEI confusion matrix only shown for 13 groups?\"}" ] }
HJxV5yHYwB
Solving single-objective tasks by preference multi-objective reinforcement learning
[ "Jinsheng Ren", "Shangqi Guo", "Feng Chen" ]
There ubiquitously exist many single-objective tasks in the real world that are inevitably related to some other objectives and influenced by them. We call such a task an objective-constrained task, which is inherently a multi-objective problem. Due to the conflict among different objectives, a trade-off is needed. A common compromise is to design a scalar reward function by clarifying the relationship among these objectives using the prior knowledge of experts. However, reward engineering is extremely cumbersome, and an imperfect reward function can result in behaviors that optimize the reward without actually satisfying our preferences. In this paper, we explicitly cast the objective-constrained task as preference multi-objective reinforcement learning, with the overall goal of finding a Pareto optimal policy. Combined with the Trajectory Preference Domination we propose, a weight vector that reflects the agent's preference for each objective can be learned. We analyzed the feasibility of our algorithm in theory, and further showed in experiments that it outperforms approaches that rely on reward functions designed by experts.
[ "reinforcement learning", "single-objective tasks", "multi-objectivization" ]
Reject
https://openreview.net/pdf?id=HJxV5yHYwB
https://openreview.net/forum?id=HJxV5yHYwB
ICLR.cc/2020/Conference
2020
{ "note_id": [ "T0ytmBXEG6", "Hke2FZqFiH", "Syg1Lx9Yor", "SkxhzAFtiH", "Byg4aCzE5S", "rygBjki6tB" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_review", "official_review" ], "note_created": [ 1576798734749, 1573654916245, 1573654599139, 1573654036321, 1572249276488, 1571823517433 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1875/Authors" ], [ "ICLR.cc/2020/Conference/Paper1875/Authors" ], [ "ICLR.cc/2020/Conference/Paper1875/Authors" ], [ "ICLR.cc/2020/Conference/Paper1875/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper1875/AnonReviewer2" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"The paper considers planning through the lenses both of a single and multiple objectives. The paper then discusses the pareto frontiers of this optimization. While this is an interesting direction, the reviewers feel a more careful comparison to related work is needed.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Author response to Reviewer #2 (part 2)\", \"comment\": \"\", \"q4\": \"\\\"I am also puzzled to understand the relevance of Theorem 1.\\\"\", \"a\": \"We agree that more experiments are helpful to show the effectiveness of our method. However, due to the limitations of time and computation resources, more experiments are left as future work.\", \"q5\": \"\\\"there is no correctness proof.\\\"\", \"q6\": \"\\\"Why not provide experimental results on a challenging problem like DOOM?\\\"\", \"references\": \"[1] Natarajan S, Tadepalli P. Dynamic preferences in multi-criteria reinforcement learning. In ICML, 2005.\\n[2] Abels A, Roijers D M, Lenaerts T, et al. Dynamic Weights in Multi-Objective Deep Reinforcement Learning. In ICML, 2019.\\n[3] Mossalam H, Assael Y M, Roijers D M, et al. Multi-objective deep reinforcement learning. arXiv preprint arXiv:1610.02707, 2016.\\n[4] Sener O, Koltun V. 
Multi-task learning as multi-objective optimization. In Advances in Neural Information Processing Systems, 2018.\\n[5] Christiano P F, Leike J, Brown T, et al. Deep reinforcement learning from human preferences. In Advances in Neural Information Processing Systems, 2017.\\n[6] Wirth C, Akrour R, Neumann G, et al. A survey of preference-based reinforcement learning methods. The Journal of Machine Learning Research, 2017, 18(1): 4945-4990.\\n[7] Rudolph G. Convergence properties of evolutionary algorithms. Kovac, 1997.\"}", "{\"title\": \"Author response to Reviewer #2 (part 1)\", \"comment\": \"We appreciate the time you spent reviewing our submission and hope our response helps address some of your concerns.\\n\\nWe believe the reviewer may have misunderstood our paper to some extent. Different from the reviewer's summary \\\"The main idea is converting the multi-objective problem into single objective by scalar weighting\\\", we propose to transform a single-objective problem into a preference multi-objective problem with learnable dynamic weights. This main idea has been elaborated in the third paragraph of the introduction.\", \"q1\": \"\\\"its novelty is not even clear since authors did not discuss majority of the existing related work.\\\"\", \"a\": \"The 'far greater' is used in the expression '$r(s_0^1,a_0^1)+\\\\cdots + r(s_{k-1}^1,a_{k-1}^1) \\\\gg r(s_0^2,a_0^2)+\\\\cdots + r(s_{t-1}^2,a_{t-1}^2)$', which is between Definition 1 and Definition 2. The symbol $'\\\\gg'$ denotes our learning objective, taking '$a \\\\gg b$' for example, which can be satisfied by maximizing $\\\\left(a-b\\\\right)$.\", \"q2\": \"\\\"Definition 2 uses but not define 'p' in condition (2).\\\"\", \"q3\": \"\\\"Lemma 1 states sth is 'far greater' than something else. However, 'far greater' is not really defined.\\\"\"}
We hope the following addresses some of your concerns.\\n\\nWe agree that more experiments are helpful to show the effectiveness of our method. However, almost all benchmark scenarios (e.g., Deep Sea Treasure, SuperMario, etc.), which have been widely used to measure the performance of MORL algorithms, are not suitable for evaluating our contribution. The reasons are as follows. 1) The problem setting is different. Although the problem proposed also requires a learning agent to optimize two or more objectives at the same time, the major difference is that we focus on problems with one single primary objective and several additional helper-objectives, in which the main concern is how to utilize the helper-objectives so that the primary objective can be more efficiently optimized. 2) The weight vector is time-variant. Different from most benchmark scenarios, where the weight vector is time-invariant, in our Efficient Delivery environment, in order to deliver as many packages as possible, it is necessary for the agent to weigh the importance of the three objectives in every state, according to the distances from the delivery location, the charging location and the acceleration location.\\n\\nIn addition, the feasibility of our algorithm has been theoretically analyzed. Although only one scenario was used, we believe that the effectiveness of our algorithm has been sufficiently demonstrated. Therefore, we sincerely hope that the reviewer can understand why only one scenario was used in this paper.\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"Thank the authors for the response. I agree with R2 that the paper lacks comparisons with previous works. 
I will stick to my previous decision.\\n----------------------------------------\\nSummary\\nThis paper presents a new approach for single-objective reinforcement learning by preferencing multi-objective reinforcement learning. The general idea is to first figure out a few important objectives, add some helper-objectives to the original problem, and learn the weights for each individual objective by trying to keep the same order as Pareto dominance. This paper has potential, but I lean to vote for rejecting this paper now, since it is still not ready. I might change my score based on the reviews from other reviewers.\\nStrengths\\n- The idea is novel. Learning weights for each objective by keeping the order as Pareto dominance is an interesting idea to me.\\nWeaknesses\\n- The lack of experiments. The authors tested their method in only one scenario, which makes me feel unsafe. Only testing on one simple scenario does not demonstrate the effectiveness. The authors are supposed to test their method on more (complex) scenarios to show the effectiveness of their method.\\nPossible Improvements\\nAs mentioned before, the proposed method can be tested on more scenarios (e.g., Deep Sea Treasure, SuperMario, etc.).\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"After Responses:\\nI understand the differences from the relevant literature that the authors pointed out. However, it is still lacking comparisons to these relevant methods. The proposed method has not been compared with any of the existing literature. Hence, we do not have any idea how it stands against the existing approaches. Hence, I believe the empirical study is still significantly lacking. I will stick to my decision. 
The main reason is as follows: I believe the idea is interesting, but it needs significant empirical work to be published. I recommend the authors improve the empirical study and re-submit.\\n-------\\nThe submission is proposing a method for multi-objective RL such that the preference over tasks is learned on the fly with the policy learning. The main idea is converting the multi-objective problem into a single objective by scalar weighting. The weights are learned in a structured learning fashion by enforcing them to approximate the Pareto dominance relations.\\n\\nThe submission is interesting; however, its novelty is not even clear since authors did not discuss majority of the existing related work. \\n\\nAuthors can consult the AAMAS 2018 tutorial \\\"Multi-Objective Planning and Reinforcement Learning\\\" by Whiteson&Roijers for relevant papers. It is also important to note that there are other methods which learn weighting. Optimistic linear support is one of such methods. Hence, this is not the first of such approaches. Beyond RL, it is also studied extensively in supervised learning. For example, authors can see \\\"Multi-Task Learning as Multi-Objective Optimization\\\" from NeurIPS 2018.\\n\\nThe manuscript is also very hard to parse and understand. For example, Definition 2 uses but not define \\\"p\\\" in condition (2). Similarly, Lemma 1 states sth is \\\"far greater\\\" than something else. However, \\\"far greater\\\" is not really defined. I am also puzzled to understand the relevance of Theorem 1. It is beyond the scope of the manuscript, and also not really new.\\n\\nAuthors suggest a method to solve multi-objective optimization. However, there is no correctness proof. We do not know whether the algorithm would result in a Pareto optimal solution even asymptotically. Arbitrary weights do not result in Pareto optimality.\\n\\nProposing a new toy problem is well-received. However, not providing any experiment beyond the proposed problem is problematic. 
Authors motivate their method using the DOOM example. Why not provide experimental results on a challenging problem like DOOM?\\n\\nIn summary, I definitely appreciate the idea. However, it needs a better literature search. The authors should position their paper properly with respect to existing literature. The theory should be revised and extended with convergence to Pareto optimality. Finally, more extensive experiments on existing problems comparing with existing baselines are needed.\"}" ] }
rkeNqkBFPB
Deep automodulators
[ "Ari Heljakka", "Yuxin Hou", "Juho Kannala", "Arno Solin" ]
We introduce a novel autoencoder model that deviates from traditional autoencoders by using the full latent vector to independently modulate each layer in the decoder. We demonstrate how such an 'automodulator' allows for a principled approach to enforce latent space disentanglement, mixing of latent codes, and a straightforward way to utilize prior information that can be construed as a scale-specific invariance. Unlike GANs, autoencoder models can directly operate on new real input samples. This makes our model directly suitable for applications involving real-world inputs. As the architectural backbone, we extend recent generative autoencoder models that retain input identity and image sharpness at high resolutions better than VAEs. We show that our model achieves state-of-the-art latent space disentanglement and achieves high quality and diversity of output samples, as well as faithfulness of reconstructions.
[ "unsupervised learning", "generative models", "autoencoders", "disentanglement", "style transfer" ]
Reject
https://openreview.net/pdf?id=rkeNqkBFPB
https://openreview.net/forum?id=rkeNqkBFPB
ICLR.cc/2020/Conference
2020
{ "note_id": [ "vJPGOWeB_", "rJx497-hoB", "ryliKbbhoH", "SJe2blW2jS", "rkgdpClhsS", "rJxBHBNP5H", "rkgGbTwr5r", "HkxmXW7AYS" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798734717, 1573815179648, 1573814659039, 1573814275660, 1573813952481, 1572451644872, 1572334841868, 1571856667120 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1874/Authors" ], [ "ICLR.cc/2020/Conference/Paper1874/Authors" ], [ "ICLR.cc/2020/Conference/Paper1874/Authors" ], [ "ICLR.cc/2020/Conference/Paper1874/Authors" ], [ "ICLR.cc/2020/Conference/Paper1874/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper1874/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper1874/AnonReviewer4" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"The manuscript proposes an autoencoder architecture incorporating two recent architectural innovations from the GAN literature (progressive growing & feature-wise modulation), trained with the adversarial generator-encoder paradigm with a novel cyclic loss meant to encourage disentangling, and procedure for enforcing layerwise invariances. The authors demonstrate coarse/fine visual transfer on generative modeling of face images, as well as generative modeling results on several Large Scale Scene Understanding (LSUN) datasets.\\n\\nReviewers generally found the results somewhat compelling and the ideas valuable and well-motivated, but criticized the presentation clarity, lack of ablation studies, and that the claims made were not sufficiently supported by the empirical evidence. The authors revised, and while it was agreed that clarity was improved, some reviewers were still not satisfied with the level of clarity (the revision appeared at the very end of the discussion period, unfortunately not allowing for any further refinement). 
Ablation studies were added in the revised manuscript, which were appreciated, but seemed to suggest that the proposed loss function was of mixed utility: while style-mixing quantitatively improved, overall sample quality appeared to suffer.\\n\\nAs the reviewers remain unconvinced as to the significance of the contribution and the clarity of its presentation, I recommend rejection at this time, while encouraging the authors to further refine the presentation of their ideas for a future resubmission.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Response to Review #4\", \"comment\": \"Thank you for the detailed requests and literature pointers. We hope that our extensive rewrites of Sec. 3 especially will address most of these questions. We also added the references to the other models you provided.\\n\\nThe parts about layer independence and style transfer have been rephrased. A working definition for disentanglement is now (albeit very shortly) given in Sec. 1, explanation of the related PPL added in Sec. 4, and some details of how we calculated it were added in the Appendix.\\n\\nAs for the use of style transfer and AdaIn in our context, please find the updated description in Sec. 3.1. which hopefully makes it clearer that indeed, also in this context, the shifting and scaling coefficients relate to the feature maps, in essentially the same way as they do in StyleGAN (Karras, 2019), and hence allow using the (overloaded) term \\u2018style\\u2019.\\n\\nWe removed the term \\\"ad hoc disentanglement stack\\\" and replaced it with the original term \\\"mapping layer\\\", and moved it to Related Work section.\\n\\nAs for $\\\\phi(x)$, it was actually mentioned in the very beginning of 3.1. 
But nonetheless, we have now added more background information from Ulyanov (2018) and Heljakka (2018).\\n\\nAs for the layer-specific loss, it can be summarized like this: we generate a random latent z1, and start driving the decoder with it, layer by layer, until we reach some layer J. There, we record the intermediate result of the decoding, and then pick another latent z2, and continue driving the decoder with that one, instead. Once we have decoded the full image, we encode it again into z12, and start decoding again, until we reach again layer J. At that point, we consider the reconstruction loss between the previous intermediate decoding result of z1 and the new corresponding result of z12, thus trying to make the model ignore the effect of z2 until we move again onwards from layer J.\\n\\nAs for the probabilistic interpretation and random sampling in AGE models, we added more explanation of both. AGE latent space is, like regular VAE's, a 512-dimensional unit hypersphere. We can sample from it exactly as we do from VAE latent space. This sampling is also utilized at every training step.\\n\\nYour point about the mis-characterization of ALI/BiGAN as VAE-GAN hybrids, from the point-of-view of their loss objective, is correct. We corrected our wording in that regard.\\n\\nWe have made the wording in our referring to GANs and GAN inference mechanisms more precise (if not perfect), in alignment with your points. However, we\\u2019d be inclined to say that the terminology is somewhat fuzzy here, in terms of whether a GAN with an encoder should be called \\\"GAN\\\" or \\\"GAN with a separate encoder\\\". 
Consider, for instance, this use of terminology in the ALI paper that you refer to (https://openreview.net/pdf?id=B1ElR4cgg): \\\"However, GANs lack an efficient inference mechanism\\\", \\\"Our approach...casts the learning...in an GAN-like adversarial framework\\\", \\\"ALI bears close resemblance to GAN, but it differs from it in the two following ways\\\", etc. If you can recommend an authoritative source for what are the necessary and sufficient conditions for being a GAN, we would be happy to adapt to it!\"}", "{\"title\": \"Response to Review #3\", \"comment\": \"Thank you for the positive comments and encouragement for improvements. We have now added an ablation study that specifically looks into the effect of L_j and loss term d_rho from (Barron 2019). We don't think that either of them are absolutely critical, but Lj clearly improves style-mixing sample quality. As you see in the new Table 1, when measured within the constant budget of training steps, Lj essentially trades general random sampling performance for improved style-mix sampling performance. Note that random sampling with mixed samples (i.e. producing each random sample from 2+ different latents) is a more direct way to measure the disentanglement of scale-specific properties than PPL, but we can only use it when comparing across models that can do such mixing. (Measuring LPIPS/PPL differences reliably would require repeating the comparisons in 128x128 or higher.) We would not necessarily consider the use of d_rho an important contribution of the paper, but it seems to improve results slightly.\\n\\nThe reduced reconstruction quality and grid artifacts, in comparison to Balanced Pioneer, seem to largely go away after fixing the implementation issue (see item #3 in our \\\"Updated Version\\\" comment, and esp. the updated Fig. 6 and 9).\\n\\nWe have improved the subsections of Sec. 
3 in many ways, we hope you will find the improvements significant.\"}", "{\"title\": \"Response to Review #1\", \"comment\": \"Thank you for the positive assessment and several appropriate remarks. In the updated draft, we have now made the relationship to AGE/PIONEER objectives clearer, addressing your points about KL divergence and d_cos, which certainly improves the readability of the paper. Understandably, detailed explanation of the original AGE equations would not have fit the paper, but we did our best.\\n\\nYou are absolutely right about the lack of clarity in explanation of cyclic loss and invariance enforcement. We made considerable improvements to the text in those sections. We removed the 'F' notation in the invariance section since it seemed confusing to us, too. The invariant part that the F notation was used for is captured in the decoder layers, where the key idea is that it takes the form of a reconstruction loss in the image space, but such that the reconstruction of input sample x1 has been affected by another training sample x2, but only at those exact decoder layers that should treat both samples identically. We made this also more explicit in Eqs. (9\\u201311).\\n\\nThe effect of cyclic loss has been investigated in the new ablation study (up to 64x64 resolution).\\n\\nFor Fig. 5, you are correct in that in the presented experiment, the invariance objective reduces image quality in terms of making the images more blurry. We believe this is simply due to lack of capacity and crude \\\"horizontal flip\\\" training data. The experiment using the L_inv (invariance loss) was intended just to introduce the idea; a complete paper (or several) could be written to explore it further with more emphasis on output quality. As you say, the experiment presently fulfils the objective we claim: That the identity/pose distinction is clearly induced into the model by the training. Just to be clear, L_inv is only used in Fig. 
5, and none of the main experiments use it.\"}", "{\"title\": \"Updated Version\", \"comment\": \"We thank all the reviewers for their encouraging feedback and helpful comments, especially for pointing out parts that needed clarification, and the detailed advice on literature references and terminology by R4. We have addressed all the concerns raised by the reviewers, with detailed replies provided below.\\n\\nThe revised version (as of 2019-11-15) incorporates the following improvements:\\n1. TEXT: As all reviewers correctly remarked that clarity was lacking in some sections, we have made considerable improvements in the text, even rewriting complete paragraphs for better exposition.\\n2. ABLATION: As R1 and R3 pointed out, the paper benefits from an ablation study to dissect the contributions of the loss function components. Such an ablation study was carried out and has now been added to Sec. 4.\\n3. IMAGE QUALITY FIX: As R3 noted, there were visible grid artifacts in some images and some weakness in some of the metrics. Right before the Discussion Period, we found a bug in our implementation of the encoder. After fixing this bug and slightly re-adjusting the KL margin term, we re-ran the FFHQ experiments, and the grid artifacts reduced considerably. We have updated most of the affected FFHQ images (Fig. 4a, 6, 9 and 13) and the FFHQ metrics in Table 2. For the final revision, we will also re-run the CelebA-HQ and the rest of the experiments that could be similarly affected, and expect to improve the rest of the images similarly. Because none of the reviewers considered the performance to be a bottleneck issue in the first place, we consider the probable forthcoming improvements only a bonus in this case.\", \"detailed_list_of_changes\": \"\\u2022 Sec. 1\\u20132: Mostly incorporated improvements advised by R4, including some additional references and more precise framing of GANs vs. 
\\\"GANs with encoder\\\" (we did our best \\u2013 there is no 100% consensus on the terminology).\\n\\u2022 Sec. 3.1: Improved the clarity of explanation involving AdaIn (related to questions by R4).\\n\\u2022 Sec. 3.2: Rewrote parts of the Layer-specific loss (Lj) to improve clarity. This includes making the previously inlined key equations formatted on their own lines.\\n\\u2022 Sec. 3.2: For more context (covering various points raised by R1 and R4), we have included more exposition of the original AGE and PIONEER loss function and explanation of each loss term, as well as our departure of the original formulation, and described more clearly the nuances of deterministic inference and random sampling during training and evaluation.\\n\\u2022 Sec. 3.2\\u20133.3: Switched the representation of some terms to more intuitive forms and made e.g. variable indices between equations more consistent\\n\\u2022 Sec. 3.3: Rewrote this short section for more clarity.\\n\\u2022 Sec. 4: Added the ablation study (requested by R1 and R3) and explanation of PPL (requested by R4).\\n\\u2022 Sec. 
4.3: Improved explanation of the \"Invariances\" experiment (requested by R3), and fixed a mistake in the description of how exactly the equations were used.\\n\\u2022 Changes in other parts were necessary mostly to compress the paper, since the reviewers made several (well justified) requests for more thorough explanations.\"}", "{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper makes the following contributions:\\n- using the AdaIn architecture proposed by Karras et al., 2019 with the autoencoding architecture of AGE/PIONEER;\\n- a cyclic loss to enforce disentangling between different layers;\\n- a method to enforce invariances at specific layers.\\n\\nThe adaptation of the AdaIn architecture in an autoencoding fashion (a la AGE/PIONEER) is sensible and well motivated, combining a state-of-the-art generator while allowing inference in a compact setting (i.e. not requiring an additional discriminator). \\n\\nThe other contributions are harder to read and the writing should be improved.\\nThe cyclic loss should be better described. The notation of the KL divergence is confusing if you are using the KL divergence defined by AGE/PIONEER and will need to be explained. I will also assume that d_cos is the cosine loss as defined by the PIONEER paper. This should be mentioned as well.\\nThe method to enforce invariance is also not clear to me. While the authors introduce F as a \\\"known invariance\\\", it is unclear what role it plays in the cost function. Is F an invariant on which we measure this reconstruction loss d? What is d? Explaining that might shed light on the result Figure 5, e.g. why the images become blurry when doing this rotation. 
\\n\\nThe experiment demonstrates the sampling quality of the model and the transfer of features at different levels (coarse-medium-fine) in Figure 4. It is unclear what was the contribution of the layer-specific loss metric to allow that feature transfer. It seems from Figure 5 that the invariance objective has been roughly satisfied but at the cost of a significant drop in image quality.\\n\\nThe clearest contribution from this paper is definitely the AGE/PIONEER approach to train the AdaIn architecture. The two other contributions are unclear, both in their explanation and in what they contribute: the layer-specific loss is not compared to an architecture just trained in an AGE way, and the enforcing of invariance, although fulfilling its objective, might deteriorate other desirable properties of the model (e.g. sample quality).\"}", "{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #3\", \"review\": \"Deep Automodulators introduces a generative autoencoder architecture that replaces the canonical encoder-decoder autoencoder architecture with one inspired by StyleGAN. The encoder interacts with the decoder by modulating layer statistics via Adaptive Instance Normalization (AdaIN) conditioned on the latent. 
The paper trains this architecture with the loss framework of the Adversarial Generator\\u2013Encoder (AGE) and utilizes the progressive growing trick originally introduced in Progressive GAN which is also adapted by the Pioneer models, recent followups to AGE.\\n\\nThe use of AdaIN conditioning across multiple layers and multiple scales (like StyleGAN) and the ability to directly compute latent codes via the encoder allows the authors to introduce a disentanglement objective L_j and also an invariance objective L_inv to help encourage these properties in the models via consistency objectives.\\n\\nThe paper shows results demonstrating StyleGAN-style coarse/fine visual transfer on two high quality face datasets (importantly this is demonstrated on real inputs rather than samples as in StyleGAN) as well as respectable sample quality on LSUN Bedrooms and the LSUN Cars dataset.\\n\\nMy decision is weak reject. Overall, I think the paper is promising and shows a nice combination of efficient latent inference and controllable generation but the authors do not include ablations to validate some of their core contributions such as the L_j objective. Additionally, the improved controllability of the approach seems to unfortunately result in lower reconstruction quality than direct prior work such as Balanced Pioneer and this potential tradeoff is not investigated/discussed.\\n\\nTo expand a bit, there are three changes from that prior work that stood out to me. 1) The StyleGAN inspired architecture 2) the disentangling objective L_j and 3) using the loss function d\\u03c1 of Barron 2019. Successful ablations to demonstrate the importance of 2) to the presented results as well as better motivating / demonstrating the impact of including 3) would raise my score to a weak acceptance.\\n\\nMy other concern is that the reconstruction quality seems noticeably lower than that of the preceding work, Balanced Pioneer. 
This is reflected in its 10% reduction in LPIPS compared to the Automodulator\\u2019s paper. In general there also seem to be noticeable grid artifacts in the samples/reconstructions across all datasets, which don\\u2019t seem as prominent in Balanced Pioneer. It is not immediately clear why this is the case and additional investigation of this, such as checking whether this is due to the introduction of the disentanglement objective, or the inclusion of the Barron 2019 loss function would be informative.\", \"additional_comments\": \"Each subsection of Section 3 could be improved by providing a brief introduction to the motivation for and aim of each contribution before launching directly into how it is implemented / achieved. Without that bit of context on the goals of each subsection, it was more difficult to follow along with what was being done and why.\\n\\nThe presentation of L_j with lots of inlined equations intermixed with text gets a bit difficult to read / follow.\"}", "{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #4\", \"review\": \"The submission proposes an autoencoder architecture which combines two recent GAN-based architectural innovations, namely the progressive growing of the decoder architecture (as well as the encoder architecture in this case) and the use of the encoded representation to modulate the decoder via a feature-wise transformation mechanism.\\n\\nI think the overall idea behind the paper is valuable, but I don\\u2019t think the submission meets the acceptance bar from a clarity point of view. I also have concerns with its characterization of the literature.\", \"clarity_related_comments\": [\"\\u201cThis allows the layers to work independently of each other.\\u201d This is an imprecise use of the term \\u201cindependently\\u201d. 
How can two layers work independently if one\\u2019s input is the output of the other?\", \"\\u201cThe reconstructed samples could be re-introduced to the encoder, repeating the process, and requiring consistency between passes.\\u201d The rest of the paragraph is built on the premise that this is a desirable property, but I\\u2019m not sure I understand why this is a desirable property in the first place.\", \"How to define \\u201cdisentanglement\\u201d in the context of representation learning is in itself an unsettled question as far as I\\u2019m aware, but the submission uses the term without an intuitive or formal definition. What do the authors mean by \\u201cdisentangled representation\\u201d? What is measured by perceptual path length (PPL), and in which ways does PPL relate to the author\\u2019s definition of \\u201cdisentangled representation\\u201d?\", \"\\u201c[...] a new autoencoder-like model with powerful properties not found in regular autoencoders, including style transfer.\\u201d The term \\u201cstyle transfer\\u201d is overloaded; what do the authors mean?\", \"The submission defines AdaIn as a way to combine \\u201ccontent\\u201d and \\u201cstyle\\u201d, and defines the style \\u201cy\\u201d in terms of mean and variance. In the context of the AdaIn paper, this makes sense: the instance normalization shifting and scaling coefficients are heuristically defined as the channel-wise means and standard deviations of a \\u201cstyle\\u201d stack of feature maps. However, in the context of this submission I\\u2019m not sure this definition makes as much sense: the instance normalization shifting and scaling coefficients are the result of a linear projection of the latent representation and do not involve the channel-wise means and standard deviations of an external stack of feature maps; is this correct?\", \"\\u201cThis setup follows the same logic as that of Karras et al. 
(2019), but we do not require an ad-hoc disentanglement stack.\\u201d Can the authors clarify what they mean by an \\u201cad-hoc disentanglement stack\\u201d?\", \"Section 3.2 uses some notation for the encoder without introducing it first. I believe the only way to understand that \\\\phi(x) refers to the encoder network is to look at Figure 2.\", \"Section 3.2 as a whole is hard to follow, in part due to the use of imprecise language (\\u201cmutually independent\\u201d, \\u201crepresentation of those levels disentangled in z\\u201d). At some point probability distributions are introduced (up until now the reader is operating under the assumption that the model is an autoencoder with no probabilistic interpretation), and mutual information is mentioned to justify an L2 reconstruction loss in z-space (which I would argue is an instance of mathiness that does not serve the reader\\u2019s comprehension). Can the authors explain in plain language how layer-specific losses are defined and how the complete loss is obtained?\", \"The submission presents model samples, but as far as I can tell the procedure for sampling is not provided. Unlike VAEs, autoencoders do not explicitly model the empirical distribution -- although reconstruction in denoising autoencoders is related to the score of the empirical distribution (Alain & Bengio, 2014). How are samples obtained from the trained model?\"], \"literature_related_comments\": [\"The use of normalization layers to implement feature-wise transformation mechanisms is fairly widespread nowadays, but for instance normalization specifically the work of Dumoulin et al. (2017) pre-dates that of Huang & Belongie (2017). Both are cited by Karras et al. (2019) in relation to AdaIn (which is termed \\u201cconditional instance normalization\\u201d in Dumoulin et al. 
(2017)).\", \"I disagree with the characterization of ALI/BiGAN as \\u201chybrid models that combine the properties of VAEs and GANs\\u201d: unlike AAE and AVB, which minimize KL-divergence terms in the VAE loss adversarially, the objective for ALI/BiGAN is purely adversarial. I would also include IAE (Makhzani et al., 2018) and BigBiGAN (Donahue et al., 2019) in the list of GAN variants that incorporate an inference mechanism.\", \"The submission repeatedly asserts that GANs lack an inference mechanism: \\u201cUnlike GANs, autoencoder models can directly operate on input samples.\\u201d; \\u201cTo work on new input images, GANs either need to be extended with a separate encoder, or inverted [...].\\u201d; \\u201c[...] GANs show good image quality, but have no built-in encoding mechanism [...]\\u201d; \\u201c[...] the problem with GANs is that they lack the encoder [...]\\u201d. This is false: see for example ALI, BiGAN, and BigBiGAN. The problem in my opinion is elsewhere: the kinds of reconstructions these models yield are not suited to the downstream applications investigated in this submission, because they oftentimes fail to preserve low-level details.\"], \"references\": [\"Alain, G., & Bengio, Y. (2014). What regularized auto-encoders learn from the data-generating distribution. The Journal of Machine Learning Research, 15(1), 3563-3593.\", \"Dumoulin, V., Shlens, J., & Kudlur, M. (2017). A learned representation for artistic style. In Proceedings of the International Conference on Learning Representations.\", \"Makhzani, A. (2018). Implicit autoencoders. arXiv:1805.09804.\", \"Donahue, J., & Simonyan, K. (2019). Large scale adversarial representation learning. arXiv:1907.02544.\"]}" ] }
BkgNqkHFPr
Enhanced Convolutional Neural Tangent Kernels
[ "Dingli Yu", "Ruosong Wang", "Zhiyuan Li", "Wei Hu", "Ruslan Salakhutdinov", "Sanjeev Arora", "Simon S. Du" ]
Recent research shows that for training with l2 loss, convolutional neural networks (CNNs) whose width (number of channels in convolutional layers) goes to infinity, correspond to regression with respect to the CNN Gaussian Process kernel (CNN-GP) if only the last layer is trained, and correspond to regression with respect to the Convolutional Neural Tangent Kernel (CNTK) if all layers are trained. An exact algorithm to compute CNTK (Arora et al., 2019) yielded the finding that classification accuracy of CNTK on CIFAR-10 is within 6-7% of that of the corresponding CNN architecture (best figure being around 78%) which is interesting performance for a fixed kernel. Here we show how to significantly enhance the performance of these kernels using two ideas. (1) Modifying the kernel using a new operation called Local Average Pooling (LAP) which preserves efficient computability of the kernel and inherits the spirit of standard data augmentation using pixel shifts. Earlier papers were unable to incorporate naive data augmentation because of the quadratic training cost of kernel regression. This idea is inspired by Global Average Pooling (GAP), which we show for CNN-GP and CNTK, GAP is equivalent to full translation data augmentation. (2) Representing the input image using a pre-processing technique proposed by Coates et al. (2011), which uses a single convolutional layer composed of random image patches. On CIFAR-10 the resulting kernel, CNN-GP with LAP and horizontal flip data augmentation achieves 89% accuracy, matching the performance of AlexNet (Krizhevsky et al., 2012). Note that this is the best such result we know of for a classifier that is not a trained neural network. Similar improvements are obtained for Fashion-MNIST.
[ "neural tangent kernel", "data augmentation", "global average pooling", "kernel regression", "deep learning theory", "kernel design" ]
Reject
https://openreview.net/pdf?id=BkgNqkHFPr
https://openreview.net/forum?id=BkgNqkHFPr
ICLR.cc/2020/Conference
2020
{ "note_id": [ "srZ7H-3QtN", "SJgsaAK2sB", "H1eOHZN2or", "ryxC7H0oir", "Hke5xNAijH", "H1x7kXRosB", "H1lNYzCooB", "ryltv8TTFB", "S1gzyEd6FH", "Byxqt4VztH" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798734685, 1573850818551, 1573826879586, 1573803301609, 1573802994218, 1573802714838, 1573802620288, 1571833440845, 1571812314041, 1571075202499 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1873/Authors" ], [ "ICLR.cc/2020/Conference/Paper1873/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper1873/Authors" ], [ "ICLR.cc/2020/Conference/Paper1873/Authors" ], [ "ICLR.cc/2020/Conference/Paper1873/Authors" ], [ "ICLR.cc/2020/Conference/Paper1873/Authors" ], [ "ICLR.cc/2020/Conference/Paper1873/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper1873/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper1873/AnonReviewer2" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"This paper was assessed by three reviewers who scored it as 6/3/6.\\nThe reviewers liked some aspects of this paper e.g., a good performance, but they also criticized some aspects of work such as inventing new names for existing pooling operators, observation that large parts of improvements come from the pre-processing step rather than the proposed method, suspected overfitting. Taking into account all positives and negatives, AC feels that while the proposed idea has some positives, it also falls short of the quality required by ICLR2020, thus it cannot be accepted at this time. 
AC strongly encourages authors to go through all comments (especially these negative ones), address them and resubmit an improved version to another venue.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Thank you for your reply\", \"comment\": \"Thank you for your reply!\\n\\nOn difference from (Dao et al. 18):\\nTo clarify, (Dao et al. 18) assumes the loss is linear (second order term vanishes), and has no restriction on the transformations. We do not need to assume loss is linear (we considered l2 loss), but we require transformations to form a group. \\n\\nTherefore, the results in (Dao et al. 18) cannot be applied to our setting.\", \"on_linear_actions\": \"Both the result in (Dao et al. 18) and our result do not require the linearity of actions, although both translation and flip are indeed linear (viewed as functions on the image space).\"}", "{\"title\": \"Thanks for the rebuttal\", \"comment\": \"Dear authors,\\n\\n1. Thanks for the clarification.\\n2. Thanks for the clarification.\\n3. So, as far as I understood, (Dao et al) consider data augmentation which are linearized. As all the layers of your architectures are (linearly) covariant with the action of translation, it implies that the action of translations on the sample $x$ is a linear action on the obtained representation. This is exactly the setting of the Section 4.1 of (Dao et al), if a subset of translations is uniformly sampled. In other words, if a group action is linear, averaging along an orbit of the group leads to a linear operator. I agree that this setting is different for the flips, yet this is a group with simply 2 elements... Am I incorrect?\\n4. Thanks for the clarification.\\n5. I partially agree: I think it is still computationally tractable (maybe not on standard academic resources), however approximate methods exist. 
I think this would have been interesting, as you observed that directly solving the regression (which incorporates a regularization) allows one to obtain good performance: it is surprising given that there is no supervision. Thanks however for the clarification.\\n6. Thanks.\\n7. OK.\\n\\nI will revise my review. Thank you very much for your rebuttal.\"}", "{\"title\": \"General Response and Revision Summary:\", \"comment\": \"We thank all reviewers for their constructive comments! All major changes in our paper are marked in red. We made the following main changes in our revision.\\n1.\\tWe changed the title according to the suggestion from Review #3.\\n2.\\tWe added more detailed discussions on previous work on methods that are not trained networks in the last paragraph of Section 2.\\n3.\\tWe have added validation accuracy in Appendix F. The cross-validation details are in Section 6.1.\\n4.\\tWe now use the name box filtering for the operation on CNN according to [1].\\n5.\\tWe have provided a link to our current implementation.\\n\\n\\n[1] Richard Szeliski. Computer vision: algorithms and applications. Springer Science & Business Media, 2010.\"}", "{\"title\": \"Response\", \"comment\": \"Thank you for your review. We have revised our paper according to your comments. Please find our response to your comments below.\\n1.\\tWe acknowledge this is not a new operation. We now use the name box filtering according to [1].\\n2.\\tWe have added cross-validation in our experiments. See the first paragraph in Section 6.1 for the detailed procedure of cross-validation. We have underlined the accuracy on test data that corresponds to the best hyper-parameter via cross-validation. We remark that we still achieve 88.91% accuracy on CIFAR-10 with cross-validation. \\n3.\\tWe are sorry that we do not have the computational resources to run $c=0$ and $c=32$ during the rebuttal period since it requires roughly 1,000 GPU hours. 
Note that in the setting where features are predefined without using the data, LAP gives 2-3% improvement over GAP.\\n4.\\tSorry for the confusion. The conjecture refers to the sentence in the original paper that proposed GAP (Lin et al., 2013), which stated that one possible reason that GAP improves the performance is that CNN with GAP has fewer training parameters than CNN without GAP. Our point is that since we fixed the last fully-connected layer, CNNs with GAP or without GAP have the same number of training parameters but GAP still improves the performance, so the benefit of GAP may not be explained by the fact that it leads to fewer training parameters.\\n5.\\tWe choose the smallest $\\\\lambda$ so that solving kernel regression is numerically stable across all settings. \\n\\n[1] Richard Szeliski. Computer vision: algorithms and applications. Springer Science & Business Media, 2010.\", \"for_minor_comments\": \"1.\\tFor CIFAR-10, the bottleneck of using CNN-GP and CNTK is not solving kernel regression but computing the kernel values. The time complexity is $O(p^2 n^2)$ where $p$ is the number of pixels in each image and $n$ is the number of data points. We have added a clarification in the paper. \\n2.\\tBoth Garriga-Alonso et al. (2019) and Novak et al. (2019) have implemented the CNN-GP kernel rather than the CNTK kernel. We have changed the first paragraph to make it clear.\\n3.\\tWe have slightly changed the definition of the \\u201caugmented kernel\\u201d, so that $K^{\\\\mathcal{G}}$ is a kernel even when $K$ is not invariant under $\\\\mathcal{G}$. If $K$ is invariant, the definition remains the same.\\n4.\\tFor Figure 1, we have enlarged its size and moved it to the appendix. \\n5.\\tWhen \\u201cc\\u201d is as large as 12, it does create unrealistic images, but certainly with much smaller probability than full translation. In fact, whether the image is unrealistic is not exactly what we care about. 
As long as an augmented image is much closer to its real class than other classes (e.g., Figure 1.b is much more like a \\u201ctruck\\u201d than an \\u201cautomobile\\u201d/\\u201dairplane\\u201d/\\u201dbird\\u201d), it will likely improve the robustness of the classifier, and thus improve the accuracy. Therefore, we should choose a proper value for $c$ to get as many \\u201chelpful\\u201d augmented samples as possible.\\n6.\\tIf the input images have $c$ channels and $p$ pixels, calculating the kernel value in the first layer requires $O(cp^2)$ time (see the formula for $K^{(0)}$ in Appendix A). Thus, with $n$ input images, the total cost would be $O(n^2cp^2)$. With a very large value of $c$ this would be even more expensive than calculating the rest of the kernel values. In our experience, with $c = 2048$ channels, calculating the kernel value in the first layer requires roughly 750 GPU hours on NVIDIA Tesla V100. If we take $c = 256,000$ for example, this will require roughly 100,000 GPU hours. We are sorry but this is well beyond our available computational resources. We are planning to perform the other experiments requested by the reviewer after getting more computational resources.\"}", "{\"title\": \"Response\", \"comment\": \"Thank you for your review. We have revised our paper according to your comments. Please find our response to your comments below.\\n1.\\tWe do agree GAP is also introduced in [1] and we do not claim we invented GAP. In fact, GAP is first proposed in Lin et al. (2013) and is a standard component in modern CNNs. For LAP, we do not think it is introduced in [1]. At least [1] does not define such an operation explicitly. Moreover, although [1] shows that CNN-GP with GAP is invariant to translations, in this paper we establish a formal connection between GAP and full translation data augmentation, which is novel and does not appear in [1]. 
\\n2.\\tWe have changed our title to reflect the fact that our paper deals with both CNN-GP and CNTK.\\n3.\\tWe acknowledge this is not a new operation. We now use the name box filtering according to [4].\\n4.\\tFor CNN we used many standard tricks including batch norm, weight decay and momentum. Note CNTK corresponds to CNN without using these tricks and are trained via gradient flow.\\n5.\\tWe have provided a link to our current implementation. We will further clean up the codes and make it public after acceptance. We will share the kernel values as well.\\n\\n[4] Richard Szeliski. Computer vision: algorithms and applications. Springer Science & Business Media, 2010.\"}", "{\"title\": \"Response\", \"comment\": \"Thank you for your review. We have revised our paper according to your comments. Please find our response to your comments below.\\n1.\\tIn the related work section, we have added discussion on SOTA results in the unsupervised learning and in the no-data setting.\\n2.\\tWe have added cross-validation in our experiments. See the first paragraph in Section 6.1 for the detailed procedure of cross-validation. We have underlined the accuracy on test data that corresponds to the best hyper-parameter via cross-validation. We remark that we still achieve 88.91% accuracy on CIFAR-10 with cross-validation. \\n3.\\tOur results in Section 4 is not a restatement of or implied by (Dao et al. 2018). In our paper we consider two types of data augmentation. The first type is enlarging the training dataset (as used in practice), and the second type is averaging the prediction on new training samples obtained by applying different transformation on the original training samples. These two methods are in general very different, except for unrealistic cases when the loss function is almost linear, as discussed in (Dao et al., 2018). 
(Dao et al., 2018) also assumes the augmented images from the same image have very small variance such that the quadratic term of the Taylor expansion of the loss vanishes, which is not the case for the operations studied in this paper. For instance, horizontal flip could induce large variance. Even if the objective values given by these two types of augmentation methods might be close, the gradients can still be very different, and thus could result in different trajectories and solutions. In this paper, we give a much stronger result in the case of kernel regression (note the loss is l2 and thus non-linear). We prove that when the transformations used to generate augmented samples form a group, solving kernel regression using the above two types of data augmentation gives *exactly* the same solution, which explains the success of Global Average Pooling.\n4.\tWe acknowledge this is not a new operation. We now use the name box filtering according to [1].\n5.\tComputing the spectrum requires performing SVD on a 50000 by 50000 matrix, which is computationally infeasible given our computational resources. \n6.\tWe have provided a link to our current implementation. We will further clean up the code and make it public after acceptance.\n7.\tWe have moved Figure 1 to the appendix.\n\n[1] Richard Szeliski. Computer vision: algorithms and applications. Springer Science & Business Media, 2010.\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper considers architectures that do not involve learning (up to the classification layer) and tries to improve their accuracies. They are based on the CNTK and CNN-GP works. 
This is purely a numerical paper and its contribution is to show that despite not being learned, the obtained representations are competitive with supervised neural networks.\n\nOverall, despite the fact that I find this numerical result interesting, I found too many flaws to justify its acceptance (fine-tuning on the test set, lack of comparison with the state of the art...).\", \"pros\": [\"Good numerical performances.\"], \"cons\": [\"Given the claim in the abstract about accuracies, it should be pointed out that:\", \"in the unsupervised setting, with a kernel engineering method, you can obtain ~86% on cifar10 (cf https://arxiv.org/abs/1605.06265 )\", \"in the no-data (up to a linear model) setting, it is possible to get ~82% on cifar10 with the scattering networks (cf https://arxiv.org/abs/1412.8659 )\", \"Those two works are also mainly empirical, and thus some accuracies of this paper should be compared to them.\", \"There is a significant number of experiments (tables 1/2/3/4). While this should have been a positive aspect of the paper, I noticed that the accuracies reported here are computed from the test set. A validation set should have been used with careful cross-validation. I'm aware this is a standard practice in deep learning, yet here it seems obvious to me that some hyperparameters have been fine-tuned on the testing set.\", \"Section 4: isn't it a rephrasing of (Dao et al., 2018)? (which is cited) I think this should be clearly stated.\", \"Section 5: The paper cites the Local Average Pooling as a \\"new operation\\", but this is clearly standard in the literature. \\"Box blurring\\" has always been named average pooling in deep learning, and low-pass filtering in signal processing. It was used before researchers employed a stride of 2 in convolutions. A similar pooling is also present in https://arxiv.org/abs/1605.06265\", \"I'm pleasantly surprised that the authors didn't encounter any significant conditioning issues. 
Would it be possible to show the spectrum of the kernel? This could be commented on.\", \"Nothing about the future release of the code is indicated.\"], \"minor\": [\"I find that Figure 1 is not informative to the reader.\"], \"post_discussion\": \"The revision clarifies all my concerns and this work is likely to induce interesting discussions.\"}", "{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper builds on recent developments of CNN-GPs and CNTKs on multiple fronts, obtaining a significant performance boost on the CIFAR-10 dataset (and some mild boost on Fashion-MNIST). One way is through the use of Local Average Pooling (LAP) layers, which interpolate between a Global Average Pooling (GAP) layer and no pooling layer. The authors also introduce flip data augmentation by doubling the dataset. With the help of an additional feature extractor, this paper obtained 89% classification accuracy on CIFAR-10, which is the best among methods not using trained neural networks.\n\nThe discussion in section 4 regarding the augmented kernel and data augmentation is quite clear and revealing. It\u2019s unfortunate that the flip augmentation could not be introduced at the kernel level. It would be interesting for future work to find a kernel operation similar to GAP that encodes symmetries of the dataset. \n\nWhile the paper is clearly written and the results are strong, there are a few criticisms I\u2019d like to raise and hope the authors address. \n\nAFAIK both GAP and LAP for CNN-GP are already introduced and analyzed in [1]. It seems the best results on CIFAR-10 all come from CNN-GP (with and without flip augmentation, with and without using the extra feature extractor), and I think the authors should properly credit [1] for GAP/LAP in convolutional kernels. 
It\\u2019s fair that this paper along with [2] was able to efficiently implement and scale up to full CIFAR-10 dataset and demonstrated pooling layer\\u2019s full potential for kernels corresponding to infinitely wide CNNs. Also in this regard the title could be misleading. It\\u2019s strange to have paper\\u2019s strongest result is based on CNN-GP while the title only mentions CNTK.\\n\\nAs the author\\u2019s mention in the paper, Box Blur is just an average pooling operation. This is already widely use by practitioners(e.g. [3]) and I don\\u2019t understand how author\\u2019s claim: \\u201cThis operation also suggests a new pooling layer for CNNs which we call BBlur\\u201d \\n\\nFew question/comments:\\n\\nBest parameters for trained CNN\\u2019s BBlur c is smaller than best c values for kernels, do authors understand the cause of discrepancy? \\n\\nIt would benefit the research community if authors could share code to generate the CNN-GP Kernels / CNTKs with LAP. Also I would encourage authors to share actual numerical values of kernel matrix for other research groups to analyze and encourage reproducibility.\\n\\n\\n[1] Novak et al., Bayesian Deep Convolutional Networks with Many Channels are Gaussian Processes, ICLR 2019\\n[2] Arora et al., On Exact Computation with an Infinitely Wide Neural Net, NeurIPS 2019\\n[3] Huang et al., Densely Connected Convolutional Networks, CVPR 2017\"}", "{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper shows that there is a one-to-one correspondence between pixel-shift based data augmentation and average pooling operations in CNN-NNGP/NTK based ridge regression. 
Interestingly, the authors show that standard average pooling + flatten can lead to a better performance than simple global average pooling. This paper further shows that using the data pre-processing step proposed in (Coates et al., 2011) can boost the performance of CNN-NNGP/NTK based ridge regression by ~7%, which allowed the authors to achieve classification accuracy in the high 80s, which is AFAIK SOTA on CIFAR-10 when not using learned representations.\n\nMy current assessment of the paper is \u201cweak accept\u201d. There are two main reasons why I am on the verge of recommending rejection of this paper: (1) I believe that the experimental evaluation is not done entirely correctly, leading to inflation of the reported results (my guesstimate is by ~0.5-2%)---please see my \u201cMajor comments\u201d. If this is not fixed, I am very likely to downgrade my score. (2) While the observation of the relationship between pixel-shifts and average pooling is very nice (which is why my current score is \u201cweak accept\u201d), it seems that most of the improvement comes from application of the pre-processing step of Coates et al. (2011) (seems like a ~7% improvement!). Given the large computational cost of CNN-NNGP/NTK (the authors say about 1000 GPU hours), I wonder whether a simpler algorithm like some of the newer variants of boosting combined with the Coates et al. algorithm wouldn\u2019t also perform at around 87-88% like CNN-NNGP/NTK (given the baseline 85-86% accuracy of the Coates et al. (2011) algorithm reported by the authors).\", \"major_comments\": [\"Can you please clarify why you decided to give a new name (Box Blur) to standard average pooling? Why not just use the existing name?\", \"I believe that the way you report results in all the tables (i.e., tables 1-6) and the text based upon them is flawed. 
The right approach would be to select the hyperparameters \\u201cc\\u201d and \\u201cd\\u201d on a validation set, and then report the performance with these hyperparameters on the test set. While the experiments are somewhat rescued by the fact that you report results for (almost) all the possible hyperparameter settings (which allows us to see samples from the population distribution of the generalisation error), type-setting the best results in boldface and thus implying that these are valid estimates of the generalisation error is not appropriate since you are effectively selecting the best hyper-parameters on the test set! Unfortunately, I cannot accept these results to be published \\u201cas-is\\u201d. While re-running the experiments with hyperparameter selection on validation set is already a somewhat imperfect solution, I am not sure I can see a better way forward. However, I do understand that this could be prohibitively expensive in which case I would like to ask you to suggest an alternative solution please (of course, other reviewers are welcome to chime in as well)?!\", \"While most of the paper is about Local Average Pooling (LAP) and the equivalence between averaging and pixel shifts, the experimental results seem to show that most of the improvement comes from the use of Coates et al.\\u2019s preprocessing step. Could you please run the experiments in tables 3 and 4 with c=0 and c=32 to see what the effect of the preprocessing is without LAP?\", \"In sect.6.3, you say \\u201cOur experiment illustrates that even with a fixed last FC layer, using GAP could improve the performance of CNN, and challenges the conjecture that GAP reduces the number of parameters in the last fully-connected layer and thus avoids overfitting.\\u201d I am not sure I see why fixing the last FC layer should provide more convincing evidence than training it? 
I do not know the conjecture to which you refer but from your description, the overfitting without GAP should occur because the FC layer has more parameters than with GAP?! If this is true, then the overfitting would happen in the last layer (due to the large number of parameters) which you have (at least partially) prevented by not training it?! Can you clarify and also report the results of this experiment with all the layers trained please?\", \"In Appendix D, you say that you have used lambda = 10^{-5} for all configurations. How have you selected this particular value please? Do you have a sense of how far from optimal this value is for all the different configurations (or at least for NTK vs NNGP models---in my experience, the optimal setting between the two can differ quite a bit)?\"], \"minor_comments\": [\"In the abstract and throughout the paper, you claim that the cost of kernel regression is quadratic. AFAIK without any approximations, the cost is cubic (or O(n^{2.67}) to be more precise). Please clarify.\", \"In par.1 on p.1, you say \\u201cconvolutional neural networks (CNNs) whose width (number of channels in convolutional layers) **goes to infinity**\\u201d (emphasis mine) and cite the Jacot et al. (2018) paper. AFAIK this paper only works with infinite networks but does not actually prove that **deep** networks of finite width (in each layer) converge to the NTK limit; IMHO you should cite the Allen-Zhu et al. (2018) and Du et al. (2018) papers from your references for that result. Based on p.2 (end of par.2 in sect.2), you seem to be aware of this distinction but cite Arora et al. (2019) instead of these two; I would suggest either citing Allen-Zhu et al. and Du et al. only, or citing all three as the Arora et al. 
paper came out later than the first versions of the other two papers, which AFAIK already contained all the necessary derivations (even if the words \u201cNeural Tangent Kernel\u201d were not spelled out there).\", \"Also in par.1 on p.1, you say that Arora et al. (2019) was the first to provide an algorithm to compute the CNTK kernel, which is a bit of a stretch given that both Garriga-Alonso et al. (2019) and Novak et al. (2019) have implemented the CNTK kernel in their experiments. AFAIK the claim in (Arora et al., 2019) is that they provided the first **efficient** implementation of the CNTK-GAP kernel, which should be made clearer in the next revision of your paper.\", \"On p.2, you say \u201cThese kernels correspond to neural networks where only the last layer is trained.\u201d In reality, the correspondence is not exact for finite networks because the induced kernel will not be exactly equal to the one at the limit.\", \"Bottom of p.2, \u201cGlobal Average Pooling (GAP) is proposed\u201d -> \u201c... was proposed\u201d.\", \"Top of p.3, \u201c..., and GAP is more robust\u201d -> \u201c..., and that GAP is more robust\u201d.\", \"On p.3 in the \u201cPadding Schemes\u201d paragraph, do you mean to assume that the input image has only a single channel (not necessary later)?\", \"On p.4, I am slightly confused by your definition of the \u201caugmented kernel\u201d. Specifically, it does not seem K^G (x , x\u2019) = K^G (x\u2019, x) holds in general. Can you please clarify? 
If there\\u2019s no symmetry, I do not think it necessary to use a different name, but perhaps a clarifying note would be beneficial to the reader?!\", \"On p.5, fig.1 is too small when printed and one needs to use the computer screen to see what is depicted; given the amount of white space around, can you please try to make the images larger (you can perhaps also only include 2 or 4 images instead of 16 which will give you additional space)?\", \"On p.5, you say that for small \\u201cc\\u201d, circular padding will not create unrealistic images. Looking at fig.1b, it seems like the images are not as unrealistic as in fig.1a but human eye can still tell they are not realistic (potentially even more so with other images than the one selected for this figure). I am not sure whether there is a reason to assume this issue does not affect CNNs too?! Further, I am not convinced the motivation is correct in the first place given that the optimal \\u201cc\\u201d for CIFAR-10 is 12 which will presumably create clearly unrealistic images; perhaps it would be best to omit this motivation?!\", \"On p.6, you claim \\u201cAnother advantage of LAP is that it **does not incur any extra computational cost**\\u201d (emphasis mine) while at the next line you say that there is a constant additional computational cost. Perhaps say that the extra computational cost is relatively small?\", \"It might be nice to swap tables 3 and 4 so at least the results for NNGP are next to each other. Even better would be the current table 3 was closer to table 1 to achieve the same effect for NTK.\", \"I am not sure I fully understand the description in sect.6.3: isn\\u2019t the number of channels on the input irrelevant after computation of the kernel in the first layer? In other words, why have you opted to use only 2,048 patches in your experiments and not 32,000 or 256,000 as used by Recht et al. (2019)? 
Do you have an estimate of how different the performance of NNGP/NTK could be with the larger number of features? Do you know what the performance of Coates et al.\u2019s algorithm is with only 2,048 features? Relatedly, do you know how AlexNet would perform if its PCA data augmentation was replaced by Coates et al.\u2019s feature extractor?\"]}" ] }
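Regarding the symmetry question about the augmented kernel raised in the minor comments above: one natural symmetrized definition (an assumption here, not necessarily the authors' exact redefinition) is K^G(x, x') = (1/|G|^2) * sum over g, g' of K(gx, g'x'). This is the Gram function of the group-averaged feature map, so it is symmetric and positive semi-definite for any base kernel, invariant or not. A small numerical check (illustrative; the RBF base kernel and cyclic-shift group are arbitrary choices):

```python
import numpy as np

def rbf(a, b, sigma=1.0):
    return np.exp(-np.sum((a - b) ** 2) / (2.0 * sigma ** 2))

def augmented_kernel(a, b, k):
    # K^G(x, x') = (1/|G|^2) * sum_{g, g'} k(g x, g' x'),
    # here with G = the group of cyclic shifts of a 1-D signal.
    d = len(a)
    return np.mean([[k(np.roll(a, g), np.roll(b, h)) for g in range(d)] for h in range(d)])

rng = np.random.default_rng(0)
xs = rng.normal(size=(6, 5))
K = np.array([[augmented_kernel(u, v, rbf) for v in xs] for u in xs])

assert np.allclose(K, K.T)                   # symmetric, even though k(gx, g'x') alone is not
assert np.linalg.eigvalsh(K).min() > -1e-10  # positive semi-definite (up to round-off)
```

Averaging over only one argument, K(x, gx'), is the version whose symmetry fails for non-invariant K, which is presumably what the review is pointing at.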
H1g79ySYvB
Revisiting Gradient Episodic Memory for Continual Learning
[ "Zhiyi Chen", "Tong Lin*" ]
Gradient Episodic Memory (GEM) is an effective model for continual learning, where each gradient update for the current task is formulated as a quadratic program problem with inequality constraints that alleviate catastrophic forgetting of previous tasks. However, practical use of GEM is impeded by several limitations: (1) the data examples stored in the episodic memory may not be representative of past tasks; (2) the inequality constraints appear to be rather restrictive for competing or conflicting tasks; (3) the inequality constraints can only avoid catastrophic forgetting but can not assure positive backward transfer. To address these issues, in this paper we aim at improving the original GEM model via three handy techniques without extra computational cost. Experiments on MNIST Permutations and incremental CIFAR100 datasets demonstrate that our techniques enhance the performance of GEM remarkably. On CIFAR100 the average accuracy is improved from 66.48% to 68.76%, along with the backward (knowledge) transfer growing from 1.38% to 4.03%.
[ "gem", "inequality constraints", "gradient episodic memory", "continual learning", "effective model", "gradient update", "current task", "quadratic program problem", "catastrophic forgetting" ]
Reject
https://openreview.net/pdf?id=H1g79ySYvB
https://openreview.net/forum?id=H1g79ySYvB
ICLR.cc/2020/Conference
2020
{ "note_id": [ "haDMmb_GW", "SkeYd-i2FB", "Bylpt08hYS", "HJxngHKiKB" ], "note_type": [ "decision", "official_review", "official_review", "official_review" ], "note_created": [ 1576798734657, 1571758449116, 1571741317351, 1571685619975 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1872/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper1872/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper1872/AnonReviewer2" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"This paper proposes an extension of Gradient Episodic Memory (GEM), namely support examples, soft gradient constraints, and positive backward transfer. The authors argue that experiments on MNIST and CIFAR show that the proposed method consistently improves over the original GEM.\n\nAll three reviewers are not convinced by the experiments in the paper. R1 and R3 mentioned that the improvements over GEM appear to be small. R2 and R3 also have concerns about the lack of results over multiple runs. R3 has questions about hyperparameter tuning. The authors also appear to be missing recent developments in this area (e.g., A-GEM). The authors did not provide a rebuttal to these concerns.\n\nI agree with the reviewers and recommend rejecting this paper.\", \"title\": \"Paper Decision\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper presents three improvements to the previous GEM algorithm: choosing support examples better (the GEM paper used a random set), incorporating soft gradient constraints, and specifying the magnitude of the dot product in the gradient optimisation problem (in order to increase positive backward transfer). 
They then compare the original GEM algorithm (and other baselines) with their algorithm on two datasets, consider the case where there is less memory available, and provide ablation studies for their three improvements.\n\nI recommend rejecting this paper. I am struggling to see the improvements in the results, as each algorithm is only run once on each dataset, and all the numbers seem very close together (GEM vs. the new algorithm). There are also a significant number of hyperparameters added by this paper: how are these tuned? I also do not understand the reasoning behind the third idea ('positive backward transfer'). I do, however, like the other two improvements and see the reasoning behind them, but I still have some misgivings. I will now elaborate on these points.\n\nFirstly, there are no standard deviations provided for the experiments. I believe this is especially important to do for the original GEM algorithm and the proposed algorithm because the two provide extremely similar metric values. Comparing values from the original GEM paper and this paper's run of GEM indicates to me that there is a significant chance that any improvements could be within error bars. For example, on CIFAR100, the authors' run of GEM gives 66.48%, the original GEM paper reports 67.83%, and the authors' method is 68.76%.\n\nSecondly, this paper introduces a significant number of hyperparameters over the original GEM algorithm. How are these tuned? Is there a validation set? Does having to tune these hyperparameters slow down computation significantly (how much exactly)? It would be nice to see how much more computationally expensive the new method is compared with GEM: the authors claim \\"little computational burdens\\".\n\nI like the soft gradient constraint idea introduced in this paper. It is more principled than the hack that the original GEM paper used. 
The method of choosing the support set also makes sense to me; however, there are many hyperparameters in this idea. Additionally, the results (Table 3) for different memory sizes are confusing. It seems like smaller memory sizes lead to less improvement over random memory. Surely cleverly choosing memory should be more beneficial when constrained to smaller memory? This indicates to me that improvements still need to be made to the idea. I also do not follow the explanation given in Section 4.2.1 for the positive backward transfer improvement. Why should the magnitude of the inner products be specified? What do you mean by \\"cosine similarity in magnitude\\"?\n\nAs a final comment, I will say that there are many typos in this paper.\"}", "{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper proposes three improvements upon the gradient episodic memory (GEM) method [Lopez-Paz, 2017]. The three improvements address 1. A way of selecting which exemplars to store (originally random), 2. The strictness of the constraints is loosened by considering slack vectors, 3. The update is improved by promoting positive backward transfer, not only limiting the gradient to the constraints imposed by exemplars of previous tasks, but also aiming to improve on previous tasks by preferring gradients which have high cosine similarity with the gradients on exemplars. Results on CIFAR 100 and MNIST permuted are presented. Noteworthy is the improved backward transfer on CIFAR 100 when compared with GEM.\", \"conclusion\": \"The proposed improvements seem sensible. I especially liked the one which aims at improved backward transfer. However, the paper should have built upon the more recent A-GEM paper. 
Also, the proposals show strange behavior in the ablation study, and I am not convinced they all contribute to better performance. Finally, the gain with respect to GEM is very small. I, therefore, recommend a weak reject.\n\n1. The authors somehow missed the more recent paper \u2018EFFICIENT LIFELONG LEARNING WITH A-GEM\u2019. Their analysis should compare with this paper. This new paper does not address the points addressed in this paper, but even so, since it obtains generally slightly better results and is much more efficient, it is a better starting point. The relaxing of the constraints (with slacking vectors) might be less effective in A-GEM, which already relaxes the constraints.\n2. Improved exemplar sampling: The authors say \u2018we cannot afford to assign memory to the examples whose margins are negative\u2019; why not? I would like to see this ablated.\n3. Why does the average accuracy start so low? Are they also averaging over the unseen classes? Does it not make more sense to just average over the seen classes (those considered until the current task)?\n4. The gain with respect to GEM is very small in Figure 1.\n5. Could the authors explain their validation protocol (A-GEM also makes a point of that)?\n6. I think there is a typo in the equation: it should have a j index on the left side, and should be A_jn-A_nn.\n7. The ablation study should be better written (it takes a while to understand to which of the three proposals the abbreviations link). The results of the ablation do not convincingly show the merits of two of the three proposals. These only work when combined with \u2018threshold\u2019. No reason/explanation is provided for this strange behavior. 
(Error in the legend: the support+soft curve should be green.)\"}", "{\"rating\": \"1: Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposes an extension to gradient episodic memory (GEM) to improve its performance and backwards transfer. Specifically, the proposed method selects \\"support examples\\" to represent each task (versus the last M examples for GEM); introduces slack variables to ensure the constraints imposed by GEM are not too restrictive; and uses cosine similarity between sample gradients to encourage backwards transfer.\n\nThe ideas proposed are interesting, and the paper is easy to follow.\n\nHowever, I have a number of concerns which I think, unfortunately, preclude publication at this point.\n\nPrimarily, while I believe the ideas are intuitive and novel, the experimental evaluation does not appear to support the claim that the proposed method significantly improves GEM (only a minor improvement is shown at best). This is further compounded by the fact that variances over multiple seeds/runs are not reported, which makes it difficult to gauge any statistically significant performance improvement.\", \"some_claims_in_the_paper_also_need_to_be_tempered\": \"- The abstract suggests performance improves \\"remarkably\\", but experiments do not support this.\n- The last paragraph of Section 4.2 declares that adding an offset to the Lagrangian, as suggested by the GEM authors, \\"is an ad-hoc practice and lacks rationality: it is not faithfully follow their mathematical formulation\\". 
This may come across as confrontational, and I don't feel the point is valid in this case, given that the soft gradient constraints introduce a similar trade-off parameter.\n\nThe language is also imprecise and conversational at times; I would suggest another read-through and careful rewording for clarity. For example, in 4.2.2, \\"this new constraint imposes cosine similarity actually...\\".\n\nLastly, I'm not sure about this, but does the link to the GitHub repository break anonymity? Given the language difference, I'm not sure what the url refers to, but I wonder if it could be a name.\n\nThe ideas have potential, and I suggest the authors explore avenues to further improve the efficacy of the method, expand the experimentation, and improve some of the writing in terms of language and claims made.\n\n(A note on the score: I don't think this is a 1 under the old ICLR system, but I unfortunately have to recommend it be rejected in its current state)\"}" ] }
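The gradient constraints discussed throughout these reviews can be made concrete with the single-constraint A-GEM variant that Review #1 points to: if the proposed update g has a negative inner product with a reference gradient g_ref computed on the episodic memory, g is projected so that the constraint <g_tilde, g_ref> >= 0 holds. A minimal numpy sketch (illustrative; this is the A-GEM rule, not the code of either paper under review):

```python
import numpy as np

def agem_project(g, g_ref):
    # A-GEM-style correction: if the proposed update g would increase the loss
    # on the episodic memory (negative inner product with g_ref), remove the
    # offending component so that <g_tilde, g_ref> >= 0.
    dot = g @ g_ref
    if dot >= 0.0:
        return g
    return g - (dot / (g_ref @ g_ref)) * g_ref

rng = np.random.default_rng(0)
g, g_ref = rng.normal(size=10), rng.normal(size=10)
g_tilde = agem_project(g, g_ref)
assert g_tilde @ g_ref >= -1e-12  # the inequality constraint now holds
```

The variants debated above would modify this rule, e.g. by allowing a small slack in the inequality (soft constraints) or by demanding a strictly positive, cosine-similarity-based inner product to encourage positive backward transfer rather than merely avoiding forgetting.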
rkem91rtDB
Inductive and Unsupervised Representation Learning on Graph Structured Objects
[ "Lichen Wang", "Bo Zong", "Qianqian Ma", "Wei Cheng", "Jingchao Ni", "Wenchao Yu", "Yanchi Liu", "Dongjin Song", "Haifeng Chen", "Yun Fu" ]
Inductive and unsupervised graph learning is a critical technique for predictive or information retrieval tasks where label information is difficult to obtain. It is also challenging to make graph learning inductive and unsupervised at the same time, as learning processes guided by reconstruction error based loss functions inevitably demand graph similarity evaluation that is usually computationally intractable. In this paper, we propose a general framework SEED (Sampling, Encoding, and Embedding Distributions) for inductive and unsupervised representation learning on graph structured objects. Instead of directly dealing with the computational challenges raised by graph similarity evaluation, given an input graph, the SEED framework samples a number of subgraphs whose reconstruction errors could be efficiently evaluated, encodes the subgraph samples into a collection of subgraph vectors, and employs the embedding of the subgraph vector distribution as the output vector representation for the input graph. By theoretical analysis, we demonstrate the close connection between SEED and graph isomorphism. Using public benchmark datasets, our empirical study suggests the proposed SEED framework is able to achieve up to 10% improvement, compared with competitive baseline methods.
[ "Graph representation learning", "Graph isomorphism", "Graph similarity learning" ]
Accept (Poster)
https://openreview.net/pdf?id=rkem91rtDB
https://openreview.net/forum?id=rkem91rtDB
ICLR.cc/2020/Conference
2020
{ "note_id": [ "DoQRNUXgZL", "SJglRfghoS", "B1gi4Thsor", "Hklsbarior", "rkxn8kbtoB", "SklWvalKoB", "rklsO2xtiH", "HkxYLFlKiH", "S1xbh5Gg9r", "HkeAKNxptr", "SJgTQ4kwFB" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798734626, 1573810887575, 1573797171453, 1573768451301, 1573617491636, 1573616985171, 1573616755068, 1573615952950, 1571986089353, 1571779718178, 1571382309078 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1871/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper1871/Authors" ], [ "ICLR.cc/2020/Conference/Paper1871/Authors" ], [ "ICLR.cc/2020/Conference/Paper1871/Authors" ], [ "ICLR.cc/2020/Conference/Paper1871/Authors" ], [ "ICLR.cc/2020/Conference/Paper1871/Authors" ], [ "ICLR.cc/2020/Conference/Paper1871/Authors" ], [ "ICLR.cc/2020/Conference/Paper1871/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper1871/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper1871/AnonReviewer1" ] ], "structured_content_str": [ "{\"decision\": \"Accept (Poster)\", \"comment\": \"The paper focuses on the problem of finding dense representations of graph-structured objects in an unsupervised manner. The authors propose a novel framework for solving this problem and show that it improves over competitive baselines. The reviewers generally liked the paper, although were concerned with the strength of the experimental results. During the discussion phase, the authors bolstered the experimental results. The reviewers are satisfied with the resulting paper and agree that it should be accepted.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Response to the rebuttal\", \"comment\": \"Dear authors,\\n\\nThank you for providing a rebuttal and adding new experimental results. 
I think that this has strengthened the paper and I increase my score to a weak accept. The novelty of the proposed approach is still limited but now with more empirical results and comparisons to existing methods, it is less of a concern.\"}", "{\"title\": \"Appreciate your attention and time\", \"comment\": \"Dear Reviewers,\\n\\nWe sincerely appreciate your valuable comments that help us improve the paper. If you have more questions or concerns about our latest draft or response, please feel free to let us know. We are happy to discuss with you.\"}", "{\"title\": \"Response to the comments from the reviewer (Part 2)\", \"comment\": \"Q2. In embedding distribution part, it seems that only identity kernel is easy to calculate. For commonly adopted kernels, the MLP is \\u201cmimicking the behavior of a kernel\\u201d, so it will still be limited by the kernel you choose in MMD. There are some statistical approaches to estimate \\u0424(z) like Nystr\\u00f6m method, maybe that will be another solution other than taking average.\\n\\nThanks for this good suggestion. In the latest draft, we have added Appendix K, where we evaluate how the Nystr\\u00f6m approximation impacts the effectiveness, and its scalability advantage. From the effectiveness and efficiency results, we see that the Nystr\\u00f6m method is promising to further enhance the scalability of the SEED framework in training phases, especially for the cases where a large number of WEAVE samples are needed.\"}", "{\"title\": \"Response to the comments from the reviewer (Part 2)\", \"comment\": \"Q2. My \\\"novelty\\\" critique is also made in light of the small number of datasets on which experiments have been conducted. If a new simple random walk strategy would lead to clearly better results on a number of datasets, this would be a significant contribution. 
As far as I can tell, however, the results are mixed and not very impressive especially due to the small number of datasets.\\n\\nThanks for your suggestions. In the latest draft, we have strengthened the experimental study in this paper, including the following updates.\\n$\\\\bullet$ We have added five additional public benchmark datasets, including NCI1, PROTEINS, COLLAB, IMDB-BINARY, and IMDB-MULTI. The dataset description is updated in Appendix F, and the their evaluation results are presented in Appendix G.\\n$\\\\bullet$ We have added another set of ablation study, where we evaluate the impact of different features in WEAVE. The evaluation results are presented in Appendix J.\\n$\\\\bullet$ We have added a new baseline where DeepSet serves for embedding subgraph distribution in SEED. In Appendix I, we investigate whether DeepSet is also a good option in the step of embedding distribution.\\n\\nThe newly added experimental results are detailed in Appendix F, G, I, and J. In the following, we briefly summarize the key observations.\\n$\\\\bullet$ The SEED framework outperforms the baseline methods in 18 out of 21 cases, and achieves competitive performance in the rest 3 cases. In particular, the SEED achieves up to 0.18, 0.13, and 0.22 absolute performance improvement, in terms of classification accuracy, clustering accuracy, and clustering NMI, respectively.\\n$\\\\bullet$ WEAVEs consistently outperforms vanilla random walks (without earliest visit time information). Indeed, WEAVEs are stronger at preserving structural information, while loops information is usually lost in vanilla random walks. The results highlight the importance of WEAVEs in the SEED framework.\\n$\\\\bullet$ For the baseline where DeepSet is deployed in the component of embedding subgraph distributions, we observe that it achieves similar performance compared with the one using feature mapping function evaluation. 
We confirm that DeepSet is compatible with the SEED framework, and it could be a good candidate in the step of embedding subgraph distributions.\"}", "{\"title\": \"Response to the comments from the reviewer (Part 1)\", \"comment\": \"We sincerely appreciate your valuable comments to our work.\\n\\nQ1. Unfortunately, the proposed method has limited novelty. The WEAVE sampling is a small variation on random walk sampling that's been around for a while in graph representation learning. Also, to define the similarity between set of vectors has been addressed before in numerous papers (e.g., all papers investigating learning for sets, DeepSets, etc.) and the method here seems a bit ad-hoc and doesn't compare to existing work.\\n\\nThanks for sharing your concerns. There could be confusion on the contribution in our work. Therefore, we discuss this concern from the following aspects.\\n\\n$\\\\bullet$ $\\\\small\\\\textbf{The main technical contribution.}$ We address the problem of inductive and unsupervised graph representation learning. As it is intractable to evaluate the error between input and reconstructed graphs, it is challenging to make graph learning inductive and unsupervised simultaneously. We propose the framework SEED, and its core idea is novel.\\n 1. Instead of directly evaluating reconstruction errors for original graphs, we first sample subgraphs which can preserve structural information and lead to efficient reconstruction evaluation. \\n 2. With the observation that similar graphs share similar subgraph distribution, we use the embedding of an input graph's subgraph distribution as its vector representation. \\nFor concrete implementations, we need to address three questions\\n I. What is the subgraph? \\n II. How to encode such a subgraph? \\n III. How to embed subgraph distributions? 
\\nIn this work, we propose a competitive implementation with three concrete components to answer the questions.\\n\\n$\\\\bullet$ $\\\\small \\\\textbf{What is the novelty in WEAVE?}$ WEAVE is our answer to Question I. WEAVE is a random walk variant that has the capability to preserve loop information in traversed graph data. As discussed in the paper, when the SEED framework is geared with WEAVE, it becomes closely related to graph isomorphism; however, existing random walk variants cannot meet this goal. While existing random walk variants have been widely utilized in node representation learning (e.g., DeepWalk and so on), WEAVE is the one that enables inductive and unsupervised graph representation learning. Note that our goal is not to propose a new random walk variant. Instead, our goal is to propose a strong candidate that meets the requirements in SEED. \\n\\n$\\\\bullet$ $\\\\small \\\\textbf{Are existing set similarity techniques related?}$ Existing set similarity techniques, such as DeepSet, could be quite related. In particular, DeepSet could be another strong candidate for the step of embedding subgraph distributions. In the latest draft, we have added a new baseline named DeepSet, where DeepSet is adopted for embedding subgraph distributions in SEED. The detailed empirical study is presented in Appendix I. We briefly summarize our discovery as follows.\\n\\n$\\\\bullet$ The SEED implementation based on DeepSet achieves competitive performance compared with the implementation based on identity kernel. The results suggest the effectiveness of DeepSet in the SEED framework.\"}", "{\"title\": \"Response to the comments from the reviewer (Part 1)\", \"comment\": \"Thank you so much for recognizing our work. We sincerely appreciate your valuable comments. The following are our response to your questions or concerns.\\n\\nQ1. In WEAVE encoding part, the paper doesn't show how much the earliest visiting time information improve the model. 
In another word, if we leave out the timing term $x_t^{(p)}$, will the model still perform well?\\n\\nThe earliest visiting time information is critical for WEAVE. Compared with vanilla random walk, WEAVE is able to preserve loop information in its traversed graph data because of the earliest visiting time.\\n\\nIn the latest draft, we have added Appendix J. In particular, we have added a new baseline where WEAVEs without the earliest visiting time information are employed for subgraph sampling and encoding. We briefly summarize our observations as follows.\\n$\\\\bullet$ The classification and clustering performance could suffer significant performance drop if we only consider node features for subgraph encoding. \\n$\\\\bullet$ We achieve the best performance when we jointly consider both node feature and earliest visit time information.\\n\\nIn addition, we have strengthened the experimental study in this paper, including the following updates.\\n$\\\\bullet$ We have added five additional public benchmark datasets, including NCI1, PROTEINS, COLLAB, IMDB-BINARY, and IMDB-MULTI. The dataset description is updated in Appendix F, and the evaluation results are presented in Appendix G.\\n$\\\\bullet$ We have added another set of ablation study, where we evaluate the impact of different features in WEAVE. The evaluation results are presented in Appendix J.\\n$\\\\bullet$ We have added a new baseline where DeepSet serves for embedding subgraph distribution in SEED. In Appendix I, we investigate whether DeepSet is a good option in the step of embedding distribution.\\n\\nThe newly added experimental results are detailed in Appendix F, G, I, and J. In the following, we briefly summarize the key observations.\\n$\\\\bullet$ The SEED framework outperforms the baseline methods in 18 out of 21 cases, and achieves competitive performance in the rest 3 cases. 
In particular, the SEED achieves up to 0.18, 0.13, and 0.22 absolute performance improvement, in terms of classification accuracy, clustering accuracy, and clustering NMI, respectively.\\n$\\\\bullet$ WEAVEs consistently outperforms vanilla random walks (without earliest visit time information). Indeed, WEAVEs are stronger at preserving structural information, while loops information is usually lost in vanilla random walks. The results highlight the importance of WEAVEs in the SEED framework.\\n$\\\\bullet$ For the baseline where DeepSet is deployed in the component of embedding subgraph distributions, we observe that it achieves similar performance compared with the one using feature mapping function evaluation. We confirm that DeepSet is compatible with the SEED framework, and it could be a good candidate in the step of embedding subgraph distributions.\\n\\nQ2. In embedding distribution part, it seems that only identity kernel is easy to calculate. For commonly adopted kernels, the MLP is \\u201cmimicking the behavior of a kernel\\u201d, so it will still be limited by the kernel you choose in MMD. There are some statistical approaches to estimate \\u0424(z) like Nystr\\u00f6m method, maybe that will be another solution other than taking average.\\n\\nThanks for this good suggestion. It could be promising to employ Nystr\\u00f6m method in the SEED framework. We will provide a concrete discussion to this question in part 2 of our response.\"}", "{\"title\": \"Response to the comments from the reviewer\", \"comment\": \"Thank you so much for recognizing our work. We sincerely appreciate your valuable comments. Our answers to your questions/concerns are as follows.\", \"q1\": \"It would be better if experiments can be conducted on a few more benchmark datasets used in the compared methods.\\n\\nThanks for the good suggestion. 
In the latest draft, we have strengthened the experimental study in this paper, including the following updates.\\n$\\\\bullet$ We have added five additional public benchmark datasets, including NCI1, PROTEINS, COLLAB, IMDB-BINARY, and IMDB-MULTI. The dataset description is updated in Appendix F, and the evaluation results are presented in Appendix G.\\n$\\\\bullet$ We have added another set of ablation study, where we evaluate the impact of different features in WEAVE. The evaluation results are presented in Appendix J.\\n$\\\\bullet$ We have added a new baseline where DeepSet serves for embedding subgraph distribution in SEED. In Appendix I, we investigate whether DeepSet is a good option in the step of embedding distribution.\\n\\nThe newly added experimental results are detailed in Appendix F, G, I, and J. In the following, we briefly summarize the key observations.\\n$\\\\bullet$ The SEED framework outperforms the baseline methods in 18 out of 21 cases, and achieves competitive performance in the rest 3 cases. In particular, the SEED achieves up to 0.18, 0.13, and 0.22 absolute performance improvement, in terms of classification accuracy, clustering accuracy, and clustering NMI, respectively.\\n$\\\\bullet$ WEAVEs consistently outperforms vanilla random walks (without earliest visit time information). Indeed, WEAVEs are stronger at preserving structural information, while loops information is usually lost in vanilla random walks. These results highlight the importance of WEAVEs in the SEED framework.\\n$\\\\bullet$ For the baseline where DeepSet is deployed in the component of embedding subgraph distributions, we observe that it achieves similar performance compared with the one using feature mapping function evaluation. 
We confirm that DeepSet is compatible with the SEED framework, and it could be a good candidate in the step of embedding subgraph distributions.\\n\\nIn addition, we have done another round of proof reading, and have fixed typos or grammar errors, including those suggested by the reviewers.\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"In this work, a novel graph similarity learning framework SEED is proposed. Given an input graph, SEED proceeds in 4 steps, namely Sampling subgraphs, Encoding sampled subgraphs using autoencoder, aggregating subgraphs' Embedding Distribution into a vector representation. Theretically, a connection between proposed SEED and the graph isomorphism is established. Experimentally, simulation on DEEZE and MUTAG datasets validated the effectivety of the proposed graph learning framework.\", \"pro\": \"The paper is well structured and easy to follow. Experiments appears convincing, especially the t-SNE plots when varying the number of subgraph samples.\", \"con\": \"It would be better if experiments can be conducted on a few more benchmark datasets used in the compared methods.\", \"minor\": \"\", \"last_sentence_in_page_6\": \"each component have been -> each component has been\\nthe 5-th sentence in Sec. 4.3, focusing -> focuses\\nthe 7-th sentence in Sec. 
4.3, At the meantime -> In the meantime\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The authors propose a general framework SEED (Sampling, Encoding, and Embedding Distributions) for inductive and unsupervised representation learning on graph structured objects. The most innovative part in this paper is random walks with earliest visiting time (WEAVE) in the sampling part. WEAVE has potential power for capturing structure difference and could reflect isomorphism as well. Instead of using language model like word2vec, encoding part leverages MLP to get the embedding of each WEAVE, which is efficient and intuitive. Then, the group of encoding results from all WEAVEs are aggregated with kernel functions, generating the final embedding of a graph. This method achieves better accuracy in both clustering and classification tasks than previous ones including GraphSAGE, GMN and GIN.\\nThis method uses an elegant way to embed graphs in an unsupervised manner, and the new random walk approach provides insights into graph structure encoding. Factors like walk length and number of walks are strictly derived then well examined in real experiments.\\nQuestions & suggestions:\\n1 In WEAVE encoding part, the paper doesn\\u2019t show how much the earliest visiting time information improve the model. In another word, if we leave out the timing term (x_t^{(p)}), will the model still perform well?\\n2 In embedding distribution part, it seems that only identity kernel is easy to calculate. For commonly adopted kernels, the MLP is \\u201cmimicking the behavior of a kernel\\u201d, so it will still be limited by the kernel you choose in MMD. 
There are some statistical approaches to estimate \\u0424(z) like Nystrom method, maybe that will be another solution other than taking average.\"}", "{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The authors propose a method for learning graph embeddings and focus specifically on a setting where not all graphs are part of the training data (the inductive setting). The core problem of graph embedding methods is to find a learnable function that maps arbitrary graphs into a fixed-sized vector representation. There have been several proposals ranging from the class of graph kernels to variations of graph neural networks. The authors propose a method that consists of three steps\\n\\n(1) sample a number of subgraphs from the original graphs\\n(2) learn an encoding function for these subgraphs (subgraph -> vector representation)\\n(3) for every graph we, therefore, get a set of vector representations, one per subgraph. We now try to find a similarity measure operating on sets of vectors to compute the distance between graphs.\\n\\nThe novel bits are\\n(a) the way that the subgraphs are sampled (using an algorithm called WEAVE, that stores more information about random walks) and, therefore, is able to be distinguish graphs based on the extracted walks that standard random walk based methods cannot; and \\n(b) to define a similarity measure based on the set of vectors. \\nThe authors also prove that their method is (under some assumptions) able to decide the isomorphism problem. This is a nice result to have in light of recent papers that have investigated the limitations of GNN in comparison to Weisfeiler-Leman and isomoprhism testing. \\n\\nUnfortunately, the proposed method has limited novelty. 
The WEAVE sampling is a small variation on random walk sampling that's been around for a while in graph representation learning. Also, to define the similarity between set of vectors has been addressed before in numerous papers (e.g., all papers investigating learning for sets, DeepSets, etc.) and the method here seems a bit ad-hoc and doesn't compare to existing work. \\n\\nMy \\\"novelty\\\" critique is also made in light of the small number of datasets on which experiments have been conducted. If a new simple random walk strategy would lead to clearly better results on a number of datasets, this would be a significant contribution. As far as I can tell, however, the results are mixed and not very impressive especially due to the small number of datasets.\"}" ] }
SyxM51BYPB
A new perspective in understanding of Adam-Type algorithms and beyond
[ "Zeyi Tao", "Qi Xia", "Qun Li" ]
First-order adaptive optimization algorithms such as Adam play an important role in modern deep learning due to their super fast convergence speed in solving large scale optimization problems. However, Adam's non-convergence behavior and regrettable generalization ability make it fall into a love-hate relationship to deep learning community. Previous studies on Adam and its variants (refer as Adam-Type algorithms) mainly rely on theoretical regret bound analysis, which overlook the natural characteristic reside in such algorithms and limit our thinking. In this paper, we aim at seeking a different interpretation of Adam-Type algorithms so that we can intuitively comprehend and improve them. The way we chose is based on a traditional online convex optimization algorithm scheme known as mirror descent method. By bridging Adam and mirror descent, we receive a clear map of the functionality of each part in Adam. In addition, this new angle brings us a new insight on identifying the non-convergence issue of Adam. Moreover, we provide new variant of Adam-Type algorithm, namely AdamAL which can naturally mitigate the non-convergence issue of Adam and improve its performance. We further conduct experiments on various popular deep learning tasks and models, and the results are quite promising.
[ "Machine Learning", "Algorithm", "Adam", "First-Order Method" ]
Reject
https://openreview.net/pdf?id=SyxM51BYPB
https://openreview.net/forum?id=SyxM51BYPB
ICLR.cc/2020/Conference
2020
{ "note_id": [ "ueJfykS85b", "H1xy8D7-iS", "S1lPzDQWiB", "SyetOLmboH", "H1gj6JT7qr", "rJlnmSE-qH", "SJg7hHphtH" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798734597, 1573103431125, 1573103374728, 1573103217210, 1572224963309, 1572058403837, 1571767723034 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1870/Authors" ], [ "ICLR.cc/2020/Conference/Paper1870/Authors" ], [ "ICLR.cc/2020/Conference/Paper1870/Authors" ], [ "ICLR.cc/2020/Conference/Paper1870/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper1870/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper1870/AnonReviewer3" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"In this paper, the authors draw upon online convex optimization in order to derive a different interpretation of Adam-Type algorithms, allowing them to identify the functionality of each part of Adam. Based on these observations, the authors derive a new Adam-Type algorithm, AdamAL and test it in 2 computer vision datasets using 3 CNN architectures. The main concern shared by all reviewers is the lack of novelty but also rigor both on the experimental and theoretical justification provided by the authors. After having read carefully the reviews and main points of the paper, I will side with the reviewers, thus not recommending acceptance of this paper.\", \"title\": \"Paper Decision\"}", "{\"title\": \"RE: Official Blind Review #3\", \"comment\": \"We thank the reviewer for their thoughtful comments and questions. We address them in order.\\n\\n1. The novelty of this paper is limited.\\nWe restate our contributions in the new version in Section 1. We summarize our contribution as follows:\\nWe provide a new perspective in understanding the non-convergence behavior of Adam-Type algorithms based on mirror descent approach. 
Our analysis agrees well with the previous works but much more intuitive and effective. \\nBy using our analyzing framework, we can: \\n2a) clearly identify the functionality of each part of Adam algorithm such as \\\\beta_1,2, v_t and m_t (Section 3.1)\\n2b) easily explain why EMA can help Adam-type algorithms with smooth trajectory and less oscillation around the optimal point.\\n3c) the sensitivity of loss function and so on.\\nBased on our observation, we identify potential faults in Adam-Type algorithms and we provide a new Adam variant algorithm, named AdamAL.\\nWe conduct a series of experiments on different machine learning tasks and models by using our AdamAL algorithm, and the results are promising and the performance of AdamAL is never worse than Adam.\\nIn fact, the primary goal of our work is trying to understand the mechanism of Adam-type algorithms, especially the v_t. The mystery of Adam\\u2019s v_t draws a lot of attention in recent years, such as Zhou et al.2018, Balles et al. 2018. However, the traditional analysis is insufficient, it regards the v_t as second momentum which is hard to understand. Besides v_t, we also use our analyzing framework to explore the usage of hyper-parameters \\\\beta_1 and \\\\beta_2. It is also a hot topic in the recent (Lucas et al. 2019). \\n\\nYou also mentioned that \\u201cIt provides a unified viewpoint to consider a broad class of algorithms.\\u201d. Yes, we think our main contribution aims to deliver a new perspective in understanding of Adam-type algorithms. Based on our knowledge, we do not know any previous work similar to us. In addition, our new algorithm adamAL is a very good example to show our framework is useful. Based on the principle of our framework, we design AdamAL in a very natural way. Meanwhile, we can use this framework to design more Adam-Type algorithms. 
This also leads to another contribution of our work, that is, our framework could help researchers to design a new and better algorithm in the future. \\n\\n\\u201cIt seems that contribution is more conceptual rather than practical\\u201d. We mentioned this in the above. Again, AdamAL is one of the practical algorithms we design by the guidance of our unified framework. \\n\\n\\u201cI suggest the authors add more examples or theorems to show the superiority of their mirror descent framework.\\u201d Thank you for your valuable suggestion. In fact, we are trying to show the superiority of our framework in Corollary 1 and 2; and we explain some of them in the Section 3.1 as well. We plan to extend more algorithms and theorems according to this framework in the future. \\n\\n\\n2. \\\"Some of the explanations in this paper may be wrong. For instance, around equation 17, the authors suggested that if the swap happens, then v_{t+1} = v_t. That is not correct since the swap happens for each coordinate.\\\"\\n\\nWe actually mentioned that the \\\\max(x, y) operation denotes the entry-wise maximum in Section 2 Notations which means if swap happens, v_{t+1} = \\\\max{v_{t+1}, v_t} will perform entry-wise swap. \\nThank you for pointing out this. We rewrite the Section 3.3 and we use entry-wise notation such as v_{t+1, i}.\\n\\n3. \\\"Meanwhile, the explanation about why AdamAL is better than AMSgrad is quite poor since the update rule of AMSgrad can also guarantee the coordinate decreasing of v_t.\\\"\\n\\n\\n4. The authors should complete the proof of Theorem 3.1. \\nWe add proof in Appendix.\\n\\n5. The settings of experiments are limited. \\nWe conduct more experiments and show the results in this version, please check.\", \"minor_comments\": \"1. Below equation 8, detail -> details.\\nThank you very much, we fix it in new revision.\\n\\n2. 
The authors should add the definition of 1:t in subscript for g_{1:t} or \\\\phi_{1:t}.\\nThank you for your suggestion, we add those definitions in the Section 2 Notations.\\n\\n3. Page 6, the first paragraph, logt-> \\\\log t\\nThank you very much, we fix it in new revision.\\n\\n4. This paper lacks some references in this area. \\nThank you for your suggestion, we add these references to this revision.\", \"references\": \"Zhou et al.2018: https://openreview.net/forum?id=HkgTkhRcKQ\\nBalles et al. 2018: https://arxiv.org/pdf/1705.07774.pdf\\nLucas et al. 2019: https://openreview.net/forum?id=Syxt5oC5YQ\"}", "{\"title\": \"RE:Official Blind Review #1\", \"comment\": \"We thank the reviewer for their time and their kind words about our work. We will address your concerns and questions in the order you wrote them.\\n\\n1.It is not clear whether the heuristic observations are useful.\\nI am assuming that heuristic observations refer to the observations in Section 3.3 and Figure 1 (if not this one, please correct me). The observations in Section 3.3 demonstrate that (1) different coordinates have very different swapping counts; (2) the swapping frequency is nonuniform, which means the swapping interval between two swaps happens is different for different coordinates. They are very useful because they identify the non-alignment issue in AMSGrad. \\n\\n2.Theorem 3.1 does not even lead to the convergence of the algorithm.\\n\\nTheorem 3.1 shows the convergence of a class of Adam-Type algorithms for the non-convex optimization. And we think it leads to the convergence of the algorithm (See https://arxiv.org/abs/1808.02941 for reference). In our first submitted paper, we actually follow the same proof sketch above. 
We would like to check and fix the problems of our proof, and thank you for pointing out this issue.\\n\\nHowever, in this new revision, we decide to replace Theorem 3.1 by using Zinkevich regret analysis which is commonly used in proving the Adam-Type algorithms (Reddi et al. 2019, Chen et al. 2018) and it is sufficient for proving our algorithm.\\n\\n3.The simulation does not show the advantage.\\n\\nWe are sorry for the inappropriate representation of our experiment results. We conduct experiments on ResNet18 and VGG16 with CIFAR10, and in fact, our algorithm shows the advantage of such settings, as we mentioned in Section 4, AdamAL's performance is never worse than Adam, AMSGrad and Vanilla SGD. When you zoom in on the last 20 epochs, AdamAL achieves better accuracy on validation set and meanwhile, it achieves lower loss values on training set.\\n\\nIn order to see our experiment results better, in this revision:\\n(a) We will zoom in on both training and testing results for clear visualization in next version.\\n(b) We add more convincing experimental results to our paper.\\n\\n\\n4.There are too many typos and grammar errors.\\nWe fix them in this revision and thank you for pointing out.\\n\\n1) Heuristic observations.\\nIn Section 3.3, we use one-step analysis to illustrate the two different approaches. For Adam the update of v_{t+2, i} at i-th coordinate is v_{t+2, i} = \\\\beta_2^2 v_{t,i} + (1-\\\\beta_2)\\\\beta_2 g_{t+1, i}^2 + (1-\\\\beta_2) g_{t+2, i}^2. However, the AMSGrad uses v_{t+2, i} = \\\\beta_2 v_{t, i} + (1-\\\\beta_2) g_{t+2, i}^2 if swap happens. It is very clear that AMSGrad will not use the information of g_{t+1, i} in only one-step update. And we also notice that Adam use \\\\beta_2^2v_{t,i} and AMSGrad use \\\\beta_2 v_{t, i}. We can not ignore this difference even though we have big \\\\beta_2 value 0.999. 
This is important because the experiment in Section 3.3 (Figure 1) shows that different coordinates have very different swapping counts, and there is no way to know when a swap will happen or how many swaps there will be. Suppose v_t = (v_{t, 1}, v_{t, 2}, ... v_{t, i}, ...), where v_{t, i} is the i-th coordinate at iteration t. AMSGrad will generate v_t = (..., v_{t, i}, v_{*, j}, ...) with the j-th coordinate using some value v_{*, j} rather than v_{t, j}, since the AMSGrad swaps occur in a nearly random manner. This causes the issue we call non-alignment of the v_t coordinates. The value of v_{*, j} is unpredictable and loses its meaning entirely. If we use incorrect (or out-of-date) coordinates of v_t, the search direction along some coordinates may be wrong, which can drive the result toward a spurious local optimum. The wrong search direction is due to (1) the unpredictable swaps and (2) the skipped gradient information g_{t+1, i}.\\n\\n2) Too many typos and grammar errors. \\n\\n2a) \\\"They derive it mainly from an unrealistic objective function\\\" -- what does \\\"objective function\\\" mean?\\n\\nThe \\\"objective function\\\" refers to a loss function constructed by Reddi et al. (2019) in Theorem 1 of their Section 3. They use this function to prove the non-convergence of Adam. We mention Reddi et al.'s objective function in Section 3.2 (\\\"the objective function with periodicity gradient rarely seen in real scenario\\\") and in the section \\\"The non-convergence of Adam\\\" (\\\"Reddi et al. (2019) construct an objective function with periodicity gradient to illustrate ... which is hard to follow.\\\"). Since this paper is well known (it received a best paper award at ICLR 2018), for the sake of simplicity we refer readers to the reference. Again, Reddi et al.'s objective function is f_t(x) = Cx if t mod 3 = 1 and f_t(x) = -x otherwise.
\\n\\n2b) What is the \\\"ill-condition problem\\\"?\\nWe refer to the AMSGrad non-alignment problem. We have removed the term \\\"ill-condition problem\\\" in the revision.\", \"references\": \"Reddi et al. 2019: https://arxiv.org/pdf/1904.09237.pdf\\nChen et al. 2018: https://arxiv.org/pdf/1902.09843.pdf\"}", "{\"title\": \"We appreciate your thorough review and helpful suggestions.\", \"comment\": \"We thank the reviewer for their time and their kind words about our work.\\n\\n1. We have uploaded a revised version to resolve all of your concerns.\\n\\n1.1 We proofread the paper and completed the proof of our main Theorem 3.1. \\n1.2 We rewrote Section 3.3 to present clearer heuristic observations on the AMSGrad algorithm. We also carefully explain the differences between AMSGrad and the desired Adam-style update. \\n1.3 We restated the contribution of our work in the introduction section, which should make the outline of the paper easier to follow. \\n1.4 Notation is explained in Section 2 (Notations). In this version, we have added more notation definitions for your information.\\n1.5 We added more detailed experimental results in this revision, and we also conducted more experiments with different settings and algorithms.\\n\\n2. \\\"Empirically, there are results on one dataset ... AdamAL works better. However, it is not the detailed type of experiments I was expecting of a paper that points out specific issues in AMSGrad.\\\"\\n\\nTo point out the issues in AMSGrad, we first conducted an experiment on sampled coordinates of v_{t, i:j}. More specifically, we randomly sampled 10 coordinates v_{t, k} where i<=k<=j and traced those coordinates at each iteration. Two metrics are used: one is the total number of swaps, and the other is the frequency between two swaps. We made two observations: (1) different coordinates have very different swapping counts; (2) the swapping frequency is nonuniform, meaning that the interval between two consecutive swaps differs across coordinates.
(Mentioned in Section 3.3)\\n\\nThese two observations may explain why AMSGrad does not perform better than Adam, and why AMSGrad also suffers from the same poor generalization as Adam. The reasons are: (1) It is necessary to keep all the coordinates of v_t up to date, but AMSGrad breaks this rule. The example in Section 3.3 shows that AMSGrad has v_{t+2, i} = \\\\beta_2 v_{t, i} + (1-\\\\beta_2)g_{t+2, i}^2 (Equation 17) if a swap happens, so the gradient information g_{t+1, i} is skipped. This may slow convergence or be a possible cause of poor generalization. (2) All coordinates of v_t should correspond to the same iteration t, but AMSGrad can cause non-alignment of the v_t coordinates, for example using v_{t, i} and v_{t-m, j} in the same iteration. (3) The swapping frequency is nonuniform and unpredictable. As the number of training iterations increases, these small errors accumulate at each iteration and affect the final model accuracy.\\n\\nThe above issues may not cause non-convergence of AMSGrad, but the facts are: (1) AMSGrad does not show superiority over Adam; (2) AMSGrad also suffers from the same poor generalization issue as Adam; (3) regarding \\\"Even if AMSGrad uses a different v from the original Adam, it is not necessarily bad.\\\" (Review #1), we agree with this point of view; however, in our experimental results, AdamAL performs better than both Adam and AMSGrad.\", \"back_to_your_question\": \"\\\"Shouldn't the experiments be showing these weaknesses in some sort of controlled setting?\\\". We discovered this weakness by tracing sampled coordinates of v_t, and we justify the problem of AMSGrad via Equation 17 and Section 3.3.
We will do some research on this issue in the future.\\n\\nBeyond your questions, we would like to restate the main contribution of our work, because it seems all reviews focus on the algorithmic part and overlook the way we try to understand Adam-type algorithms. In fact, in this work we try to deliver a new perspective on understanding Adam-type algorithms, which, to our knowledge, has not been done before. Based on our analysis framework, we can clearly identify the functionality of each part of the Adam algorithm, such as v_t and m_t. We can also easily explain why the EMA gives Adam-type algorithms a smoother trajectory, and so on. In contrast, the traditional frameworks have difficulty demonstrating these points. The results produced by our analysis framework are quite promising and are consistent with many other research works (mentioned in Section 3.1). Our main contribution is providing a unified viewpoint on Adam-type algorithms, which can help researchers design better algorithms in the future. Our new algorithm AdamAL is a good example of the framework's usefulness: we designed it directly using our analysis framework, and as a result AdamAL outperforms AMSGrad and Adam.\"}", "{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper proposes to study some weaknesses of Adam and AMSGrad and proposes a new method called AdamAL that is evaluated on CIFAR10.\\n\\nI am not from this area, but unfortunately I find this paper to not be rigorously written or organized. Section 3.3, for instance, which discusses the non-alignment projection issue with AMSGrad, is not rigorously written. There are no proofs for any theorems, and even some of the theorems/corollaries are not written rigorously.
\\n\\nOrganization-wise, I feel it is difficult to see where the paper is going, and some sort of outline / notation box would help. \\n\\nEmpirically, there are results on one dataset, CIFAR10, showing that the authors' proposed variant AdamAL works better. However, it is not the detailed type of experiments I was expecting of a paper that points out specific issues in AMSGrad. Shouldn't the experiments be showing these weaknesses in some sort of controlled setting?\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper analyzed a few issues of Adam, and proposed a new variant of Adam called AdamAL.\", \"there_are_quite_a_few_issues\": \"--It is not clear whether the heuristic observations are useful. \\n--Theorem 3.1 does not even lead to the convergence of the algorithm. \\n--The simulation does not show the advantage. \\n--There are too many typos and grammar errors.\\n\\n1) Heuristic observations.\\nOne main observation is that v_{t+1} = beta_2 v_t + (1 - beta_2) g_{t+1}^2 in the original Adam will be different from \\\\hat{v}_t defined in AMSGrad. The paper states \\\"...will accumulate this small error into each step\\\" ...\\\"will lead this model to find a suspicious local optimal\\\". This claim makes little sense. Even if AMSGrad uses a different v from the original Adam, it is not necessarily bad. In addition, why is this related to \\\"suspicious local optimal\\\" (I suppose this paper intends to say \\\"spurious local optima\\\")? I do not have any intuition why this is related to spurious local minima. \\n\\n2) Too many typos and grammar errors. \\n\\nTo give just one example: in the first paragraph of Sec.
3.3, there are at least 15 typos, and a few sentences are hard to read: \\n \\\"They derive it mainly from an unrealistic objective function\\\" --what does \\\"objective function\\\" mean? \\n \\\"Does it really solve the problem or dose this design violate the intuition of Adam-Type algorithm\\\"? --what is the \\\"intuition\\\" to be violated? I roughly get the point of this sentence, but \\\"intuition\\\" is not a good word here.\\n \\\"To be more specifically, we present a simple one-step AMSGrad swapping at iteration t and figure out the ill-condition problem\\\". --What is \\\"ill-condition problem\\\"? It is not mentioned in this paragraph, and I don't know where it comes from.\\n In the whole paper, there are too many problematic sentences to enumerate (a very rough estimate: at least 30?) It makes the paper very difficult to read.\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"Summary:\\n\\nThis work proposed a framework to analyze both Adam-type algorithms and SGD-type algorithms. The authors considered both of them as special cases of mirror descent algorithms and provided a new algorithm, AdamAL. The authors showed experiments to back up their theoretical results.\", \"pros\": \"The authors provided a novel framework to analyze Adam-type algorithms by using the standard FTRL framework. It provides a unified viewpoint to consider a broad class of algorithms. The authors also provided a new algorithm, AdamAL, to overcome some shortcomings in previous algorithms.\", \"cons\": [\"The novelty of this paper is limited. The authors provided a framework to analyze Adam-type algorithms. However, it seems that the contribution is more conceptual than practical.
I suggest the authors add more examples or theorems to show the superiority of their mirror descent framework.\", \"Some of the explanations in this paper may be wrong. For instance, around equation 17, the authors suggested that if the swap happens, then v_{t+1} = v_t. That is not correct since the swap happens for each coordinate. Meanwhile, the explanation of why AdamAL is better than AMSGrad is quite poor, since the update rule of AMSGrad can also guarantee the coordinate-wise decrease of v_t. I suggest the authors elaborate on the algorithm design.\", \"The authors should complete the proof of Theorem 3.1.\", \"The settings of the experiments are limited. The authors should at least compare AdamAL with other baseline algorithms on some modern deep learning tasks, including ImageNet.\"], \"minor_comments\": \"- Below equation 8, detail -> details.\\n- The authors should add the definition of 1:t in subscript for g_{1:t} or \\\\phi_{1:t}.\\n- Page 6, the first paragraph, logt -> \\\\log t\\n- This paper lacks some references in this area. \\n\\nJ. Chen and Q. Gu. Closing the generalization gap of adaptive gradient methods in training deep neural networks. arXiv preprint arXiv:1806.06763, 2018.\\nWard, R., Wu, X. and Bottou, L. (2018). Adagrad stepsizes: Sharp convergence over nonconvex landscapes, from any initialization. arXiv preprint arXiv:1806.01811.\\nLi, X. and Orabona, F. (2018). On the convergence of stochastic gradient descent with adaptive stepsizes. arXiv preprint arXiv:1805.08114.\"}" ] }
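The swap-counting experiment discussed in the exchanges above (tracing sampled coordinates of v_t and recording how often AMSGrad's running maximum is overtaken) can be illustrated with a short sketch. This is a hypothetical reconstruction, not the authors' code: the dimension, per-coordinate gradient scales, and iteration count are arbitrary assumptions.

```python
import numpy as np

beta2 = 0.999
rng = np.random.default_rng(0)

v = np.zeros(4)         # Adam's EMA of squared gradients
v_hat = np.zeros(4)     # AMSGrad's running per-coordinate maximum
swap_counts = np.zeros(4, dtype=int)

for t in range(2000):
    # Hypothetical gradients with a different scale per coordinate
    g = rng.normal(size=4) * np.array([1.0, 0.5, 0.1, 2.0])
    v = beta2 * v + (1 - beta2) * g**2   # Adam update: every coordinate, every step
    swapped = v > v_hat                  # a "swap" at coordinate i: v[i] overtakes v_hat[i]
    swap_counts += swapped
    v_hat = np.maximum(v_hat, v)         # AMSGrad keeps the per-coordinate maximum

# v_hat dominates v coordinate-wise, and swaps occur independently per
# coordinate, so the coordinates of v_hat generally come from different
# iterations ("non-alignment" in the authors' terminology).
print(swap_counts)
```

Printing `swap_counts` typically shows different counts per coordinate, and the timing of swaps is irregular, matching the nonuniform, unpredictable swapping behavior described in the responses.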
HyeG9yHKPr
Causally Correct Partial Models for Reinforcement Learning
[ "Danilo J. Rezende", "Ivo Danihelka", "George Papamakarios", "Nan Rosemary Ke", "Ray Jiang", "Theophane Weber", "Karol Gregor", "Hamza Merzic", "Fabio Viola", "Jane Wang", "Jovana Mitrovic", "Frederic Besse", "Ioannis Antonoglou", "Lars Buesing", "Julian Schrittwieser", "Thomas Hubert", "David Silver" ]
In reinforcement learning, we can learn a model of future observations and rewards, and use it to plan the agent's next actions. However, jointly modeling future observations can be computationally expensive or even intractable if the observations are high-dimensional (e.g. images). For this reason, previous works have considered partial models, which model only part of the observation. In this paper, we show that partial models can be causally incorrect: they are confounded by the observations they don't model, and can therefore lead to incorrect planning. To address this, we introduce a general family of partial models that are provably causally correct, but avoid the need to fully model future observations.
[ "causality", "model-based reinforcement learning" ]
Reject
https://openreview.net/pdf?id=HyeG9yHKPr
https://openreview.net/forum?id=HyeG9yHKPr
ICLR.cc/2020/Conference
2020
{ "note_id": [ "IWKzm3Lt_1", "rkx7BVYhsB", "ryecJfD3iB", "HygH9Gmhir", "Sye2G8I_iH", "B1xcYH8uoS", "HkxGiNLuiS", "Bklram8_jH", "rJgKAsG09B", "rkez7nP55B", "H1eOrcwaKB", "HJl1FwQaFH" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review", "official_review" ], "note_created": [ 1576798734565, 1573848122620, 1573839329854, 1573823117412, 1573574164413, 1573574017956, 1573573786306, 1573573565416, 1572903888607, 1572662298153, 1571809855633, 1571792759266 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1869/Authors" ], [ "ICLR.cc/2020/Conference/Paper1869/AnonReviewer4" ], [ "ICLR.cc/2020/Conference/Paper1869/Authors" ], [ "ICLR.cc/2020/Conference/Paper1869/Authors" ], [ "ICLR.cc/2020/Conference/Paper1869/Authors" ], [ "ICLR.cc/2020/Conference/Paper1869/Authors" ], [ "ICLR.cc/2020/Conference/Paper1869/Authors" ], [ "ICLR.cc/2020/Conference/Paper1869/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper1869/AnonReviewer4" ], [ "ICLR.cc/2020/Conference/Paper1869/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper1869/AnonReviewer1" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"The authors show that in a reinforcement learning setting, partial models can be causally incorrect, leading to improper evaluation of policies that are different from those used to collect the data for the model. They then propose a backdoor correction to this problem that allows the model to generalize properly by separating the effects of the stochasticity of the environment and the policy. 
The reviewers had substantial concerns about both clarity and the clear, but largely undiscussed, connection to off-policy policy evaluation (OPPE).\\n\\nIn response, the authors made a significant number of changes for the sake of clarity, and further explained the differences between their approach and the OPPE setting. First, OPPE is not typically model-based. Second, while an importance sampling solution would be technically possible, by re-training the model based on importance-weighted experiences, this would need to be done for every evaluation policy considered, whereas the authors' solution uses a fundamentally different approach of causal reasoning so that a causally correct model can be learned once and work for all policies.\\n\\nAfter much discussion, the reviewers could not come to a consensus about the validity of these arguments. Furthermore, there were lingering questions about writing clarity. Thus, in the future, it appears the paper could be significantly improved if the authors cite more of the off-policy evaluation literature, in addition to their added textual clarifications of the relation of their work to that body of work. Overall, my recommendation at this time is to reject this paper.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Response\", \"comment\": \"Thank you for the response. That is a great question regarding the choice of a good partial view.\\n\\nFirst, to explain the invariance of the model: the behavior policy is the combination of the two arrows s_t -> z_t -> a_t as shown in Figure 3(b). The causally correct model is invariant with respect to a change in the arrow z_t -> a_t, shown in red in Figure 3(b), but not with respect to a change in s_t -> z_t. That is why the best-found simulation policy in Figure 4 changes with respect to the behavior policy.
In any case, however, the model will be causally correct, in the sense that it will correctly evaluate any simulation policy conditioned on the information available in the simulation: s_0, the previous actions and the previous partial views.\\n\\nAs you very correctly pointed out, the choice of the partial view z_t affects how good the simulation policy can be. In particular, what matters is how much information the partial view z_t has about the state s_t. If z_t has all the information about s_t, then the best-found simulation policy can be optimal, as shown in the leftmost and rightmost points of Figure 4(a). However, if z_t has no information about s_t, the best-found simulation policy will be the best open-loop policy, as shown in the middle of Figure 4(a).\\n\\nTo choose a good partial view, the user can pick a sweet spot between the intended action and the full observation. A partial view containing the intended action is enough to make the model causally correct. The intended action can be concatenated with more information from the observation, at the cost of increasing the variance of the simulation, increasing the branching factor of a tree search, and making the partial view harder to model. In practice, the intended action worked surprisingly well.\\n\\nThe best choice will depend on the environment and the planning algorithm. For example, if the observation background contains irrelevant noise, the partial view does not need to contain this noise. And when using just a small amount of planning before collecting more data, it is not necessary to express a simulation policy very different from the behavior policy.
In the worst case, even if the planning is not able to find a better policy for actions a_1, a_2, ..., the model can be used to estimate the expected on-policy action-values for the (s_0, a_0) actions and improve the policy for a_0.\\nWe will explore multiple planning algorithms in future works.\"}", "{\"title\": \"Response acknowledgement\", \"comment\": \"Thanks to the authors for the response and for the edits to the paper. I'm very positive about the revisions that were made.\\n\\nRegarding Figure 4(a), I'm afraid I still find myself a bit confused. The author response seems to indicate that which behavior policy is used affects the quality of the simulation policy. However, it seemed to me that the whole point of the proposed technique was exactly to remove this effect.\\n\\nI suspect the subtlety lies in what is used as the partial view? If so, how should readers understand intuitively which partial views are better than others? Are there good ways to select the \\\"right\\\" partial view? Figure 4(a) seems to suggest that making a good choice is critical to the success of the proposed technique.\"}", "{\"title\": \"Summary of changes\", \"comment\": \"We thank all reviewers for their time and effort to improve the paper.\\nWe substantially revised the paper and believe to have clarified all points of confusion.\", \"the_main_changes_to_the_paper\": \"1. Much cleaner Figure 1 with the MDP diagram.\\n2. Improved the overall clarity of Section 3.\\n3. Added a paragraph to Section 3 relating the backdoor adjustment to importance sampling.\\n4. Added another paragraph to Section 3 clarifying the relation between the causal concepts and RL.\\n5. Added Algorithm 1 to Appendix D to describe the model training.\\n6. Added Algorithm 2 with model usage to generate a simulation.\\n7. Added a discussion to Appendix E to compare the properties of autoregressive models, deterministic non-causal models and the causal partial models.\\n8. 
We reran the experiment from Figure 5 with 50 seeds (instead of 5) and provided 95% confidence intervals.\"}", "{\"title\": \"Authors response\", \"comment\": \"We thank the reviewer for the constructive review.\\n\\nThe MDP example from Section 2 is just for illustration purposes; it serves to demonstrate that learning an action-conditional partial model that has no access to the true state s will make the wrong predictions in general, resulting in sub-optimal behaviour.\\n\\nWe understand that the proposed backdoor adjustment may seem similar to off-policy policy evaluation, and we added an extra paragraph to Section 3 clarifying the difference between the two ideas. For completeness, we discuss the key differences below.\\n\\nThe main problem we are addressing is not only how to evaluate a new policy, but also how to learn a causally correct model that is robust against policy changes (this notion of robustness is more formally defined in Section 3). In order to make correct predictions under a new policy, the model must be trained in such a way as to learn the independent effects of the environment's stochasticity and the policy's stochasticity (the actions) on the future states of the environment.\\n\\nIn principle, this could be achieved with an importance-sampling method whereby we re-train the model with importance weights calculated using the ratio of old and new policy probabilities (it does not matter whether y=r or not). However, this solution suffers from two major drawbacks:\\n1) It requires either re-training the model for every new policy or keeping the entire dataset around, resulting in high computational cost.\\n2) As we grow the prediction sequence length, the importance weights quickly degenerate (effective sample size ~1), resulting in high-variance training.\\n\\nOur proposed solution via backdoor adjustment allows us to make rollouts under a new policy *without re-training the model*.
This is only possible because the model is broken down into two components:\\na) The prediction likelihood p(y | h, z, a) which is conditioned on the right variable, the backdoor z. This conditioning breaks the dependency between the environment's state and the policy's actions -- but only if z is a backdoor for the pair a-y.\\nb) The likelihood p(z|h), which models the information necessary to act.\\nThese model components are independent of the behaviour policy \\\\pi(a|z). We can thus replace \\\\pi(a|z) by any other policy during evaluation/planning and the model rollouts should remain correct without any re-training or trajectory re-weighting.\\n\\nWe believe these contributions constitute a substantial and qualitative deviation from the classical importance-sampling solution. Our objective is to learn a model suitable for planning (e.g. planning by tree search). Classical importance sampling cannot be used for this.\\n\\nWe added a new discussion section to Appendix E comparing existing autoregressive models, deterministic models and the new causal partial models in the revised text. We hope that this will further clarify these points.\\n\\nThank you for mentioning the citation Silver et al. (2017b). The revised text does not refer to it in the paragraph about action-conditional next-step models anymore.\"}", "{\"title\": \"Authors response\", \"comment\": \"Thank you for your review. We substantially revised Section 2 based on your comments.\\n\\n1. We\\u2019ve changed Figure 1 following your suggestions: we removed the confusing symbols s_1, a_1, etc. from the diagram, named each state and action uniquely, associated a reward with each state-action pair, clearly indicated the terminal state, and mentioned in the caption that the MDPs are stochastic. The new diagram should make the definition of the two FuzzyBear MDPs clear. 
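To make the confounding at issue in this thread concrete, here is a small numeric sketch of a FuzzyBear-style situation. The state prior, rewards, and behavior-policy probabilities below are hypothetical (they are not given in the thread); the point is only that the observational conditional learned from behavior data differs from the interventional conditional obtained by the backdoor sum over the unobserved state.

```python
# Hypothetical FuzzyBear-style numbers (not from the paper):
p_s = {"teddy": 0.5, "grizzly": 0.5}      # p(s1 | s0, a0): which bear you meet
r_hug = {"teddy": 1.0, "grizzly": -1.0}   # reward for hugging each bear; running gives 0
pi_hug = {"teddy": 0.9, "grizzly": 0.1}   # behavior policy: P(hug | s1)

# Observational conditional E[r | a1 = hug], what an action-conditional
# partial model that never sees s1 would learn: the behavior policy
# confounds action and state (hugs mostly happen with teddy bears).
p_hug = sum(p_s[s] * pi_hug[s] for s in p_s)
obs = sum(p_s[s] * pi_hug[s] * r_hug[s] for s in p_s) / p_hug

# Interventional E[r | do(a1 = hug)] via the backdoor adjustment,
# summing over the state: sum_s p(s1|s0,a0) * E[r | s1, hug].
do = sum(p_s[s] * r_hug[s] for s in p_s)

print(obs)  # ~0.8 -- hugging looks great, because the data mostly hugged teddies
print(do)   # 0.0  -- hugging a random bear is worthless in expectation
```

Under these assumed numbers, the observational estimate is about 0.8 while the interventional value is 0: a partial model without access to the state inherits the behavior policy's bias toward hugging teddy bears, which is exactly the kind of causally incorrect evaluation the backdoor adjustment avoids.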
As for the symbols s_1, a_1, etc., they should not be understood as designating specific states/actions, but as random variables ranging over states/actions. For example, s_1 is the random variable with the meaning \\u201cstate at time 1\\u201d and can take on values in the set {teddy bear, grizzly bear}, where \\u201cteddy bear\\u201d and \\u201cgrizzly bear\\u201d designate specific states.\\n\\n2. We added a definition for the Dirac delta and the parent notation par. Are there any other notations you would like to be clarified?\\n\\n3. We added a paragraph to Section 3 clarifying the connection to reinforcement learning.\", \"4_7\": \"We clarified and reworked writing based on your suggestions.\\n\\n8. We reran the experiment with 50 seeds and provided 95% confidence intervals.\", \"question_1\": \"\\u201cTeddy bear\\u201d and \\u201cgrizzly bear\\u201d are different states, so a policy is defined by the probability of the action \\u201chug\\u201d (for instance) in each of those states. In other words, for this MDP, a policy is fully characterised by the pair (p1=P(hug | grizzly), p2=P(hug | teddy)). The change in behavior policy we consider is any change from a policy (p1, p2) used to collect data to an arbitrary policy (p1\\u2019, p2\\u2019).\", \"question_2\": \"This simple environment is not partially observable, only stochastic. We hope that the revised Figure 1 is much clearer.\"}", "{\"title\": \"Authors response\", \"comment\": \"Thank you for the excellent review. Your comments greatly helped to improve the paper.\\n\\n1) We added two algorithms to the appendix, where we describe the model training and usage in a simulation.\\n\\n2) We added an extra discussion section to the appendix, where we discuss the tradeoffs of different models. The manifested tradeoffs will depend on the used planning algorithms. For example, if using a tree search, the causal partial model introduces a larger branching factor and higher variance. 
In practice, we have not observed slower learning when using causal partial models. We will explore more advanced planning algorithms in future work.\\n\\n3) Thank you for the curious questions. Figure 4(a) is interesting exactly because of the observed behavior. In this experiment, the partial view is the intended action. If the behavior policy is random, the intended action is uninformative about the underlying state, so the simulation policy used inside the model has to choose the most rewarding action, independently of the state. If the behavior policy is good, the simulation policy can reproduce the behavior policy by respecting the intended action. And if the behavior policy is bad, the simulation policy can choose the opposite of the intended action. This allows us to find a very good simulation policy when the behavior policy is very bad. Intuitively, given two options, a person who is consistently wrong is more informative than a person who is wrong 50% of the time. \\n\\nTo further improve the policy, the search for better policies should also be done in state s_1, and the model can then be retrained on data from the improved policies. We updated the experiment description to clarify this.\"}", "{\"title\": \"Authors response\", \"comment\": \"Thank you for the question.\", \"about_the_existence_of_the_backdoor\": \"It is true that in a general graphical model, backdoors may not always exist for a given pair of covariate/dependent variables. In reinforcement learning, however, the backdoor always exists, because we have access to the entire computational graph of the policy. For example, the vector of the action probabilities can always serve as a backdoor. We revised the text to make this clearer.\\n\\nA case where we would not have access to a trivial backdoor would be, for example, when learning a policy from human demonstrations (since we do not have access to the human decision mechanisms).
There, the partial view would need to be the observations experienced by the human.\"}", "{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I made a quick assessment of this paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper considers the problem of predicting a variable y given x, where x may suffer from a policy change, e.g., x may follow a different distribution than the original data or suffer from a confounding variable. The paper proceeds by learning a causally correct model, in the sense that the model is robust to any intervention changes. Specifically, the paper considers a setting called a \\\"partial model\\\", meaning a generative model conditioned on functions of past observations. To make the partial model causally correct, the paper conditions the partial model on a backdoor that blocks all paths from the confounding variables.\\n\\n1. The problem that this paper addresses seems to be new and interesting. The approach makes much sense: the problem is due to the confounding effect, which can be addressed by introducing other variables that implicitly block the confounders. \\n\\n2. The paper assumes the existence of the backdoor variable, which is crucial for causal correctness. Does the backdoor always exist? Pearl's book may have some discussion on this. It would still be useful to include some material for unfamiliar readers.\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #4\", \"review\": \"SUMMARY:\\nThe authors apply ideas from causal learning to the problem of model learning in the context of sequential decision-making problems.
They show that models typically learned in this context can be problematic when used for planning. The authors then reformulate the model-learning problem using a causal learning framework and propose a solution to the above-mentioned problem using the concept of \\\"backdoors.\\\" They perform experiments to demonstrate the advantages of doing so.\", \"major_comments\": \"Overall, I'm positive about the submission, though I think there's room for improvement.\\n\\nThe paper is well-written and does a very nice job of formulating the model-learning problem through the lens of causal learning, and it is convincing that the baseline model-learning procedure suffers from confounds. Moreover, the proposed modified procedure for model learning would seem to have the potential to fundamentally and positively impact model-learning in general. Finally, I found that limited experiments supported the points made in the paper.\\n\\nThat said, I think the paper as written could be substantially improved with a little extra effort. First, it does not -- in my opinion -- pass a basic reproducibility test, i.e., I am not confident that I could implement the proposed modified model-learning procedure even after reading the paper. The authors should seriously consider adding an algorithm box with pseudocode to both show the flow of training data to the model-learning steps and also how planning can be accomplished with the learned models.\\n\\nSecond, the paper would greatly benefit from any discussion (theoretical, experimental, or -- ideally -- both) of the tradeoffs that would arise when considering the proposed technique. What price is paid for the gain in planning accuracy? Is more data required for learning compared to non-causal approaches? If such a price is incurred, how was this made clear in the presented results?\\n\\nFinally, an important aspect of the experiments seems to have gone un-discussed. 
Namely, what is the reason for the behavior of the proposed technique in Figure 4(a)? From much of the paper, I would have expected the red dots to live entirely on the horizontal dotted line $V^*_{env}$, but instead this only happens when the behavior policy has the same value. Why is this? Moreover, why do *better* behavior policies result in *worse* performance for the optimal model evaluation policies?\", \"minor_comments\": [\"The authors should define \\\"par_k\\\", first referenced at the bottom of p3, before using it. Similarly for $\\\\psi_k$\", \"p5, second paragraph. I think there's a typo in \\\"... sample $y_2$ from $q(\\\\cdot | h_1)$ ...\\\" (should probably be conditioned on $h_2$ not $h_1$).\"], \"post_response_comments\": \"In my opinion, the authors adequately addressed both my own concerns, and also several valid concerns from the other reviewers. Therefore, I'm raising my score to \\\"accept.\\\"\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper tackles the issue of identifying the causal reasoning behind why partial models in MBRL settings fail to make correct predictions under a new policy. The novel contribution is a framework for learning better partial models based on models learning an interventional conditional, rather than an observational conditional. The paper tries to provide both theoretical and experimental reasoning for this framework.\\n\\nI vote to (weak) reject the paper due to the major issues with section 2. Furthermore, the paper is hard, at times almost impossible, to understand, as too many assumptions are made and too little is explained.\\n\\nRecommendations\\n\\nBecause your graphs are not MDPs, you are not framing your example as an RL problem.
This is causing a number of issues with notation and lack of clarity in the argument you're making.\\n\\n1. It is unclear to me that the FuzzyBear example is correctly constructed as an RL example, reasons being that: \\n- Figure 1 (a) & (b) do not correspond to an MDP as two different states, teddy vs grizzly, are both designated as s_1 and similarly, the two possible actions, hug or run, are both designated as a_1 and thus are not distinct.\\n- Note your terminal state for the episodes\\n- Have a reward for (s0, a0) as every s-a pair should have a reward\\n- It would be helpful to note that the environments in Figure 1 are stochastic\\n\\n2. Clarify notation. There are a number of assumptions about what background knowledge the reader should have. Given the bridging of disciplines in the paper, it would be useful to provide more detail on notation in Section 3.\\n\\n3. Add a section on reinforcement learning in Section 3. If it's the last subsection in section 3, you could describe the relationship between the various causal reasoning and RL principles. This would further clarify how you're bridging these subtopics.\\n\\n4. For the sentence,\\n\\n\\\"Fundamentally, the problem is due to causally incorrect reasoning: the model learns the observational conditional p(r|s0, a0, a1) instead of the interventional conditional given by p(r|s0, do(a0), do(a1)) = Σ_{s1} p(s1|s0, a0)p(r|s1, a1).\\\"\\n\\nAs you don't cover the meaning of the do() operator until a later paragraph, provide a quick description of it as it is not common knowledge to a general AI audience, e.g., where do() indicates that the action was taken.\\n\\n5. Correct the following sentences,\\n\\n\\\"Mathematically, the model with learn the following conditional probability:\\\"\\n\\n\\\"In Section 3, we review relevant concepts from causal reasoning based on which we propose solutions that address the problem.\\\"\\n\\n6.
I recommend putting the interventional conditional equation, p(r|s0, do(a0), do(a1)) = Σ_{s1} p(s1|s0, a0)p(r|s1, a1), on its own line as the reader is doing a comparison of it with the previous equation, p(r|s0, a0, a1), given on page 2.\\n\\n7. Strengthen your abstract by aligning more with claims you make in your conclusion.\\n\\n8. The experiments in Figure 5 are averaged over 5 seeds. This is not enough to be statistically significant - furthermore, there are no error bars in the Figure.\\n\\nQuestion(s):\\n1. You've indicated two policies for Figure 1 (a):\\n- pi1: the agent knows it is encountering a teddy bear, so it will hug\\n- pi2: the agent knows it is encountering a grizzly bear, so it will run\\nIs this the \\\"change in the behavior policy\\\" that you're referring to? If so, make this clearer; this currently requires a lot of work by the reader to make sense of it.\\n\\n2. What are the partially observable parts of the environments in Figure 1 (a) & (b)? Make this clear.\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"*Summary*\\n\\nThis paper considers the effect of partial models in RL; the authors claim that these models can be causally wrong and hence result in a wrong policy (sub-optimal set of actions). The authors demonstrate this issue with a simple MDP model, and emphasize the importance of the behavior policy and the data generation process.
Then the authors suggest a simple solution using backdoors (Pearl et al) to learn causally correct models. They also conduct experiments to support their claims.\\n\\n*Decision*\\n\\nI vote for rejection of this paper, based on the following argument:\\n\\nTo my understanding, the authors are basically solving the \\u201coff-policy policy evaluation\\u201d problem, without relating to this literature. For example, the MDP example is just an off-policy policy evaluation problem, and it is very well known that in this case you need to consider the behavior policy, for example with importance sampling.\\nEven the authors' definition of the problem at the end of page 4, and beginning of page 5, is the problem of \\u201coff-policy policy evaluation\\u201d when y_t = r_t \\nThe authors have not cited any paper in this literature, and did not situate their work with respect to this literature. To my understanding, the proposed solution is basically importance sampling, which is very well known and studied in the field.\\n\\nAdditionally, I suggest that the authors be more careful with their citations; for example, the authors cited Silver et al 2017 [The Predictron: End-To-End Learning and Planning] as one recent paper using the method; however Silver et al 2017 is in the MRP setting (Markov Reward Process) where there is no action, so the described problem setting doesn't apply. \\n\\nImprovement\\n\\nThe current manuscript needs a major revision, mainly 1) situate the work with respect to the off-policy policy evaluation literature, and then 2) considering step 1, a clarification of the novelty/contribution of the current paper is needed.
rkgb9kSKwS
Spectral Nonlocal Block for Neural Network
[ "Lei Zhu", "Qi She", "Lidan Zhang", "Ping guo" ]
The nonlocal network is designed for capturing long-range spatial-temporal dependencies in several computer vision tasks. Although having shown excellent performances, it needs an elaborate preparation for both the number and position of the building blocks. In this paper, we propose a new formulation of the nonlocal block and interpret it from the general graph signal processing perspective, where we view it as a fully-connected graph filter approximated by Chebyshev polynomials. The proposed nonlocal block is more efficient and robust, which is a generalized form of existing nonlocal blocks (e.g. nonlocal block, nonlocal stage). Moreover, we give the stable hypothesis and show that the steady-state of the deeper nonlocal structure should meet with it. Based on the stable hypothesis, a full-order approximation of the nonlocal block is derived for consecutive connections. Experimental results illustrate the clear-cut improvement and practical applicability of the generalized nonlocal block on both image and video classification tasks.
[ "Nonlocal Neural Network", "Image Classification", "Action Recognition" ]
Reject
https://openreview.net/pdf?id=rkgb9kSKwS
https://openreview.net/forum?id=rkgb9kSKwS
ICLR.cc/2020/Conference
2020
{ "note_id": [ "7cwa_fkVn7", "rJx8yhU2sr", "BJxv3o8njH", "S1luaFS3ir", "r1eqZWr3sS", "BJx4ktM3oH", "rkgAEzGhiB", "rke6CgqpqH", "S1gW3qf29r", "rkeQXGZ_qS", "Hyg-EusOKH" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review", "official_review" ], "note_created": [ 1576798734536, 1573837790392, 1573837742728, 1573833152124, 1573830913654, 1573820636134, 1573818934398, 1572868309333, 1572772520543, 1572504091276, 1571498025206 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1868/Authors" ], [ "ICLR.cc/2020/Conference/Paper1868/Authors" ], [ "ICLR.cc/2020/Conference/Paper1868/Authors" ], [ "ICLR.cc/2020/Conference/Paper1868/Authors" ], [ "ICLR.cc/2020/Conference/Paper1868/Authors" ], [ "ICLR.cc/2020/Conference/Paper1868/Authors" ], [ "ICLR.cc/2020/Conference/Paper1868/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper1868/AnonReviewer4" ], [ "ICLR.cc/2020/Conference/Paper1868/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper1868/AnonReviewer2" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"This paper proposes a new formulation of the non-local block and interpret it from the graph view. The idea is interesting and the experimental results seems to be promising.\\n\\nReviewer has two major concerns. The first is the presentation, which is not clear enough. The second is the experimental design and analysis. The authors add more video dataset in the revision, but still lack comprehensive experimental analysis for video-based applications. \\n\\nOverall, the idea of non-local block from graph view is interesting. 
However, the presentation of the paper needs further polish and thus does not meet the standard of ICLR.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Compared with more SOTA methods and improve the clarity of the writing (2/2)\", \"comment\": \">4. The relation of the coverage of the critical parts on birds and the long-range dependency. More background descriptions and interpretations of the results are needed.\\n\\nThe influence of the long-range dependencies on the convergence of critical parts can be shown especially by the similar parts (e.g. the left and right bird feet, left and right bird wings). If the long-range dependency is not well considered, these similar parts of the birds sometimes are not learned simultaneously. For example, in the third row of Fig4, the ResNet only focuses on the right wing of the bird while neglecting the equally important left wing. But when adding SNL to capture the long-range dependencies, it can also focus on the left wing, which is a critical part just like the right wing. The long-range dependence of similar features is kept via our SNL.\\n\\n>5. Clarity of the writing.\\nWe do thank your suggestions about our writing. The informal use of English, mismatched descriptions, and undefined acronyms have been fixed in the updated paper. All the corrections should improve the writing quality of this paper. In our modified version, we have used the full name of each model for clarity, such as the Compact Generalized Nonlocal Block (CGNL) and the Double Attention Network (A2Net). \\n\\nHope our *additional experiments* on 4 extra SOTA models eliminate your concern, and we humbly ask whether you can improve your rating of our work. Thank you!\"}", "{\"title\": \"Compared with more SOTA methods and improve the clarity of the writing (1/2)\", \"comment\": \"Thanks so much for pointing out the weaknesses in how we explain our idea. We agree with your concerns about the reasonability of the experiments.
Hope our explanations below and the updated manuscript can make it clearer to you.\\n\\n>1. Experiments for adding our block into more SOTA models. \\n\\nWe have done *additional experiments* on more SOTA models inserted with our SNL blocks; the results are shown below:\\n\\n-----------------------------------------------------------\\n| Methods | Top1 |\\n| P3D [1] | 81.23% |\\n| P3D + SNL(Ours) | 82.65% |\\n-----------------------------------------------------------\\n| SlowFast [2] | 80.54% |\\n| SlowFast + SNL (Ours) | 83.92% |\\n----------------------------------------------------------\\n| MARS (RGB) [3] | 92.29% |\\n| MARS + SNL (Ours) | 92.79% |\\n----------------------------------------------------------\\n| VTN [4] | 90.06% |\\n| VTN + SNL (Ours) | 90.34% |\\n-----------------------------------------------------------\\n\\nFor the Pseudo 3D Convolutional Network (P3D) and the Motion-augmented RGB Stream (MARS), our SNL block is inserted into the Pseudo-3D network right before the last residual layer of the res3. For the Slow-Fast Network (Slow-Fast), we replace the original NL block with our SNL block. For the Video Transformer Network (VTN), we replace its multi-head self-attention blocks (parallel-connected NL blocks) with our SNL blocks. The SlowFast network is trained end-to-end on the UCF-101 dataset while the others use the model pretrained on the Kinetics-400 dataset and fine-tuned on the UCF-101 dataset. \\n\\nWe can see that all the performances are improved when adding our proposed SNL model. In sum, our SNL blocks have shown superior results across four additional SOTAs (P3D, SlowFast, VTN and MARS) in the action recognition tasks. We have included these results in Appendix F.\\n\\n[1] Qiu Z, Yao T, Mei T. Learning spatio-temporal representation with pseudo-3d residual networks[C]//ICCV 2017: 5533-5541.\\n[2] Feichtenhofer C, Fan H, Malik J, et al. Slowfast networks for video recognition[C]//ICCV 2019: 6202-6211.\\n[3] Crasto N, Weinzaepfel P, Alahari K, et al.
MARS: Motion-Augmented RGB Stream for Action Recognition[C]//CVPR 2019: 7882-7891.\\n[4] Kozlov A, Andronov V, Gritsenko Y. Lightweight Network Architecture for Real-Time Action Recognition[J]. arXiv preprint arXiv:1905.08711, 2019.\\n\\n>2. the conclusion from Table 4; Are you trying to demonstrate that the best configuration is DP3, and increasing the number of consecutive non-local blocks (from SP3 to SP5) doesn't work? the paper gives a stable hypothesis for deeper nonlocal structure, but experimentally the deeper structure doesn't work well.\", \"the_conclusion_of_the_table_4_are\": \"1. Adding more SNL blocks into different layers is better than adding them into only one layer. (According to \\\"1\\\" (DP1) and DP3); \\n2. The proposed gSNL block is more robust, without performance drops when going deeper, based on our stable hypothesis. (According to \\\"1\\\" (SP1), SP3 and SP5).\\n\\nTo clarify these two conclusions, we replaced the \\\"1\\\" with \\\"SP1 or DP1\\\" and added the explanations in Sec.4 of the updated version. \\n\\ngSNL is stable and has the potential for constructing deeper connections. The reason why more blocks do not improve much is actually task dependent. Deeper structures should be able to learn better representations and converge to the optimal solution, while not decreasing the performance (that is why ResNet was proposed, for enabling much deeper network structures for learning representations). On one hand, we show the deeper nonlocal structure under the stable hypothesis does not have a performance drop (SP5 vs SP3), and on the other hand, SP3 (the deeper structure) is better than SP1 (one block), which already validates that the deeper structure is better. These claims have been updated in Sec 4.\\n\\n>3. Fig 4 needs background descriptions. Are the images randomly chosen? Does Ours here mean SNL or gSNL?
Is the colored superimposition the attention map and how to interpret it?\", \"comment\": \"Fig.4 shows the feature map (not the attention map) of the SNL block, which is the output (e.g. 32 * 32) of the last feature layer up-sampled into the original size (e.g. 512 * 512) of the input image and then added on the source images. Thus, the feature map = up-sampled feature (32*32 -> 512*512) + source image (512*512). The images are randomly chosen. Both SNL and gSNL are our proposed nonlocal blocks. SNL focuses on more crucial parts of the birds, benefiting from the flexibility of $W_1$ ($W_1$ controls the intensity of the graph filter, which is related to the feature representations as discussed in Filx et al 2019). We have added more descriptions into Section4.3.\"}", "{\"title\": \"Presentation of the paper has been improved and Results improvement is not incremental at all (2/2)\", \"comment\": \">6. Explanation of table 4.2. Where do we see different number of nonlocal units.\\n\\nSorry for the typo. Table 4.2 is actually the Table 4 mentioned in the context; we have corrected it in the updated version.\\n\\nThe experimental results of different numbers are shown in Table 4. According to the results of DP1 and DP3, we can see that inserting our SNL block into more layers can achieve higher performance than other models and has better results than inserting it into only one layer. Moreover, the proposed gSNL block is learned more stably than others by meeting our proposed stable hypothesis, based on the results of SP1, SP3, SP5.\\n\\n>7. Explanations of Top1 and Top5 accuracy in the paper. \\n\\nTop1 and Top5 are the evaluation criteria for image classification and action classification. Top1 accuracy means that the model prediction (the one with the highest probability) is exactly the expected label. Top5 accuracy means that the expected label is among the model's 5 highest-probability predictions.
Sorry for the confusion; we have added details to distinguish these from the confusing phrase \\u201ctop 32 eigenvalues\\u201d in the first paragraph of page 9.\\n\\n(2) Related to the results.\\n\\nThe spectral nonlocal (SNL) block is an efficient, simple, and generic component for capturing long-range spatial-temporal dependencies with deep neural networks. Compared with other alternatives (NL, CGNL, NL-stage), we focus on emphasizing both the efficiency and robustness of SNL w.r.t. the number and position inserted in the network, which have been shown in Tables 2, 3, and 4. \\n\\nBecause large-scale datasets are used for validating our block, an improvement of 1% is substantial compared with the alternatives. The CGNL (Yue et al 2018) only improves 0.58% and the NS (Tao et al 2018) improves 0.09% over the original NL block. \\n\\nTo eliminate your concern, we have also done *additional experiments* on the video-based person re-identification task, which utilizes the network structure RTM with attention, called RTMta (Gao et al 2018). It shows that our proposed block can generate a clear-cut improvement (nearly 14% accuracy improvement on the ilidsvid dataset).\\n\\n| Methods | ilidsvid |\\n| |Rank1 | mAP |\\n-------------------------------------------------------------\\n|RTMta |58.70%|69.00%|\\n-------------------------------------------------------------\\n|RTMta + NL |56.00%|66.30%|\\n-------------------------------------------------------------\\n|RTMta + SNL (Ours) |70.00%|79.40%|\\n-------------------------------------------------------------\\n\\n*Rank1: cumulative matching curve at rank-1 (larger is better, maximum 1)\\n*mAP: mean average precision score\\n\\nWe have included these person re-identification results in Appendix E. Furthermore, we would like to note that the performance improvements are actually task-dependent.
We have also summarized the different tasks we have done: image classification (datasets: Cifar10, Cifar100, CUB-200 in Section4), action recognition (datasets: UCF101 in Section4, Kinetics-400 in Appendix F (newly added)), semantic segmentation (datasets: VOC2012 in Appendix C), person re-identification (datasets: Mars, ilidsvid, prid2011 in Appendix E (newly added)) benefit differently from our SNL, but they consistently have improvements. \\n\\nWe here provide the details of the improvements of all the nonlocal-based methods, which are mostly less than or around 1%.\\n\\n-----------------------------------------------------------------------------------------------------------------------\\n| | CIFAR10|CIFAR100| CUB200| UCF101| Mars | ilidsvid |prid2011|VOC2012| Kinetics-400 |\\n| NS (Tao et al NeurIPS 2018) | + 0.09% | +0.09% | - | - | - | - | - | - | - |\\n| CGNL (Yue et al, NeurIPS 2018) | - | - | +0.58% | +1.03% | - | - | - | - | +1.20% | \\n| A2 (Chen et al, NeurIPS 2018) | - | - | - | +0.04%| - | - | - | - | +2.6% |\\n| CGD (He et al,arXiv:1907.09665)| - | +0.99% | - | +1.18% | - | - | - | - | +0.3% |\\n| Ours | +1.58% | +1.10% | +0.56% | +1.84% | +1.00% | +7.35% | +2.75%| +0.3% | +0.82% |\\n\\n* '-' : means that the dataset is not tested in their paper. \\n* '+' : means the average improvement compared with the original NL block on each dataset.\\n\\nMany thanks for your time and suggestions, and we hope our responses and updated paper make our presentation and results more solid to you.\"}", "{\"title\": \"Presentation of the paper has been improved and Results improvement is not incremental at all (1/2)\", \"comment\": \"Thank you for your comments. We address specific concerns as follows.\\n(1) Related to the presentation\\n\\n>1. The theorem in Eq.4 is original or belongs to Shuman et al 2013.\\n\\nThe Eq.4 is originally proposed by us, and it is derived from the definition of the graph filter in Shuman et al. 2013.
In Sec.3.A of this reference, the authors give the formulation of the graph filter and demonstrate that it can be used to implement the nonlocal means filter, without giving a specific formulation like ours. \\n\\nOur Eq.4 formulates a mathematical description: $F(A,Z)=U \\\\Omega U^{T} Z$ that defines a fully-connected graph filter to represent the nonlocal means. $Z$ is the transformed feature map, $A$ is the affinity matrix, $U$ is the eigenvector matrix and $\\\\Omega$ is the parameter matrix to reflect the strength of the filter.\\n\\nWe have clarified more details about the originality of Eq.4 and its extension from the Shuman et al 2013 paper in the first paragraph of Sec.3.1.\\n\\n>2. The Eq.8 seems to be an arbitrary decomposition of the original nonlocal operator without any reference to the chebyshev expansion.\\n\\nIt is not an arbitrary decomposition of the original nonlocal operator. \\n\\nThe original nonlocal operator (NL) is $F(A,Z)= A Z W$, in which the weight matrix $W$ has dimension $C_1 \\\\times C_2$ and $A$ has dimension $N \\\\times N$ ($W$ cannot multiply the affinity matrix $A$ directly); our proposed SNL operator is $F_s(A,Z)=Z W_1 + A Z W_2$, thus the first term $Z W_1$ cannot be obtained except for $A = I$ (identity matrix). Our SNL is obtained by approximating the fully-connected graph filter with the help of the $1^{st}$-order Chebyshev approximation (Defferrard et al 2016). We have further explained this point in Sec.3.1 of our paper.
\\n\\nIn Fig 1 & 4, the representation of bird wings and beak have been better learned using SNL (with more distinguished regions highlighted), because $W_1$ in our SNL is more flexible without assumptions as in NL and NL-stage.\\n\\nFig.4 gives more examples of the feature maps, which shows that our proposed SNL focuses on more crucial part of the birds (the same as fig 1) benefited from the flexibility of $W_1$ ($W_1$ controls the intensity of the graph filter, which is related to the feature representations as discussed in Filx et al 2019)\\n \\nWe have added the retrospects of why the proposed novel operators have achieved better discriminative feature representations in the second paragraph of Sec.4.3.\\n\\n>4. The explanation of CFL; how is it related to the values of Ws; Can we take very small ?\\n\\nYes, CFL condition is the Courant-Friedrichs-Lewy sampling condition. This condition holds when the weight parameters are small, which has been demonstrated in Tao et al 2018. A brief illustration here, if using the same affinity matrix $A$ and connecting multiple SNL blocks, these successive blocks can be seen as a diffusion progress which satisfies: $$X^{N + 1} - X^{N}=\\\\frac{dX}{dt} = X N W_{1}+ A X^{N} W_{2} \\\\quad, s.t.\\\\quad a_{ij} > 0, x_{ij} > 0$$ \\n \\nIf $|W_{1}|$ and $|W_{2}|$ are much larger than 0, i.e. $|W_{1}| \\\\gg 0 $, $|W_{2}| \\\\gg 0$. It will make $|X^{N} W_{1}| \\\\gg 0$, $|A X^{N} W_{2}| \\\\gg 0$ and then make $|\\\\frac{dX}{dt}| \\\\gg 0$ which does not satisfies the CFL condition and lead to the unstable dynamics, details can be seen in Tao et al 2018. 
We also verified the authenticity of this hypothesis in the following table by considering the learned weight parameters in $W_{1}$ and $W_{2}$:\\n\\n|The range of weight values| <-0.4 | (-0.4,-0.2)| (-0.2, -0.1) | (-0.1, 0) | (0, 0.1) | (0.1, 0.2) | (0.2,0.4) | >0.4 |\\n|Number of weights | 14 | 151 | 10104 | 526557 | 525300 | 10099 | 156 | 17 |\\n|Percentage |0.001%| 0.014% | 0.942% | 49.1% | 49.0% | 0.942% | 0.014% |0.001%| \\n\\nWe can see that nearly 98% of the learned weights are in the range of (-0.1, 0.1). The minimum and maximum values of those weight parameters are -0.72 and 0.82, which meet the CFL condition. \\n\\nThe above results also reflect that we cannot take arbitrarily small values for the weight parameter matrix $W$, because there are still some parameters in the range of (-0.2, -0.1) and (0.1, 0.2). \\n\\n>5. The upper limit in the sums after Eq. 10 is unclear.\\n\\nThe sums after Eq.10 are the learned weight parameters which form the matrices $W_{1}$, $W_{2}$, so they should also satisfy the CFL condition as we discussed above. From the above table, the empirical study shows that the sums are less than 1. Sorry that we can only show the sums should meet the CFL condition, while we cannot prove their upper bound (less than \\\"1\\\") in theory at the current stage.\\n\\nWe have already clarified these points after the Eq.10 in our updated version.
\\nWe have done experiments on the large-scale video dataset Mars (used for video-person Re-identification), which contains 20,000 videos (nearly 260,000 images) with 1,261 person IDs, the results are shown below:\\n\\n| | Mars[1] |\\n| |Rank1 | mAP |\\n-----------------------------------------------------\\n|RTMta |79.10%|71.70%|\\n-----------------------------------------------------\\n|RTMta + NL |80.90%|72.90%|\\n-----------------------------------------------------\\n|RTMta + SNL (Ours)|81.90%|74.00%|\\n-----------------------------------------------------\\n|RTMtp |82.30%|75.70%|\\n-----------------------------------------------------\\n|RTMtp + NL |83.21%|76.54%|\\n-----------------------------------------------------\\n|RTMtp + SNL (Ours)|83.40%|76.80%|\\n-----------------------------------------------------\\n\\n*Rank1: cumulative matching curve at rank-1 (larger is better, maximum 1)\\n*mAP: mean average precision score\\n\\nWe can see that in Mars datasets, our proposed block inserted in SOTA networks can generate 1% - 2% improvements consistently. For the network backbone, we follow the strategy of [2] that uses the pooling (RTMtp) and attention (RTMta) to fuse the spatial-temporal features. 
More details have been included in Appendix E in the updated paper.\\n\\nExperiments for Kinetics-400 with the state-of-the-art model (slowfast[3]) using our SNL block are shown below; more details have been included in Appendix F in the updated paper.\\n\\n| Method | Top1 | \\n----------------------------------------------\\n|Slowfast |77.88%|\\n----------------------------------------------\\n|Slowfast + SNL (Ours)|79.98%|\\n---------------------------------------------- \\n\\nOur SNL inserted into the SlowFast model can also generate higher classification accuracy.\\n\\nIn summary, we have covered 4 computer vision tasks in total over 9 popular benchmarks, including image classification (Cifar10, Cifar100, CUB-200 in Section4), action recognition (UCF101 in Section4, Kinetics-400 in Appendix F (newly added)), semantic segmentation (VOC2012 in Appendix C), and person re-identification (Mars, ilidsvid, prid2011 in Appendix E (newly added)), which all benefit from our SNL. For details, please refer to our updated paper. \\n\\nThanks again for your comments! We hope our response makes our work more solid for you.\\n\\n\\n[1] Zheng L, Bie Z, Sun Y, et al. Mars: A video benchmark for large-scale person re-identification[C]//European Conference on Computer Vision. Springer, Cham, 2016: 868-884.\\n[2] Gao J, Nevatia R. Revisiting temporal modeling for video-based person reid[J]. arXiv preprint arXiv:1805.02104, 2018.\\n[3] Feichtenhofer C, Fan H, Malik J, et al. Slowfast networks for video recognition[C]//Proceedings of the IEEE International Conference on Computer Vision. 2019: 6202-6211.\"}", "{\"title\": \"18 more experiments on 3 additional datasets have been added to demonstrate our classification results.\", \"comment\": \"Thanks for reviewing our paper with informative suggestions.\\n\\n>1. CIFAR-10 and CIFAR-100 datasets may not be the best datasets.
\\n\\n*Additional experiments* on other challenging datasets such as Mars[1], ilidsvid[2], and prd2011[3] have been conducted for video-based person Re-identification tasks as follows. We choose CIFAR-10/100 datasets due to the fairly compared with other SOTAs (Tao et al 2018, He et al, 2019). \\n\\n| | Mars[1] | ilidsvid[2] | prd2011[3] | \\n| |Rank1 | mAP | Rank1| mAP | Rank1 |mAP |\\n--------------------------------------------------------------------------------------------------\\n|RTMta |79.10%|71.70%|58.70%|69.00%|79.80%|86.60%|\\n--------------------------------------------------------------------------------------------------\\n|RTMta + NL |80.90%|72.90%|56.00%|66.30%|85.40%|90.70%|\\n--------------------------------------------------------------------------------------------------\\n|RTMta + SNL (Ours)|81.90%|74.00%|70.00%|79.40%|86.50%|91.50%|\\n--------------------------------------------------------------------------------------------------\\n|RTMtp |82.30%|75.70%|74.70%|81.60%|86.50%|90.50%|\\n--------------------------------------------------------------------------------------------------\\n|RTMtp + NL |83.21%|76.54%|75.30%|83.00%|85.40%|89.70%|\\n--------------------------------------------------------------------------------------------------\\n|RTMtp + SNL (Ours)|83.40%|76.80%|76.70%|84.80%|88.80%|92.40%|\\n---------------------------------------------------------------------------------------------------\\n\\n*Rank1: cumulative matching curve at rank-1 (larger is better, maximum 1)\\n*mAP: mean average precision score\\n\\nFor the backbone, we follow the strategy of [4] that uses the pooling (RTMtp) and attention (RTMta) to fuse the spatial-temporal features. (Note that the models are totally trained on ilidsvid and prid2011 rather than fine-tuning the pre-trained model on *Mars* datasets. 
We can see that our proposed block generates consistent improvements on these datasets.\\n\\nWe have added these additional experiments into Appendix E.\\n\\n[1] Zheng L, Bie Z, Sun Y, et al. Mars: A video benchmark for large-scale person re-identification[C]//European Conference on Computer Vision. Springer, Cham, 2016: 868-884.\\n[2] Wang T, Gong S, Zhu X, et al. Person re-identification by video ranking[C]//European Conference on Computer Vision. Springer, Cham, 2014: 688-703.\\n[3] Hirzer M, Beleznai C, Roth P M, et al. Person re-identification by descriptive and discriminative classification[C]//Scandinavian conference on Image analysis. Springer, Berlin, Heidelberg, 2011: 91-102.\\n[4] Gao J, Nevatia R. Revisiting temporal modeling for video-based person reid[J]. arXiv preprint arXiv:1805.02104, 2018.\\n\\n>2. Think about using this in a generative model such as GAN.\\n \\nThanks for the suggestion! Our nonlocal block has the potential to be applied to the self-attention GAN. Rather than considering high-resolution details as a function of only spatially local points in lower-resolution feature maps, our SNL is good at generating more details using cues from all feature locations, potentially even better than the conventional nonlocal block (Wang et al 2018). It can thus learn specific structures and geometric features besides texture features alone.
We have added this future application of the proposed block to the Conclusion.\\n\\nThanks again for your review; we hope our response makes the work clearer.\"}", "{\"rating\": \"1: Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #1\", \"review\": [\"I have two general concerns, the first related to the presentation and the second to the relevance of the results.\", \"(1) The presentation is confusing at many points, for instance:\", \"It is unclear whether the theorem in Eq. 4 is original or belongs to Shuman et al. 2013 (no proof is given).\", \"Eq. 8 seems an arbitrary decomposition of the original NonLocal operator that could have been proposed without any reference to the Chebyshev expansion (which, on the other hand, is truncated to 1st order with no extra explanation).\", \"The point of Fig. 1 and Fig. 4 is not clear. Fig. 1 explains how SpectralNonLocal reduces to NonLocal and NonLocalStage, but we can see this from the formulas. I don't see how this discussion of the Ws relates to the regions highlighted in the bird.\", \"The same applies to Fig. 4. What are we supposed to see in Fig. 4 (and, more importantly, why)?\", \"What is the CFL condition? (Is it the Courant-Friedrichs\\u2013Lewy sampling condition?) How is it related to the values of the Ws? Can we take those arbitrarily small, as suggested in that proof?\", \"The upper limit in the sums after Eq. 10 is unclear.\", \"The first time Table 4.2 is cited there is no context to understand it (in fact, there is no table labeled \\\"Table 4.2\\\"). Where do we see the different numbers of NonLocal units? This only becomes clear on page 9, not when the table is first cited on page 6.\", \"The explanation of the experiments is a little bit confusing (e.g.
what do top1 and top5 mean in the tables?). The only explanation of \\\"top-something\\\" I found in the text has to do with eigenvectors in fig. 5. Does this also apply to the \\\"topX\\\" in the figures?\", \"(2) Nevertheless, the main concern is the scarce relevance of the results: the differences in behavior in all tables are about 1%. What, then, is the real advantage of the proposed modification?\"]}", "{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #4\", \"review\": \"The paper proposes a spectral non-local block, which generalizes the non-local block and non-local stage in the literature. The proposed spectral non-local block can be plugged into a neural network to improve its effectiveness. The paper also provides theoretical analyses of the stability of the proposed method, and extends the method by including more Chebyshev polynomial terms. Experiments are conducted on image classification and action recognition tasks, and they validate the effectiveness of the proposed method.\\n\\nThe idea is well-motivated, and it is a generalization of existing works in the literature. I do like this idea. However, I am afraid that the idea is not well explained and supported, so I gave a weak reject to encourage the authors to further improve the paper.\\n\\nThe major concern I have is the soundness of the experiments. The experiments in the paper show a relative performance gain with respect to a baseline method. There seems to be a lack of comparison with state-of-the-art methods in the literature. For example, in Table 8, a performance gain is observed when compared with I3D. However, recent SOTA models can achieve much higher accuracy than the baseline, and also than the proposed method. 
Since the proposed method is generic to all neural nets, it makes more sense to compare with SOTA and make improvements on top of SOTA. What is the conclusion from Table 4? Are you trying to demonstrate that the best configuration is DP3, and that increasing the number of consecutive non-local blocks (from SP3 to SP5) doesn't work? This is awkward, since the paper gives a stability hypothesis for deeper nonlocal structures, but experimentally the deeper structure doesn't work well. Figure 4 is abrupt, without much background description. Are the images randomly chosen? Does \\\"Ours\\\" here mean SNL or gSNL? Is the colored superimposition the attention map (I believe so, but the paper doesn't indicate it), and how should it be interpreted? What is the relation between the coverage of the critical parts of the birds and the long-range dependency? More background description and interpretation of the results are needed. \\n\\nAnother concern I have is the clarity of the writing. There are quite a number of informal uses of English, mismatched descriptions, undefined acronyms, etc. For example, the caption of Fig. 1 says that self-attention and self-preserving take effect through W1 and W2, which contradicts what is illustrated in the figure. Also, the terms self-attention and self-preserving, and other terms such as CGNL, A2, and Hadama (Hadamard?) product, are not formally defined or described. A lot of grammar errors and informal uses of English are present, such as \\\"which lead to\\\", \\\"the weight means\\\", \\\"when using in the neural network\\\", \\\"fig. 4\\\", \\\"Figure. 
2\\\", \\\"more liberty for the parameter learning.\\\", etc.\"}", "{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": [\"SUMMARY:\", \"Propose spectral non-local block\", \"improvement on image and video classification tasks\", \"Apologies, I am not at all familiar with the theory and math behind this proposal, I do not think I am in a position to review this paper. The experiments seem convincing enough that the authors made enough effort to prove their method might work.\", \"Feature maps to show robustness of method is a good point\", \"CIFAR-10 and CIFAR-100 are certainly a good start, but might not be the best datasets to test for image classification, in lieu of ImageNet and others.\", \"Classification itself is a good start, it might be interesting to think about using this in a generative model such as GAN. The content reminds me of Self-Attention GAN which uses a similar non-local block (self-attention).\"]}", "{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"In this paper, authors propose a spectral nonlocal block. First, they re-interpret the nonlocal blocks in a graph view and then use Chebyshev approximation to obtain the spectral nonlocal block which is quite simple by adding a ZW_1 term. Furthermore, they analyze the steady-state to build up a deeper nonlocal structure. Also, the gSNL is simple by adding a (2A-I)ZW_3 term.\\n\\nOverall, the paper is written well. I like the idea to interpret the nonlocal operation in the graph view. 
More importantly, the resulting formulation is quite concise to implement. However, my main concern is the experiments, which should be further strengthened by performing large-scale video classification, e.g., on Kinetics-400.\"}" ] }
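As background for the formulations debated in the record above — the conventional non-local operator, the SNL's extra ZW_1 self-preserving term, and the gSNL's (2A-I)ZW_3 term mentioned by Review #2 — here is a minimal NumPy sketch. It is illustrative only, not the paper's implementation: the softmax affinity and the weight shapes are assumptions.

```python
import numpy as np

def affinity(Z):
    """Row-normalized affinity A between all positions (softmax of Z Z^T).

    Z has shape (N, C): N spatial positions with C-dimensional features.
    """
    S = Z @ Z.T
    S = S - S.max(axis=1, keepdims=True)  # subtract row max for numerical stability
    E = np.exp(S)
    return E / E.sum(axis=1, keepdims=True)

def nonlocal_block(Z, W):
    """Conventional non-local operator: residual plus globally aggregated features."""
    return Z + affinity(Z) @ Z @ W

def snl_block(Z, W1, W2):
    """Spectral non-local (1st-order Chebyshev form): a self-preserving
    term Z W1 alongside the self-attention term A Z W2."""
    A = affinity(Z)
    return Z + Z @ W1 + A @ Z @ W2

def gsnl_block(Z, W1, W2, W3):
    """Generalized SNL: adds a (2A - I) Z W3 term for the extra Chebyshev order."""
    A = affinity(Z)
    I = np.eye(Z.shape[0])
    return Z + Z @ W1 + A @ Z @ W2 + (2.0 * A - I) @ Z @ W3
```

With W3 set to zero, gsnl_block reduces to snl_block, mirroring the reviewers' observation that each variant is a small additive extension of the previous one.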
BJlZ5ySKPH
U-GAT-IT: Unsupervised Generative Attentional Networks with Adaptive Layer-Instance Normalization for Image-to-Image Translation
[ "Junho Kim", "Minjae Kim", "Hyeonwoo Kang", "Kwang Hee Lee" ]
We propose a novel method for unsupervised image-to-image translation, which incorporates a new attention module and a new learnable normalization function in an end-to-end manner. The attention module guides our model to focus on more important regions distinguishing between source and target domains based on the attention map obtained by the auxiliary classifier. Unlike previous attention-based method which cannot handle the geometric changes between domains, our model can translate both images requiring holistic changes and images requiring large shape changes. Moreover, our new AdaLIN (Adaptive Layer-Instance Normalization) function helps our attention-guided model to flexibly control the amount of change in shape and texture by learned parameters depending on datasets. Experimental results show the superiority of the proposed method compared to the existing state-of-the-art models with a fixed network architecture and hyper-parameters.
[ "Image-to-Image Translation", "Generative Attentional Networks", "Adaptive Layer-Instance Normalization" ]
Accept (Poster)
https://openreview.net/pdf?id=BJlZ5ySKPH
https://openreview.net/forum?id=BJlZ5ySKPH
ICLR.cc/2020/Conference
2020
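The AdaLIN function described in the abstract above interpolates between instance and layer normalization with a learned ratio. As a point of reference, below is a minimal NumPy sketch of the published formula for a single (C, H, W) feature map; the epsilon value and the clipping of rho to [0, 1] are implementation assumptions, and in the actual model gamma and beta are produced by a fully connected layer rather than passed in directly.

```python
import numpy as np

def adalin(x, gamma, beta, rho, eps=1e-5):
    """Adaptive Layer-Instance Normalization sketch for one feature map x
    of shape (C, H, W). rho in [0, 1] interpolates between instance norm
    (rho=1, per-channel statistics over H, W) and layer norm (rho=0,
    statistics over the whole layer); gamma and beta scale and shift the
    interpolated result."""
    mu_i = x.mean(axis=(1, 2), keepdims=True)   # instance (per-channel) statistics
    var_i = x.var(axis=(1, 2), keepdims=True)
    mu_l = x.mean(keepdims=True)                 # layer (whole-tensor) statistics
    var_l = x.var(keepdims=True)
    x_in = (x - mu_i) / np.sqrt(var_i + eps)
    x_ln = (x - mu_l) / np.sqrt(var_l + eps)
    rho = np.clip(rho, 0.0, 1.0)
    return gamma * (rho * x_in + (1.0 - rho) * x_ln) + beta
```

At rho=1 this reduces to AdaIN-style per-channel normalization, which preserves more of the source content; at rho=0 it reduces to layer normalization, which pushes the output further toward the target style — the trade-off the authors describe in their responses below.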
{ "note_id": [ "qr0-XhhOa", "SJl2YNBMjB", "BklxmksZjB", "SkgVvj-ZiS", "Syg1daPM9B", "rJeTLRLCFr", "B1g6gslaFr", "HJeuDHzfYr" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review", "comment" ], "note_created": [ 1576798734507, 1573176451859, 1573134104286, 1573096284377, 1572138343289, 1571872340736, 1571781365017, 1571067232388 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1867/Authors" ], [ "ICLR.cc/2020/Conference/Paper1867/Authors" ], [ "ICLR.cc/2020/Conference/Paper1867/Authors" ], [ "ICLR.cc/2020/Conference/Paper1867/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper1867/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper1867/AnonReviewer3" ], [ "~Kyunghyun_Cho1" ] ], "structured_content_str": [ "{\"decision\": \"Accept (Poster)\", \"comment\": \"The paper proposes a new architecture for unsupervised image2image translation.\\nFollowing the revision/discussion, all reviewers agree that the proposed ideas are reasonable, well described, convincingly validated, and of clear though limited novelty. Accept.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Response to Reviewer #3\", \"comment\": \"Thank you for the valuable comments and constructive feedback. In the revised draft, we mark our major revisions in \\u201cviolet\\u201d, and would like to answer the reviewer\\u2019s questions as follows:\\n\\n1. Description of CAM\\n\\nAs you suggested, in the revised draft we add the related works, including a description of CAM, in Appendix A.\\n\\n2. The local and global discriminators\\n\\nAs you suggested, in the revised draft we add a description of the multi-scale discriminator in Sec. 2.1.2.\\n\\n3. Result without CAM (Figure 2(f))\\n\\nTo verify the effect of the CAM, we kept all the same hyper-parameters and retrained the model.\\n\\n4. 
The generator model architecture (Figure 1)\\n\\nYou're right. We will modify the figure to show that the encoder feature maps are also fed into the adaptive residual blocks.\\n\\n\\n5. Why not use GN? \\n\\nWe considered GN to be, theoretically, an intermediate version of IN and LN; therefore, GN can be properly expressed by the \\rho value. The selection was based on whether the result would be closer to the target domain or more biased toward the source domain, rather than on the naturalness of the results. In Figure 3 (f), you can see more textures from the background of the source domain.\\n\\n6. Ablation Study\\n\\nAs shown in Table 1, it already includes an ablation study for the generator and discriminator separately, with KID. If space allows, we will add the image results.\\n\\n7. Discussion on the attention mechanism compared with other related works\\n\\nOur experimental results already include AGGAN [1], and we discuss it in the paper. \\n[1] Unsupervised-Attention-guided-Image-to-Image-Translation. NIPS\\u201918\"}", "{\"title\": \"Response to Reviewer #1\", \"comment\": \"We thank the reviewer for the valuable comments and constructive feedback. In the revised draft, we mark our major revisions in \\u201cviolet\\u201d, and would like to answer the reviewer\\u2019s questions as follows:\\n \\n*About Novelty:\\n\\n1. Attention mechanism \\n\\nAlthough we propose a similar attention concept, the goal and the way the attention is generated are different. Previous attention-based work [1] does not allow transforming the shape of the instance, because it attaches the background to the (translated) cropped instances. Unlike these works, our model is guided to focus on more important regions and to ignore minor regions by distinguishing between the target and non-target domains based on the importance map obtained by the auxiliary classifier. 
These attention maps are embedded into the generator and discriminator to focus on semantically important areas, thus facilitating the shape transformation. The attention map in the generator induces focus on areas that specifically distinguish between the two domains. The attention map in the discriminator helps fine-tuning by focusing on the differences between real and fake images in the target domain.\\n\\n2. Normalization \\n\\nAdaLIN tells the model how much it should transform. Instance Norm (IN) is capable of preserving the characteristics of the source image. Layer Norm (LN), which uses layer-wise feature statistics, is better at transforming to the target domain. We found that combining the advantages of both IN and LN is beneficial to the image-to-image translation task on various datasets, by controlling the amount of transformation. Though the idea borrows from previous work [2], the proposed method is, as far as we investigated, the first attempt to combine IN and LN in the image-to-image translation task.\\n\\n[1] Y. Alami Mejjati, C. Richardt, J. Tompkin, D. Cosker, and K. I. Kim. Unsupervised attention-guided image-to-image translation. In NIPS, 2018. \\n[2] H. Nam and H.-E. Kim. Batch-instance normalization for adaptively style-invariant neural networks. In NIPS, 2018. \\n\\n*Specific Comments:\\n\\n1. The formulation of AdaLIN in Equation (1)\\n\\nWe agree with the reviewer's comment. In the revised draft, we changed the text to \\\"parameters \\gamma and \\beta are dynamically computed by a fully connected layer from the attention map\\\" in Sec. 2.1.1.\\n\\n2. The motivation for using layer normalization \\n\\nWe assumed that the optimal stylization method was \\\"Whitening and Coloring Transform\\\". However, its computational cost is high due to the calculation of the covariance matrix and its inverse. To trade off computational cost against the quality of the result, we borrowed two sub-optimal normalization methods, AdaIN and Layer Normalization. 
During stylization, while AdaIN tends to keep more content information, Layer Normalization tends to make the stylization more pronounced at the cost of keeping less content information.\\n\\n3. The term \\\"important weights\\\"\\n\\nIn the revised draft, we modify it to \\\"the weight of the k-th feature map for the source domain\\\" in Sec. 2.1.1.\\n\\n*Questions for the authors:\\n\\n1. U-GAT-IT vs TransGaGa\\n\\nThe goal of U-GAT-IT is to change the shape of the foreground while maintaining the content of the background. TransGaGa deals with geometry and appearance separately and fully converts the appearance of the source domain into that of the target domain without considering the foreground and background. Therefore, as can be seen from the experimental results, the background of the source image is not maintained at all. However, U-GAT-IT can maintain or change the contents of the source domain adaptively through attention. In addition, through AdaLIN, U-GAT-IT can achieve good results in style transfer as well as in shape change. Therefore, we think U-GAT-IT is a more generalized version of TransGaGa.\\n\\n2. What are the shortcomings of the model and how could they possibly be addressed?\\n\\nThe main shortcoming of our model is its \\\"one-to-one mapping\\\". 
But in future work we will extend U-GAT-IT to be both multi-modal and multi-domain.\"}", "{\"title\": \"Response to Reviewer #2\", \"comment\": \"We thank the reviewer for the valuable comments and constructive feedback, and would like to answer the reviewer\\u2019s questions as follows:\\nThe perceptual study was conducted on 153 participants of an AI community, which includes a mix of experts and non-specialists.\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": [\"Summarize what the paper claims to do/contribute.\", \"The paper proposes a new image-to-image GAN-based translator that uses attention and a new normalization that learns a proper ratio between instance and layer normalization. Experiments benchmark the new method against multiple prior ones, and on a number of dataset pairs.\", \"Clearly state your decision (accept or reject) with one or two key reasons for this choice.\", \"Weak Accept\", \"The paper was well-written and the method and contributions are clearly explained.\", \"There is clear novelty in this paper, even if slightly limited. However, the newly proposed normalization seems to work quite well.\", \"The results look good; however, it is hard to compare methods quantitatively with only a few samples. (This is not something the authors could have fixed: there are many samples in the supplementary material and the results seem consistent.) Quantitative measures like FID and KID should also be taken with a grain of salt. It is a big plus that a user study was conducted! 
(However, details of how these subjects were selected would be useful)\"]}", "{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"I have read the authors' rebuttal and satisfied with their response. Novelty is a little on the lower side, but thorough writing, results, and insightful comparisons make up for this in my opinion. I have updated my score to 8: Accept.\\n\\n=====\\n\\nThis paper proposes an approach to perform image translation called U-GAT-IT. In image translation, the goal is to learn a mapping from images in a source domain to corresponding images in a target domain. Contemporary image translation approaches are able to transfer local texture but struggle to handle shape transfer. To address this concern, the authors introduce an attention mechanism based on CAM [1] and an adaptive normalization layer into a GAN-based image translation framework. Results indicate favorable quantitative and qualitative performance relative to a number of baselines.\", \"specific_contributions_include\": [\"Introduction of a normalization layer called AdaLIN that can interpolate between instance normalization and layer normalization based on the input.\", \"Introduction of an attention mechanism based on CAM [1] that allows the model to focus on specific parts of the image when either generating or discriminating.\", \"Collection and release of a selfie-to-anime dataset.\", \"Release of U-GAT-IT code.\", \"In my opinion this paper is borderline, leaning towards weak accept. The experiments are thorough and the paper is well-written. 
I have concerns about the novelty and significance of the work, but overall the paper feels very close to being a finished piece of work in spite of its (relatively minor) flaws.\", \"Strong points of this work include the writing and experiments. The paper is clearly organized and feels polished. It cites many relevant works, giving the reader a sense of the contemporary approaches for image translation. There is a thorough description of model architecture, dataset and tuning parameters in the appendix. In addition, code and the selfie-to-anime dataset have been released by the authors. In terms of experiments, the authors provide many qualitative visualizations comparing the proposed model to baselines on various datasets. Quantitative evaluation includes KID and a perceptual evaluation on human subjects.\", \"Weak points include novelty and significance. The proposed approach combines two ideas already applied to image translation (adaptive normalization [3] and attention [4]). It therefore synthesizes these ideas into an effective algorithm rather than directly adding something new. It is unclear to me how others can build on top of this work to further advance state-of-the-art in image translation. Are more sophisticated normalization and attention mechanisms truly the key to improving image translation in the future?\"], \"specific_comments\": [\"The formulation of AdaLIN in Equation (1) is vague. The text states \\\"parameters are dynamically computed by a fully connected layer from the attention map\\\", but it's not clear what those parameters are in the equation. 
Explicitly writing \\\\gamma and \\\\beta as functions of the fully-connected layer and \\\\mu_I, \\\\sigma_I, \\\\mu_L, \\\\sigma_L as the corresponding mean and standard deviation expressions would make things more clear.\", \"The motivation for using layer normalization was discussed in 2.1.1 but I still do not understand why it is beneficial.\", \"The term \\\"importance weights\\\" has a specific meaning in the context of Monte Carlo methods. I would suggest choosing a different term here.\"], \"questions_for_the_authors\": \"* How does U-GAT-IT compare to TransGaGa [2]? One of the stated goals of U-GAT-IT is to better handle shape when performing image translation. TransGaGa has a similar motivation and so I would have liked to see an experimental comparison or at the very least a description of how U-GAT-IT differs. What sorts of shape transfer could U-GAT-IT handle that TransGaGa couldn't and vice versa?\\n* What are the shortcomings of the model and how could they possibly be addressed? \\n \\n[1] Zhou, B., Khosla, A., Lapedriza, A., Oliva, A. and Torralba, A., 2016. Learning deep features for discriminative localization. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 2921-2929).\\n[2] Wu, W., Cao, K., Li, C., Qian, C. and Loy, C.C., 2019. Transgaga: Geometry-aware unsupervised image-to-image translation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 8012-8021).\\n[3] Huang, X. and Belongie, S., 2017. Arbitrary style transfer in real-time with adaptive instance normalization. In Proceedings of the IEEE International Conference on Computer Vision (pp. 1501-1510).\\n[4] Mejjati, Y.A., Richardt, C., Tompkin, J., Cosker, D. and Kim, K.I., 2018. Unsupervised attention-guided image-to-image translation. In Advances in Neural Information Processing Systems (pp. 
3693-3703).\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposes a new attention mechanism for the unsupervised image-to-image translation task. The proposed attention mechanism consists of an attention module and a learnable normalization function. Extensive experiments and analyses are done on five datasets.\", \"pros\": \"1. The proposed method seems to generalize well to different datasets with the same network architecture and hyper-parameters, compared to previous works. This could benefit other researchers who want to apply the method to other data or tasks.\\n2. The translated results seem more semantically consistent with the source image compared to other methods, although the scores are not the best on photo2portrait and photo2vangogh. The results also look more pleasing.\", \"cons\": \"1. The CAM loss is one of the key components of the proposed method. However, there is only a reference and no detailed description in the paper. A more intuitive description is necessary for easy understanding.\\n2. The local and global discriminators are not explained until the result analysis. It\\u2019s a bit confusing when I see the local and global attention map visualization results. It\\u2019s better to mention them in the method section.\\n3. I wonder why some translations are not done at all in the results without CAM in Figure 2(f), because without CAM the framework would be somewhat similar to MUNIT or DRIT. I suppose the hyper-parameters are not suitable for this setting.\\n4. The generator model architecture in Figure 1 is confusing. The adaptive residual blocks only receive the gamma and beta parameters. 
I suppose that the encoder feature maps are also fed into the adaptive residual blocks.\\n5. In Figure 3, a comparison of the results using each normalization function is reported. In my view, the results using only GN in the decoder with CAM look more natural. I wonder why the proposed method only consists of instance norm and layer norm? I suppose group norm might help, with a predefined group.\\n6. In the ablation study, the CAM is evaluated for the generator and discriminator together. I would recommend doing this ablation study for the generator and discriminator separately, to see whether it\\u2019s necessary for the generator or the discriminator.\\n7. It would be good to see some discussion of the attention mechanism compared with other related works. For example, [a,b] predict attention masks for unsupervised I2I, but apply them at the pixel/feature spatial level to keep semantic consistency.\\n[a] Unsupervised-Attention-guided-Image-to-Image-Translation. NIPS\\u201918\\n[b] Exemplar guided unsupervised image-to-image translation with semantic consistency. ICLR\\u201919\\n\\nMy initial rating is above borderline.\"}", "{\"comment\": \"[NOTE BY PROGRAM CHAIR: the urls were redacted to preserve the anonymity. this submission does *not* violate the ICLR's policy on double blind reviewing. see https://iclr.cc/Conferences/2020/CallForPapers which states \\\"... papers that have appeared on non-peered reviewed websites (like arXiv) or that have been presented at workshops (i.e., venues that do not have a publication proceedings) do not violate the policy. The policy is enforced during the whole reviewing process period. Submission of the paper to archival repositories such as arXiv are allowed.\\\"]\\n\\nIt seems like double-blind review is violated. 
This paper was made public a long time ago (25 July 2019).\\n\\n[URL REDACTED]\\n\\nAll author names and their affiliations appear at the top of the paper.\\nFurthermore, there is an official implementation of this model on GitHub for both the Tensorflow (4k stars) and Pytorch (1.3k stars) frameworks, by the first author of the paper [REDACTED] and his co-author [REDACTED].\", \"title\": \"[REDACTED BY PROGRAM CHAIR] Double-blind review is violated\"}" ] }
BJe-91BtvH
Masked Based Unsupervised Content Transfer
[ "Ron Mokady", "Sagie Benaim", "Lior Wolf", "Amit Bermano" ]
We consider the problem of translating, in an unsupervised manner, between two domains where one contains some additional information compared to the other. The proposed method disentangles the common and separate parts of these domains and, through the generation of a mask, focuses the attention of the underlying network to the desired augmentation alone, without wastefully reconstructing the entire target. This enables state-of-the-art quality and variety of content translation, as demonstrated through extensive quantitative and qualitative evaluation. Our method is also capable of adding the separate content of different guide images and domains as well as remove existing separate content. Furthermore, our method enables weakly-supervised semantic segmentation of the separate part of each domain, where only class labels are provided. Our code is available at https://github.com/rmokady/mbu-content-tansfer.
[ "domains", "unsupervised content transfer", "separate content", "masked", "problem", "translating", "unsupervised manner", "additional information", "common", "separate parts" ]
Accept (Poster)
https://openreview.net/pdf?id=BJe-91BtvH
https://openreview.net/forum?id=BJe-91BtvH
ICLR.cc/2020/Conference
2020
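The abstract above describes generating a mask that focuses the network on the desired augmentation alone, rather than wastefully reconstructing the entire target. As a point of reference for the loss discussion in the record below, the general idea of mask-based composition can be sketched as follows. This is an illustrative sketch under assumed names and shapes, not the authors' code: the key property it shows is that an empty mask reproduces the input exactly, which is why reconstruction losses can drive the mask toward being minimal on the domain without separate content.

```python
import numpy as np

def blend_with_mask(base, generated, mask):
    """Mask-based composition: take `generated` content where the mask is on
    and keep the `base` image elsewhere, so the network only has to
    synthesize the added (separate) content rather than the whole target."""
    mask = np.clip(mask, 0.0, 1.0)
    return mask * generated + (1.0 - mask) * base

def recon_loss(output, target, norm="l1"):
    """Mean L1 (or L2) reconstruction penalty. With an all-zero mask,
    blend_with_mask returns `base` exactly, so reconstructing an image
    from itself makes this loss vanish."""
    diff = output - target
    return np.abs(diff).mean() if norm == "l1" else (diff ** 2).mean()
```

In this sketch an L1 penalty is used by default because it encourages sparse residuals, which matches the intuition that sparse losses support well-localized masks.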
{ "note_id": [ "vCnL7BiIIy", "SkgBIOljjr", "S1eCIHHcoH", "Bygy8Up4jH", "HJlLRHaNoB", "BylcPH6NoB", "BJxWXS6VsB", "S1g1wmaEsr", "BylwOzYpqr", "S1xJ_D2jtr", "H1eBm1OYOB" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798734478, 1573746764653, 1573700949840, 1573340743289, 1573340622054, 1573340514079, 1573340440758, 1573339991398, 1572864622795, 1571698535471, 1570500380798 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1866/Authors" ], [ "ICLR.cc/2020/Conference/Paper1866/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper1866/Authors" ], [ "ICLR.cc/2020/Conference/Paper1866/Authors" ], [ "ICLR.cc/2020/Conference/Paper1866/Authors" ], [ "ICLR.cc/2020/Conference/Paper1866/Authors" ], [ "ICLR.cc/2020/Conference/Paper1866/Authors" ], [ "ICLR.cc/2020/Conference/Paper1866/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper1866/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper1866/AnonReviewer1" ] ], "structured_content_str": [ "{\"decision\": \"Accept (Poster)\", \"comment\": \"This paper extends the prior work on disentanglement and attention-guided translation to instance-based unsupervised content transfer. The method is somewhat complicated, with five different networks and a multi-component loss function; however, the importance of each component appears to be well justified in the ablation study. Overall, the reviewers agree that the experimental section is solid and supports the proposed method well. It demonstrates good performance across a number of transfer tasks, including transfer to out-of-domain images, and shows that the method outperforms the baselines. 
For these reasons, I recommend the acceptance of this paper.\", \"title\": \"Paper Decision\"}", "{\"title\": \"L_{recon2}^{A} and L_{recon2}^{B}\", \"comment\": \"Thank you for your question!\\n\\nThe mask in z(a,a) is generated using the encodings of the common and specific parts of a, and z(b,b) uses the encodings of the common and specific parts of b. Also, a and b are not symmetrical, and since images in A do not contain the specific part, their mask should be minimal.\\n\\nThe loss L_{Recon2}^A encourages a minimal distance between z(a,a) and a. Similarly, L_{Recon2}^B encourages a minimal distance between z(b,b) and b. \\n\\nTherefore, in L_{Recon2}^A the mask is generated from the encoding of an \\u201cempty\\u201d specific part, while in L_{Recon2}^B, the mask is based on the encoding of a non-trivial specific part. \\n\\nIn the ablation, the mask difference is evaluated for the case of images from domain B, where the mask should be non-empty (we specify that \\u201can ablation analysis is performed\\u2026 for the task of facial hair content transfer\\u201d, and this will be further clarified in the next revision). Therefore, the loss L_{Recon2}^B is both more relevant to the mask being tested and more relevant to the generation of non-trivial masks. This makes its impact more substantial.\"}", "{\"title\": \"Thanks for the response\", \"comment\": \"Thanks for the detailed response.\\n\\nThe two reconstruction losses introduced in Equation (8) basically encourage the model to ignore z^{raw} and generate an empty mask, so as to perfectly reconstruct the image from itself. Their effects on the mask size should be roughly the same. But in Table 7, all the metrics are comparable except for the mask size when comparing without L_{recon2}^{A} and without L_{recon2}^{B}. 
Why does L_{recon2}^{B} have a larger impact on the mask size?\"}", "{\"title\": \"A revision following the feedback from the reviewers\", \"comment\": \"In the revised version, we have corrected the typos pointed to by the reviewers and addressed all requests for elucidations.\\n\\nIn addition, following the reviews we have added the following: First, we added a discussion of the choice and sensitivity of the lambda coefficients including Fig. 39 and Fig. 40 shows the sensitivity of $L_{Recon2}^{A}$ (Eq. 8) and $L_{Recon1}^{A}$ (Eq. 5) losses as well as the choice of the threshold for mask binarization presented in Fig. 38. As the results show, the method is largely robust to its parameters.\\n\\nSecond, we further extend our ablation analysis with regards to the choice of L1 norm vs L2 norm in our losses. \\n\\nThird, further discussion was added to the Method section and Ablation Analysis section, with regards to the effect of the loss terms in Eq. (3), (5), (7) and (8) and the mask learned. We have also added a discussion with regard to the formulation of the domain confusion loss.\"}", "{\"title\": \"Response to Review #1\", \"comment\": \"We thank the reviewer for the comments. We apologize for the citation errors and have corrected these in the revised version.\"}", "{\"title\": \"Response to Review #2 Part 2\", \"comment\": \"With regards to the sensitivity of the lambda coefficients in our loss, the values of the coefficients were set early on in the development process in a way that reflects the relative importance we attributed to each component and the observed trade-offs. For example, if the mask obtained was too large we would increase $L_{Recon2}$ (Eq. 8). As illustrated in Fig. 39, our network is not overly sensitive to the choice of these values. For example, for $L_{Recon2}^{A}$ (Eq. 8) loss each value in the range 0.4-1.0 results in a similar output and for $L_{Recon1}^{A}$ (Eq. 
5) each value in the range 3.0-7.0 results in a similar output.\"}", "{\"title\": \"Response to Review #2 Part 1\", \"comment\": \"We thank the reviewer for the supportive review. We address the comments one by one below. Please let us know if there are any further concerns.\\n\\nWith regards to the choice of L2 instead of L1 for the reconstruction losses, this is an unfortunate typo made in the manuscript, which is now corrected. We used L1 norm for both $L_{Recon2}$ (Eq. 8) and $L_{Recon1}$ (Eq. 5), (Eq. 6) and used L2 norm for the cycle loss (Eq. 9) as can be seen in the supplementary code submitted with the paper.\\n\\nWe added an experiment to the ablation analysis where for loss $L_{Recon1}$ (Eq. 5),(Eq. 6) and loss $L_{Recon2}$ (Eq. 8) the L1 norm is replaced with the L2 norm. We found the result to be comparable for the losses in Eq. 5 and Eq. 6 and inferior for the one in Eq. 8. Intuitively, since L1 encourages sparsity better than L2, it supports well localized masks. The revised version addresses this in Ablation Analysis section 4.1\\n\\nOur formulation of the domain confusion loss is similar to that of \\u201cAdversarial Discriminative Domain Adaptation\\u201d (Tzeng et al. 2017) except where for Eq. 3, $E_c$ (the encoder of the common part) attempts to fool the discriminator, so that the encodings of both domain A and domain B would be classified as 1. Namely, the discriminator tries to distinguish between encodings of domain A and B, while $E_c$ attempts to produce an encoding which is indistinguishable for the discriminator.\\n\\nWhile the loss in Eq. 7 directly affects the mask generation, the losses in Eq. 3 and Eq. 5 affect it indirectly. Without the loss in Eq. 3, no disentanglement is possible, and the common encoder would contain all of the image information including the separate information. This means that the image produced by $D_A(E_c(b))$ is close to b and, therefore, the generated mask is empty. 
\\n\\nFurthermore, without the loss of Eq. 5, we empirically observe that $D_A(E_c(b))$ outputs the image with the specific part intact (for example, the facial hair is not removed). This indirect effect on the disentanglement probably stems from the fact that without this loss, there is reconstruction only on faces with facial hair (the specific part). Thus, $E_c$ can encode generic facial hair information for shaved faces and have $E_c(b)$ and $E_c(a)$ still indistinguishable. Eq. 5 makes sure that $E_c$ won\\u2019t encode facial hair for shaved faces, since it requires reconstruction of an image without facial hair. This is an interesting phenomenon and is worth investigating as future work.\\n\\nThe loss introduced in the first term of Eq. 8 ($L_{Recon2}^A$) encourages a minimal distance between $z(a,a)$ and $a$, where\\n $z(a,a)= z^{raw}(a,a) \\\\otimes m(a, a) + a\\\\otimes(1-m(a,a))$ . Ideally, $z^{raw}$ would be equal to a, but since we use an encoder and a decoder which cannot auto-encode perfectly, we get that there is some distance between $z^{raw}$ and a. Hence, in order to minimize the distance between $z(a,a)$ and a, the network minimizes the size of the mask. Similar argument holds for $L_{Recon2}^B$.\\n\\nWith regards to running the loss term introduced in (Press et al., 2019) on top of the network introduced in Fig. 2, we found that the network fails in this case. The loss function of (Press et al., 2019) contains a domain confusion term that is equivalent to our domain confusion loss and a reconstruction loss which is $||D(E_c(a)) - a||_1$ + $||D(E_c(b),E_s(b)) - b||_1$. Note that the first term of this reconstruction loss is identical to Eq. 5. The second term needs to be adjusted to a mask formulation, as is done in Eq. 6. As we show quantitatively in the ablation study, without further regularization, the mask produced is very large and the output is of low quality. \\n\\nThe model trained to add glasses would not add a mustache. 
As future work, it may be interesting to study generalization beyond the attributes seen during training by learning multiple attributes at once. \\n\\nRegarding previous methods such as Mejjati et al, we thank the reviewer for this observation. We have changed the text to emphasize that the adaptation of the previous work is not performed on the specific information of a guide image, as is in our case.\\n\\nWith regards to the binary mask threshold, most values are very close to 0 or 1. Early on during the experiments we found the value of 0.6 is visually appealing and we kept it for all experiments. See fig. 38 for the effect of changing this threshold. As can be seen, the change is minimal.\\n\\nAs shown in Tab. 7, there is a 6% difference in the size of the mask when $L_{Cycle}$ is removed. Indeed other losses further affect the size of the mask. \\n\\nFollowing the review, the citation style has been corrected in the revised version.\\n\\nFor L2_reg, please refer to the discussion at the end of the ablation analysis of section 4.1 (last 10 lines before last paragraph).\"}", "{\"title\": \"Response to Review #3\", \"comment\": \"We thank the reviewer for the supportive review. We address the comments below. Please let us know if there are any further concerns.\", \"for_the_choice_of_lambda_coefficients\": \"these were set early on in the development process in a way that reflects their relative importance and were observed. For example, if the mask obtained was too large we would increase $L_{Recon2}$ (Eq. 8).\\n\\nAs illustrated in Fig. 39, our network is not overly sensitive to the choice of these values. For example, for $L_{Recon2}^{A}$ (Eq. 8) each value in the range 0.4-1.0 results in a similar output and for $L_{Recon1}^{A}$ (Eq. 5) each value in the range 3.0-7.0 results in a similar output.\\n\\nWe constructed the train/test sets using 90\\\\%-95\\\\% split (see ablation section A). 
This consists of about 7,200-18,000 examples for train and about 800-2,000 examples for test for each attribute. \\n\\nAs for the possibility of overfitting, we observed the same performance on the train and test sets, with no noticeable difference between the two.\\n\\nWith regards to potential bias, as far as we observed, there was no bias toward specific shape or appearance, and we did not observe mode-collapse in the form of repeated appearance elements.\"}", "{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposes a method to disentangle the common and separate parts of two domains and to focus the attention of the underlying network to the desired part only, without reconstructing the entire target. The proposed method is also able to add or remove separate contents, and to enable weakly-supervised semantic segmentation of the separate part of each domain.\\nThis work relates to the problem of content transfer between images. The proposed method consists of five networks: one encoder for common domain invariant features, one encoder for separate domain specific information, one network for mapping encodings from common features from both domains undistinguishable, a decoder that generates sample in the origin domain and a decoder that generates the image that combines content from the origin image and domain specific content from the target image. This last decoder also outputs a mask that focuses the attention of the model to the specific part.\", \"the_proposed_model_is_trained_using_a_combination_of_different_losses\": \"domain confusion loss, reconstruction losses, and cycle consistency losses. Ablation studies reported in the paper nicely show the contribution of each loss. 
The final loss is obtained by a weighted sum of the losses: how are the lambda coefficient chosen/learned?\\nThe proposed method is evaluated on guided content transfer, out of domain manipulation, attribute removal, sequential content transfer, sequential attribute removal and content addition, weakly supervised segmentation of the domain specific content. Experimental results are clear, thorough and satisfactory, both quantitative and qualitative results are reported, as well as a user study. Presented results demonstrate the strengths and limitations of the proposed approach, and the analysis of the results helps understanding and emphasizing the contribution of the paper.\\nInformation in appendix also enable reproducibility by providing parameters and architecture structure.\", \"other_comments\": \"did you observe overfit for some choice of parameter? is your validation/test set large enough for evaluating results? did you observe biases in your method (e.g. to specific features/domain specific information)?\"}", "{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #2\", \"review\": \"This work proposed a mask based approach for instance-level unsupervised content transfer, which is an extension of the disentanglement work in (Press et al., 2019) and the attention guided translation (Chen et al., 2018, Mejjati et al., 2018). Unlike the disentanglement work, the introduced mask allows the adaptation to focus on the relevant content which substantially reduce the complexity of the generation. On the other hand, the proposed method extends the attention guided translation from the domain level to the instance level which allows more specific and diverse translations. Experiments on benchmark data shows both improved qualitative and quantitative results comparing to existing methods. 
It is really nice that the authors also considered the situation of generalization to out of domain images.\\n\\nHowever, I would encourage the authors to spend more discussion on the \\\"Method\\\" and \\\"Ablation Analysis\\\" sections to give a better illustration. First is the choice of the L2 norm in all the reconstruction losses, which is different from L1 norm used in both (Press et al., 2019) and (Mejjati et al., 2018). What is the advantage of using L2 instead of L1 norm here? Does it work better with the mask generation? Second, the domain confusion loss. The presence of both equation (3) and (4) are quite confusing and the domain confusion loss (3) seems different from traditional ones. In Table 7, it shows that the learned mask is empty without any of the losses (3), (5), (7). But only loss (7) is directly related to the mask generation. How does the loss (3) or (5) impact the mask learning? It is also unclear why the losses introduced in (8) would encourage the mask to be minimal despite the quantitative results shown in Table 7. Actually, I am very curious about the performance of the loss introduced in (Press et al., 2019) on top of the network introduced in Figure 2.\", \"other_comments\": [\"It would be nice to see the out of domain transfer in the \\\"attribute\\\" domain. Ideally, the network should be able to detect \\\"difference\\\" in the image from domain B and apply it to the image from domain A. For example, the model is trained on faces without and with glasses, but applied to faces without and with facial hair. Indeed, the introduction of mask alleviates the decoder to learn the attribute itself, and provides the ability to locate the place of difference.\", \"Please unify the citation style: there are both Press et al. 
(2019) and (Press et al., 2019) used.\", \"In Section 2 under \\\"Mask Based Approaches\\\", the authors argued that the existing attention guided translation \\\"does not allow for the adaptation of the image information in the masked area\\\". I do not think this is the case. The existing work also introduced adaptation of the image information in the masked area. For example, the equation (1) in (Mejjati et al., 2018).\", \"How is the binarized mask generated in inference? Specifically, how to determine the threshold?\", \"In Section 4.1, the authors argued that \\\"without L_{Cycle} the masks produced include larger portions of the face\\\". But this actually produces the second smallest mask in Table 7.\", \"What is the \\\"L2 reg\\\" in Table 7?\", \"It would be good to show the sensitivity of the lambdas in the overall loss.\"]}", "{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I made a quick assessment of this paper.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper proposes a method for unpaired image-to-image translation, where the target domain explicitly contains some additional information than the source domain. The authors use auto-encoders to separate the common and specific representations and to generate masks, which seems to be related to [1]. The authors empirically show the proposed method can be used for image translation, attribute editing.\", \"a_small_citation_error\": \"\\\"Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A Efros. Unpaired image-to-image translation using cycle-consistent adversarial networkss. arXiv preprint arXiv:1703.10593, 2017a.\\\"\\nnetworkss -> networks\\nIt is a published paper at ICCV, not just on arxiv.\\n\\n[1] Domain Separation Networks, Bousmalis et.al, NIPS 2016\"}" ] }
rkgl51rKDB
Efficient meta reinforcement learning via meta goal generation
[ "Haotian Fu", "Hongyao Tang", "Jianye Hao" ]
Meta reinforcement learning (meta-RL) is able to accelerate the acquisition of new tasks by learning from past experience. Current meta-RL methods usually learn to adapt to new tasks by directly optimizing the parameters of policies over primitive actions. However, for complex tasks which requires sophisticated control strategies, it would be quite inefficient to to directly learn such a meta-policy. Moreover, this problem can become more severe and even fail in spare reward settings, which is quite common in practice. To this end, we propose a new meta-RL algorithm called meta goal-generation for hierarchical RL (MGHRL) by leveraging hierarchical actor-critic framework. Instead of directly generate policies over primitive actions for new tasks, MGHRL learns to generate high-level meta strategies over subgoals given past experience and leaves the rest of how to achieve subgoals as independent RL subtasks. Our empirical results on several challenging simulated robotics environments show that our method enables more efficient and effective meta-learning from past experience and outperforms state-of-the-art meta-RL and Hierarchical-RL methods in sparse reward settings.
[ "new tasks", "past experience", "methods", "primitive actions", "subgoals", "efficient meta reinforcement", "meta goal generation", "reinforcement learning", "able" ]
Reject
https://openreview.net/pdf?id=rkgl51rKDB
https://openreview.net/forum?id=rkgl51rKDB
ICLR.cc/2020/Conference
2020
{ "note_id": [ "f5ddPv_r1", "HkxcTp05iS", "r1x9iTAcsH", "r1lfqFR5iS", "r1lVZ2-ZqH", "rkg-Q1gy9S", "SyllxgTtFS" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798734448, 1573739970040, 1573739938300, 1573738890140, 1572047868431, 1571909400855, 1571569640464 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1865/Authors" ], [ "ICLR.cc/2020/Conference/Paper1865/Authors" ], [ "ICLR.cc/2020/Conference/Paper1865/Authors" ], [ "ICLR.cc/2020/Conference/Paper1865/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper1865/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper1865/AnonReviewer1" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"This paper combines PEARL with HAC to create a hierarchical meta-RL algorithm that operates on goals at the high level and learns low-level policies to reach those goals. Reviewers remarked that it\\u2019s well-presented and well-organized, with enough details to be mostly reproducible. In the experiments conducted, it appears to show strong results.\\n\\nHowever, there was strong consensus on two major weaknesses that render this paper unpublishable in its current form: 1) the continuous control tasks used don\\u2019t seem to require hierarchy, and 2) the baselines don\\u2019t appear to be appropriate. Reviewers remarked that a vital missing baseline is HER, and that it\\u2019s unfair to compare to PEARL, which is a more general meta-RL algorithm. 
The authors don\\u2019t appear to have made revisions in response to these concerns.\\n\\nAll reviewers made useful and constructive comments, and I urge the authors to take them into consideration when revising for a future submission.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Thank you for carefully reviewing the paper\", \"comment\": \"We appreciate the reviewer's valuable and constructive reviews. We will improve our paper as suggested.\"}", "{\"title\": \"Thank you for carefully reviewing the paper\", \"comment\": \"We appreciate the reviewer's valuable and constructive reviews. We will improve our paper as suggested.\"}", "{\"title\": \"Thank you for carefully reviewing the paper\", \"comment\": \"We appreciate the reviewer's valuable and constructive reviews. We will improve our paper as suggested.\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"Update 11/21\\nI maintain my score. I like the idea and hope the authors improve the paper and submit to a future conference.\\n\\nSummary\\nThis paper combines hierarchical RL with meta-learning. The idea is that high-level plans transfer across settings (e.g. picking up a mug), while low-level execution may differ across tasks (e.g. different robot morphologies). To this end, the approach meta-learns a two-level hierarchical policy. The higher level policy conditions on a latent task context to produce high-level actions, or goals for the lower level policy. The lower level policy is trained via HER to reach these goals (it may need to be completely re-trained at test time). \\n\\nConcerns and Questions\\nI am very concerned about the experimental results. 
I do not think that these tasks require hierarchy to solve, as the exact same tasks (with the same simulated robot) were solved in Hindsight Experience Replay, Andrychowicz al. 2017. Thus HER (preferably implemented with SAC rather than DDPG for fair comparison) is a vital baseline that is missing from Figure 3. Could the authors please address this point?\\nWhile the introduction and Section 4 claim that one important benefit of such a hierarchical approach is that one could transfer to more disparate tasks, there are no experiments supporting this idea. I think the addition of these experiments would greatly strengthen the paper.\\nIn Figure 3, does \\u201cPEARL with sparse reward\\u201d refer to only the encoder receiving sparse rewards or also the actor-critic?\\n\\nWriting suggestions\", \"a_suggestion_about_the_title\": \"consider including the word \\u201chierarchical\\u201d\\nIn some places the writing is quite informal, I suggest revising it (in the intro: \\u201cDRL barely works\\u201d.\\nI disagree with the sentence in the intro \\u201cIntuitively this is quite similar to how a human behaves\\u201d, which is said in support of the idea of transferring high-level goals instead of low-level execution. Human behavior also supports the opposite view - that we reuse primitive motions over and over in support of new goals. 
So I think it\\u2019s best to avoid the appeal to human behavior here (as well as in related work).\\nThe first paragraph of section 4.2 is redundant and can be removed, or at least moved to the beginning of Section 4.\\n\\nIn conclusion, my current impression is that while the idea is interesting, the results achieve the same performance as a non-hierarchical method, which is not included as a baseline.\"}", "{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #3\", \"review\": \"### Summary ###\\n\\nIn this paper, the authors focus on the problem of meta-reinforcement learning (meta-RL). Specifically, the authors consider the setting of meta-RL for goal reaching tasks where each task corresponds to an unknown goal. Existing meta-RL algorithms directly train for a policy that output low level actions, which might be inefficient in this goal-reaching setting. In this paper, the authors combine the hierarchical RL framework of HAC[1] with the probabilistic task context inference method of PEARL[2], and propose the meta-goal generation for hierarchical RL (MGHRL) algorithm. In this algorithm, a two layer hierarchical policy is used where the high level policy generate goals for the low level goal-reaching policy to reach. In order to adapt to an unknown goal, the high level policy is conditioned on the output of a task inference module to generate goals for the unknown ground truth goal. The goal-reaching policy would then use the generated goal to interact with the environment.\\n\\nThe authors evaluated the proposed method on simulated robotic manipulation tasks and compare to PEARL as baseline. 
The experiment results show that the proposed method outperforms the baseline method significantly, especially under sparse reward settings.\\n\\n\\n### Review ###\\n\\nOverall I think this paper presents an interesting idea in learning fast adapting goal-reaching policies. The idea is very well presented and authors include many empirical evidence to support the proposed method. However I do find a number of shortcomings that need to be addressed.\", \"pro\": \"1. The idea for this paper is really well presented. The structure of the paper is well organized and the authors include informative illustration to explain the architecture of the hierarchy of policies. The experiment results are easy to interpret.\\n\\n2. The authors provide a detailed description of the configurations and the hyperparameters for each experiments. Such description would be very helpful if the results in this paper are to be reproduced.\", \"con\": \"1. The experiments presented in this paper do not include appropriate comparisons to baseline methods. While indeed the proposed method outperforms PEARL, this comparison is inherently unfair. PEARL is a general meta-RL algorithm, which can adapt to arbitrary variations of reward functions and dynamics in the distribution of tasks. The proposed method only applies in the setting of goal reaching meta-RL, where each task corresponds to an unknown goal. With this information artificially encoded into the hierarchical architecture, the proposed method should certainly perform better than any general meta-RL algorithms. Therefore, directly comparing the proposed method to any general meta-RL algorithm is unfair. Instead, the authors could compare with baseline methods with builtin goal-reaching components, such as the following one: train a goal reaching policy using HER[3], and then meta-train a goal conditioned reward function using standard supervised meta-learning methods. 
At test time, find the goal that maximizes the adapted reward function, and then feed that goal into the HER policy for evaluation. Note that this baseline is different from the proposed method in the way that the goal reaching and goal inference were done separately both using existing methods.\\n\\n2. I\\u2019m not convinced about the novelty of this paper. The proposed method seems like a straightforward combination of HAC and PEARL, and it seems to me that the two methods are combined in order to apply an existing meta-RL algorithm in a goal reaching setting rather than to create a better general meta-RL algorithm.\\n\\nThe idea in the paper is well presented and carefully investigated. However, I am still not convinced about the novelty of the proposed idea and the magnitude of performance improvement given the lack of proper baselines. Therefore, I would not recommend acceptance before these problems are addressed. \\n\\nReferences\\n\\n[1] Levy, Andrew, et al. \\\"Learning multi-level hierarchies with hindsight.\\\" (2018).\\n\\n[2] Rakelly, Kate, et al. \\\"Efficient off-policy meta-reinforcement learning via probabilistic context variables.\\\" arXiv preprint arXiv:1903.08254 (2019).\\n\\n[3] Andrychowicz, Marcin, et al. \\\"Hindsight experience replay.\\\" Advances in Neural Information Processing Systems. 2017.\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper studies the problem of leveraging past experience to quickly solve new control tasks. The starting point (and perhaps the main contribution) is the observation that some tasks have similar high-level goals, while differing in how those goals are achieved. 
To that end, the paper introduces an meta-RL algorithm that, given a new task, attempts to solve it by adapting a high-level, goal-setting module, and learn a new, low-level policy to reach each commanded goal. The proposed method might be viewed as a combination of PEARL [Rakelly 19] and HAC [Levy 19]. The proposed method is compared against state-of-the-art hierarchical RL and meta-RL methods on four robotic manipulation tasks. The proposed method outperforms the baselines on each task.\\n\\nWhile the proposed method is quite strong empirically, I am leaning towards rejecting this paper because many of the claims made in the paper are not empirically validated. While much emphasis is put on the hierarchical aspect of the algorithm, I don't think that the tasks used in the experiments require hierarchy to solve (see [Plappert 18]). In the introduction, the second claim is that the proposed method \\\"focus[es] on meta learning the overall strategy \\u2026 [and] provides a simpler and better way for meta RL.\\\" While the experiments show that the proposed method learns better than baselines, I don't think the paper show that the proposed method learns some sort of \\\"overall strategy.\\\" I don't think that the proposed method is simpler than the baselines.\\n\\nA second concern is that I'm confused about the experimental protocol. If the high-level policy outputs a desired XYZ position for the gripper (Section 5.1), how can the high-level policy indicate when the gripper should be closed to pick up the block? How is the reward function for the low-level policy defined? 
PEARL doesn't have access to this extra information (the reward function), right?\\n\\nA third concern is the large number of grammatical errors in the paper.\\n\\nI would consider increasing my review if (1) a new plot were added to visualize the commanded subgoals (I have a hunch that the high-level policy directly outputs the true goal, obviating the need for hierarchy and contradicting the claim that the method \\\"learns to generate high-level meta-strategies over subgoals\\\"); (2) the experimental protocol were clarified; and (3) the number of grammatical errors were significantly reduced.\", \"other_comments\": [\"\\\"inefficient to to directly learn such a meta policy\\\" -- Why? Also, \\\"to to\\\" is repeated.\", \"\\\"Deep Reinforcement learning\\\" -- \\\"Reinforcement\\\" shouldn't be capitalized.\", \"\\\"failing to generalize\\\" -- Can you add a citation?\", \"\\\"it would be quite inefficient to directly learn such a policy\\u2026\\\": Doesn't [Plappert 18] do exactly this?\", \"\\\"When the tasks distribution is much wider \\u2026 these methods can hardly be effective\\u2026\\\" -- Where is this claim substantiated? Also, \\\"tasks\\\" should be singular.\", \"\\\"sparse reward settings which is\\\" -> \\\"sparse reward settings, which are\\\"\", \"\\\"the above mentioned problems\\\" -> \\\"the problems mentioned above\\\"\", \"\\\"our algorithm focus on meta learning \\u2026 which provides a much simpler \\u2026\\\" -- Where is it shown that the proposed method is simpler? Also, \\\"focus\\\" should be \\\"focuses\\\"\", \"\\\"1991),which\\\" -- Missing space\", \"\\\"complex tasks which requires\\\" -> \\\"complex tasks that require\\\"\", \"\\\"Nachum et al \\u2026 set of sub-policies\\\" -- Run on sentence.\", \"\\\"... human leverage\\u2026\\\" -- Don't humans also transfer low-level knowledge across tasks, in addition to high-level knowledge? 
Also, \\\"human\\\" should be plural.\", \"\\\"algorithms.The\\\" -- Missing a space, I think.\", \"\\\"PEARL leverages \\u2026 latent variable Z\\\" -- This sentence doesn't make sense as written.\", \"\\\"z's\\\" -- This should not be a possessive.\", \"\\\"Good sample efficiency enables fast adaptation \\u2026 and performs structured exploration \\u2026\\\" -- Isn't the first part true by definition? Why does good sample efficiency perform structured exploration?\", \"\\\"a goal is a 3-d vector\\\" -- If the goal output by the high-level policy is the XYZ coordinates of the\", \"\\\"SAC, such non-hierarchical\\\" -- Grammar doesn't make sense here.\", \"\\\"such non-hierarchical RL method has been proved to perform badly before on \\u2026\\\" -- Can you add a citation? Generally, \\\"proved\\\" is reserved for mathematical proofs.\", \"\\\"In this paper, We have\\\" -- \\\"We\\\" should not be capitalized.\", \"------------------------ UPDATE AFTER AUTHOR RESPONSE ------------------\", \"I thank the authors for at least reading the reviews. My concerns with experiments and clarify remain unaddressed, and are amplified by reading the other reviews. I therefore vote to \\\"reject\\\" this paper.\"]}" ] }
B1elqkrKPH
Learning robust visual representations using data augmentation invariance
[ "Alex Hernandez-Garcia", "Peter König", "Tim C. Kietzmann" ]
Deep convolutional neural networks trained for image object categorization have shown remarkable similarities with representations found across the primate ventral visual stream. Yet, artificial and biological networks still exhibit important differences. Here we investigate one such property: increasing invariance to identity-preserving image transformations found along the ventral stream. Despite theoretical evidence that invariance should emerge naturally from the optimization process, we present empirical evidence that the activations of convolutional neural networks trained for object categorization are not robust to identity-preserving image transformations commonly used in data augmentation. As a solution, we propose data augmentation invariance, an unsupervised learning objective which improves the robustness of the learned representations by promoting the similarity between the activations of augmented image samples. Our results show that this approach is a simple, yet effective and efficient (10 % increase in training time) way of increasing the invariance of the models while obtaining similar categorization performance.
[ "deep neural networks", "visual cortex", "invariance", "data augmentation" ]
Reject
https://openreview.net/pdf?id=B1elqkrKPH
https://openreview.net/forum?id=B1elqkrKPH
ICLR.cc/2020/Conference
2020
{ "note_id": [ "ZEbtWjZuMc", "SkgrSdX3iB", "r1gkbvQ3iH", "SyeKcLQhiH", "BklwIQg0cH", "rylAHbq7cS", "Syxxveg0tH" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798734417, 1573824573023, 1573824247035, 1573824144951, 1572893518620, 1572213061883, 1571844184301 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1864/Authors" ], [ "ICLR.cc/2020/Conference/Paper1864/Authors" ], [ "ICLR.cc/2020/Conference/Paper1864/Authors" ], [ "ICLR.cc/2020/Conference/Paper1864/AnonReviewer4" ], [ "ICLR.cc/2020/Conference/Paper1864/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper1864/AnonReviewer3" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"This paper introduces an unsupervised learning objective that attempts to improve the robustness of the learnt representations. This approach is empirically demonstrated on cifar10 and tiny imagenet with different network architectures including all convolutional net, wide residual net and dense net. Two of three reviewers felt that the paper was not suitable for publication at ICLR in its current form. Self-supervision based on preserving network outputs despite data transformations is a relatively minor contribution, the framing of the approach as inspired by biological vision notwithstanding. Several references, including one from a past ICLR:\\nhttp://openaccess.thecvf.com/content_CVPR_2019/papers/Kolesnikov_Revisiting_Self-Supervised_Visual_Representation_Learning_CVPR_2019_paper.pdf\\nand\\nGidaris, P. Singh, and N. Komodakis. Unsupervised representation learning by predicting image rotations. 
In International Conference on Learning Representations (ICLR), 2018.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Some new results and feedback incorporated\", \"comment\": \"We first sincerely thank the reviewer for their feedback. We especially appreciate the interesting suggestions.\\n\\n\\\"I do, however, rate it as Weak Accept only for one reason: I would expect that making the model more robust should improve classification accuracy. But according to the paper, accuracy does not improve (and even degrades slightly). The paper does not experimentally demonstrate that the proposed methods objectively improves the model.\\\"\", \"we_have_a_few_comments_in_this_regard\": [\"1) We have 5 models to compare in Table 1 and only in two cases did the accuracy slightly degrade: on DenseNet on CIFAR by -0.9; on WRN on Tiny ImageNet by -0.26 (but the top5 accuracy improved!). In the rest of the cases, the accuracy improved: All-CNN on CIFAR +0.99; WRN on CIFAR by +0.28; All-CNN on Tiny ImageNet by +1.48 (and top5 by 3.05!).\", \"2) The goal of the method was not to improve the accuracy, but to increase the robustness of the features. As pointed out by the reviewer, \\\"since the model is no longer optimized 100% for the classification task\\\", it came as a surprise that in some cases the performance even improved.\", \"3) As also pointed out by the reviewer, the hyperparameters are likely suboptimal because they were tuned for different conditions.\", \"The reviewer suggests a few improvements and experiments that would improve their rating of the paper. Due to time constraints and computational limitations (2 GPUs only), we could not try all the ideas on all architectures and data sets, but we have observed promising results. 
These are the experiments and results we obtained so far:\", \"Data augmentation invariance pre-training (5 % of the total epochs) and subsequent annealing of alpha (suggested by the reviewer): there is a moderate improvement in the final classification accuracy:\", \"All-CNN on CIFAR-10: 92.47 --> 92.87\", \"WRN on CIFAR-10: 94.86 --> 95.17\", \"Adjustment of the learning rate (divided by M=8):\", \"All-CNN on CIFAR-10: 92.47 --> 93.11\", \"WRN on CIFAR-10: 94.86 --> 95.47\", \"It seems that both ideas slightly increase the performance, especially adjusting the learning rate, since performing M in-batch data augmentation has the implicit effect of reducing the batch size by a factor of M (only approximately, since the augmented samples are only similar and not identical), and is, in turn, approximately equivalent to multiplying the learning rate by M. Pre-training and annealing alpha towards zero also seem to help, although it requires tuning several additional hyperparameters (epochs and decay factor). We will update the manuscript with the complete set of results, in case of acceptance.\"], \"regarding_the_rest_of_the_feedback_points\": [\"We agreed on the issue about alpha. We have replaced alpha with S. Thanks.\", \"Denominator in Eq. 2: the actual operation is as indicated by the reviewer, that is, we divide by the *total* average in the set. It was wrongly phrased in the paper and we have updated it.\", \"\\\"it would be good to rerun the baseline with the same in-batch augmentation\\\": this was already the case. The baseline results are on the original models, with standard data augmentation.\", \"We have added (higher is better) and (lower is better) to the captions accordingly. We preferred to keep the concept of invariance score, which is a desirable objective that can be optimized by defining a loss, the inverse of the invariance. This is analogous to classification accuracy and cross-entropy loss.\", \"Finally, we totally agree with the last remark. 
Data augmentation is, of course, just an approximation. We have added a paragraph discussing this in the last section of the paper.\", \"We are confident the feedback has improved our paper and hope the reviewer's concerns have been effectively addressed.\"]}", "{\"title\": \"Is simplicity a weakness?\", \"comment\": \"We first thank the reviewer for their feedback. Several interesting points were raised that we are happy to discuss next:\\n\\nAccording to the reviewer, the main weakness of the paper is that our approach makes use of a standard method (data augmentation) and a well-known distance metric (Euclidean). We humbly argue that rather than a weakness, the simplicity of our approach ought to be considered a strength. It would have been more questionable, for instance, if we had used a far-fetched distance metric to demonstrate our hypothesis. We use data augmentation as a framework precisely because it is a well-known method commonly used to transform existing images into new, plausible examples. Our work first demonstrates that the standard way of applying data augmentation yields representations with undesirable properties (lack of robustness), misaligned with a fundamental property of biological vision. Second, we show that a simple modification of the objective function can greatly improve the features' robustness while preserving the classification performance. Simplicity is a positive aspect in this case, as it will be straightforward to incorporate the proposed method into existing models of image object classification.\\n\\nThe reviewer also states that the problem of DNNs identified in our paper is well known in computer vision and points to a recent paper. We would like to note that the problem we address in our work is fundamentally different from the one in the mentioned paper and other related works. 
While previous work reveals that DNNs fail at *classifying* objects in a different pose or perceptually similar images (as in the problem of adversarial examples), we instead focus on the analysis of the intermediate features and reveal that DNNs represent transformations of the same image (obtained via standard data augmentation) very differently, even if they are correctly classified. To the best of our knowledge, this is a novel contribution.\", \"reviewer\": \"\\\"Taking a step back, do we even want convolutional networks to be rotation invariant? The categorical label of some objects could change depending on the rotation. For example, a door is only a door if it is upright against a wall. If you rotate the door, it just becomes a board.\\\"\\n\\nIn our opinion, we do want convolutional networks to be reasonably invariant to rotations of the objects within the ranges in which they are perceived (by humans) as the same objects in the real world. This is why we take inspiration from visual perception for our work and why we use data augmentation as a comprehensible way to generate \\\"identity-preserving transformations\\\". As illustrated by the reviewer, an extreme rotation of a door would change its category. In the particular case of rotation, humans likely perceive objects invariantly within a range of rotation equivalent to the range in which the head can be tilted sideways. We aim at simulating such perceptual properties by setting appropriate data augmentation parameters. In particular, our data augmentation scheme performs rotations in the range of -22.5 to 22.5 degrees, sampled via a truncated normal distribution centered at 0 degrees. Therefore, our data augmentation scheme would never rotate a door image such that it would become a board. 
The parameters of the data augmentation scheme are detailed in Appendix A.\\n\\nWe hope this addresses the reviewer's concerns and we are open to further discussion.\"}", "{\"title\": \"On the novelty and the relevance of the results\", \"comment\": \"First, we thank the reviewer for the assessment of our paper. We appreciate that some of the strengths have been identified.\\n\\nOne concern is the novelty of the proposed method. In this regard, we would like to highlight that our paper makes two significant contributions: 1) we show that the features of DNNs (3 distinct, popular architectures) are surprisingly non-robust to the transformations of standard data augmentation. Note that, unlike many previous studies which have revealed that DNNs fail at *classifying* images which are perceptually similar (adversarial examples, noise [1], change of pose [2]), here we focus on the intermediate features and reveal that, even when similar images are classified correctly, their internal representations are remarkably different. To the best of our knowledge, this is a novel contribution. 2) We propose a simple modification of the loss function that effectively and efficiently solves this robustness problem, while preserving or even improving the classification performance.\\n\\nAccording to the reviewer, our proposed solution is just a \\\"straightforward engineering trick, but it is of less scientific interest\\\". This is surprising to us, especially because the proposal is motivated by visual neuroscience and perception. In particular, we aim at incorporating one of the key mechanisms for robust visual object categorization in the visual cortex, that is, the invariance to identity-preserving transformations. The paper offers an introduction to this idea and reviews some of the relevant neuroscientific literature in this regard. 
This is rarely found in machine learning papers, which indeed often focus on engineering tricks with no scientific motivation.\\n\\n\\\"Results shown in figs. 2,3,4 are obvious because the proposed objective eq.3 prefers a higher invariance score in eq(2) and the increasing alphas prefer increasing invariance score.\\\"\\n\\nThis statement might be missing a subtle but very important aspect of the results: all the results presented in our paper are obtained on test data, including the results in Figures 2, 3 and 4. According to the reviewer, it is \\\"obvious\\\" that the models obtain a higher invariance score, but this implies taking for granted good generalization given optimization on training data. This is equivalent to saying that it is obvious that models correctly classify objects in unseen images, while it is well-known that this is not the case in, for instance, challenging data sets such as ImageNet or the aforementioned problems of DNNs such as adversarial vulnerability. In other words, we humbly argue that it should not be taken for granted that optimizing robustness to identity-preserving transformations on the training data should automatically grant robustness to *potentially different* transformations on unseen images.\\n\\nThe reviewer further considers that the \\\"empirical results are not encouraging\\\", referring to the results presented in Table 1. We would first like to highlight that the main results of our paper are the ones that refer to the robustness of the features, which is the contribution of our work, in Figures 2, 3 and 4. The results in Table 1 aim to demonstrate that the modification of the objective function we propose does not significantly degrade the classification performance. Note that the motivation for our work is not to improve classification but to improve robustness. This is similar to works on adversarial robustness, which aim at reducing adversarial vulnerability, although many methods do impact the classification. 
Note that, nonetheless, our proposed method even improves the classification in some cases, surprisingly.\\n\\nFinally, following the reviewer's suggestion, we have carried out additional experiments including explicit regularization in the models. Unfortunately, due to time constraints and limited computational resources, we could only carry out a few pilot tests with subsets of data. The results we have observed are perfectly consistent with the results of the models trained without explicit regularization; therefore, regularization seems to play no relevant role in this regard. We will add an appendix section to the manuscript including these results once they are available. The reason why we trained without explicit regularization is that it has been shown that it is unnecessary if data augmentation is applied [3] and its hyperparameters are extremely sensitive to changes in the learning procedure, as in our case, by changing the objective function and applying in-batch data augmentation.\\n\\nWe hope this discussion effectively addressed the reviewer's concerns.\\n\\n[1] Geirhos et al. Generalisation in humans and deep neural networks. NeurIPS, 2018.\\n\\n[2] Alcorn et al. Strike (with) a Pose: Neural Networks Are Easily Fooled by Strange Poses of Familiar Objects. CVPR, 2019.\\n\\n[3] Hern\\u00e1ndez-Garc\\u00eda and K\\u00f6nig. Data augmentation instead of explicit regularization. arXiv preprint arXiv:1806.03852, 2018.\"}
This approach is empirically demonstrated on cifar10 and tiny imagenet with different network architectures including all convolutional net, wide residual net and dense net.\\n\\nThis paper is well written and organized. However, I have several concerns. The novelty of the proposed method is limited. The unsupervised objective in eq. (3) is a good and straightforward engineering trick, but it is of less scientific interest. The empirical results are not encouraging. Table1 shows the comparison results between the proposed method and a baseline method, however, for ALL-CNN, the reported top1 in the original paper is 92.75%. The reviewer is aware that this paper mentions the proposed method doesn't apply regularization, but why not compare to the original results. Results shown in figs. 2,3,4 are obvious because the proposed objective eq.3 prefers a higher invariance score in eq(2) and the increasing alphas prefer increasing invariance score. \\n\\nIn my opinion this work is not sufficient for ICLR.\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper proposes to explicitly improve the robustness of image-classification models to invariant transformations, via a secondary multi-task objective. The idea is that the secondary objective makes intermediate-layer representations invariant to transformations of the image that should lead to the same classification. The paper also establishes that the typical models do not actually learn such representations by themselves.\\n\\nThis is an interesting idea (I am not specialist enough on image classification to know whether this idea has been tried before.) and highly relevant to this conference. The paper is 
well-written, easy to read, and correct.\\n\\nI do, however, rate it as Weak Accept only for one reason: I would expect that making the model more robust should improve classification accuracy. But according to the paper, accuracy does not improve (and even degrades slightly). The paper does not experimentally demonstrate that the proposed methods objectively improves the model.\\n\\nIn a sense, it is to be expected, since the model is no longer optimized 100% for the classification task.\", \"i_can_think_of_three_changes_to_the_paper_that_would_flip_my_review_to_a_strong_accept\": \"* Try using the multi-task objective as a pre-training, followed by fine-tuning without multi-task objective. This should foster robust internal representations while allowing to fully optimize for the classification task. Alternatively, you could anneal alpha to 0. Try whether this alleviates the losses on the tests that got worse, and leads to higher gains on the others.\\n* Maybe the robust models, while being worse on benchmarks, are better on real-life data, e.g. where training and test mismatches are higher. Can you find a test set that demonstrates a larger, reliable accuracy improvement from robustness?\\n* \\\"However, note that the hyperparameters used in all cases were optimized to maximize performance in the original models, trained without data augmentation invariance. Therefore, it is reasonable to expect an improvement in the classification performance if e.g. the batch size or the learning rate schedule are better tuned for this new learning objective.\\\" -- Then that should be done.\\n\\nBesides this, I have a few detailed feedback points:\\n\\nEq. (2): Using sigma for \\\"invariance\\\", which is the opposite of the usual meaning of sigma... I wish you had used a different symbol. 
Not a dealbreaker, but if you have a chance, it would be great to change to a different one.\\n\\n\\\"we normalize it by the average similarity with respect to the *other* images in the (test) set\\\" -- If you use only *other* images, I think it is theoretically possible that sigma becomes negative. I think you should include the numerator image in the denominator as well. I understand that in practical terms, this is never going to be a problem, so it should be OK, no need to rerun anything.\\n\\n\\\"we first propose to perform in-batch data augmentation\\\" -- This increases correlation of samples within the batch, and may therefore affect convergence. To be sure that this is not the cause of the degradation, it would be good to rerun the baseline with the same in-batch augmentation (but without the additional loss). Was that already done?\", \"figure_5\": \"I was a little confused at first because I read this as the invariance score (like Figures 3 and 4), not the invariance loss. They seem to be opposite of each other. So I wondered why the right panel would show poorer invariance score as training progresses. Do you actually need the concept of \\\"invariance score\\\" at all? Or can you redefine the invariance score as a non-invariance score (1- of it), so that its polarity is the same as the invariance loss? If not, an easy fix would be to add \\\"(lower=better)\\\" to the caption of Fig. 5, and likewise (\\\"higher=better\\\") to Fig 3 and 4.\\n\\n\\\"DATA AUGMENTATION INVARIANCE\\\" -- You really want more. You want to be robust to all valid transformations of the object in the images. 
Obviously it is not possible to augment data for that, but it would be good to state this somewhere as the idealized goal of this work, which you approximate by data augmentation.\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"Motivated by biological visual systems, this paper investigates whether the representations of convolutional networks for visual recognition are invariant to identity preserving transformations. The results show that empirically they are not, and they further propose a data-augmentation approach to learn this invariance. Since transformations can be automatically generated, this does not require additional manual supervision.\\n\\nThe main weakness of this paper is that the approach is mostly data-augmentation, which is standard. Whereas standard data augmentation simply adds the transformed examples to the training set, this paper goes one step further and adds an additional regularization between two different views, so that they lie in the same space, such as using Euclidean or cosine distance. However, these loss functions are widely known and applied as a baseline in metric learning.\\n\\nThe problem itself is also well known in computer vision, and there are several empirical papers that demonstrate this. For example, the recent paper \\\"Strike (with) a Pose: Neural Networks Are Easily Fooled by Strange Poses of Familiar Objects\\\" at CVPR 2019.\\n\\nTaking a step back, do we even want convolutional networks to be rotation invariant? The categorical label of some objects could change depending on the rotation. For example, a door is only a door if it is upright against a wall. If you rotate the door, it just becomes a board. 
\\n\\nIn my view, the novelty of this paper lies in the application of standard approaches to a standard vision problem. Due to this, I feel this contribution is not sufficient for ICLR.\"}" ] }
rJxyqkSYDH
A Simple Dynamic Learning Rate Tuning Algorithm For Automated Training of DNNs
[ "Koyel Mukherjee", "Alind Khare", "Yogish Sabharwal", "Ashish Verma" ]
Training neural networks on image datasets generally requires extensive experimentation to find the optimal learning rate regime. In particular, for adversarial training or for training a newly synthesized model, one would not know the best learning rate regime beforehand. We propose an automated algorithm for determining the learning rate trajectory that works across datasets and models for both natural and adversarial training, without requiring any dataset/model-specific tuning. It is a stand-alone, parameterless, adaptive approach with no computational overhead. We theoretically discuss the algorithm's convergence behavior. We empirically validate our algorithm extensively. Our results show that our proposed approach \emph{consistently} achieves top-level accuracy compared to SOTA baselines in the literature in natural training, as well as in adversarial training.
[ "adaptive LR tuning algorithm", "generalization" ]
Reject
https://openreview.net/pdf?id=rJxyqkSYDH
https://openreview.net/forum?id=rJxyqkSYDH
ICLR.cc/2020/Conference
2020
{ "note_id": [ "wSEyBuSwHw", "H1eS-rGssB", "Hygg_4zosH", "rJlPaXGojH", "r1glYXfioB", "rkguCfMjsS", "HyeOzMfjsr", "B1e-hZfsoH", "S1xV-3gVqr", "Hyx_lophKr", "H1xlCzUqtr" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798734379, 1573754109211, 1573753959735, 1573753791496, 1573753719651, 1573753551563, 1573753359642, 1573753257291, 1572240379752, 1571769071922, 1571607239575 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1863/Authors" ], [ "ICLR.cc/2020/Conference/Paper1863/Authors" ], [ "ICLR.cc/2020/Conference/Paper1863/Authors" ], [ "ICLR.cc/2020/Conference/Paper1863/Authors" ], [ "ICLR.cc/2020/Conference/Paper1863/Authors" ], [ "ICLR.cc/2020/Conference/Paper1863/Authors" ], [ "ICLR.cc/2020/Conference/Paper1863/Authors" ], [ "ICLR.cc/2020/Conference/Paper1863/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper1863/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper1863/AnonReviewer2" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"This paper proposes an automatic tuning procedure for the learning rate of SGD. Reviewers were in agreement over several of the shortcomings of the paper, in particular its heuristic nature. They also took the time to provide several ways of improving the work which I suggest the authors follow should they decide to resubmit it to a later conference.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Addressing the major comments\", \"comment\": \"Thank you for the insightful review and comments. \\n\\n1. We agree that we have tried AALR on image classification, both natural and adversarial training, and we should try it on other tasks too. This is definitely our plan for future work. \\n\\n2. Yes we could measure the time to convergence. 
AALR converges around the same time as the LR-tuned baselines. However, in the case of other periodic methods like SGDR and CLR, measuring the convergence becomes very difficult (in fact, detecting the convergence is difficult) due to their oscillating nature; hence we have not reported these values. \\n\\n3. We have included a Conclusion. \\n\\n4. We have also added details on adversarial versus natural training principles in section 5.4.\\n\\n5. We have added the missing values. We have also started experiments on adversarial training of CIFAR100 and reported values for those that are complete by this revision. Additionally, as pointed out by the other reviewers, we have started additional experiments on Hypergrad etc. We have reported the values for runs that are complete by the time of this revision. \\n\\n6. The catastrophic failure of ADAM in multiple cases is a drawback of the method, unlike the proposed algorithm AALR that works uniformly well. In fact, we performed an experiment with AALR on top of the ADAM optimizer for one of the cases where ADAM failed. It considerably improved the performance of ADAM from 10% to 78.46%, again showing the effectiveness of AALR. \\n\\n7. We have corrected the typos, and have added the citations in the Introduction.\"}", "{\"title\": \"Addressing the major comments\", \"comment\": \"Thank you for the insightful and thorough comments and review. \\n\\n1. We have tried on a different dataset, FMNIST, as suggested. \\n\\n2. We have experimentally verified the performance dependence of AALR on batchsize. We found that the generalization performance of AALR remains unaffected wrt batchsize. A more detailed discussion on this has been provided as an official comment above. The new results and observations are presented in the revised version, Section 5.5.\\n\\n3. We have implemented the SGDHD algorithm of Hypergrad (Baydin et al.). We have reported the results of runs that are already finished. Other runs are on the way. 
We will add these values as soon as they are complete. \\n\\n4. Hoffer et al. pointed out that \\u201crules of thumb\\u201d like linear scaling of the learning rate with batchsize may not hold in every case. We agree that there needs to be a solid theoretical foundation behind every such rule. However, for ease of analysis we have assumed that OPT behaves like a typical LR regime on image datasets, which typically obeys the stated rule of thumb. \\n\\n5. We agree the usage of \\\"expectation\\\" was not rigorous or formal and have removed the usage from the text. \\n\\n6. The references for the baseline values, including those obtained on CIFAR100, are reported in section 5.3. \\n\\n7. We have added some details explaining the concepts of adversarial training like FGSM and PGD in section 5.4.\\n\\n8. We have changed the description in Phase 2 to take into account the reviewer\\u2019s comments. \\n\\n9. Weight decay: it is a regularization parameter that can be optionally provided by the user. This does not refer to the factor by which LR is adjusted by AALR. \\n\\n10. Epoch counter: yes, the epochs in Phase 1 are *counted* in the total number of epochs (T). \\n\\n11. We will include the comparison with Adagrad and RMSProp in the camera-ready version, if accepted. However, in general, it has been observed in practice that ADAM achieves slightly better generalization compared to these. \\n\\n12. Tuning of competitor algorithms: We agree that the relevant metrics are test set performance and training time and resources required. All the algorithms tried for any given dataset-model-batchsize were run for the same number of epochs, so approximately the same training time and resources were allocated to each. 
Hence, this did not leave any time/resources for the manual tuning part.\\n\\nIn any case, our aim with AALR is to present an autonomous approach that works without tuning across tasks, eliminating the need for manual, exhaustive experimentation.\"}", "{\"title\": \"batchsize vs LR\", \"comment\": \"Thank you for the insightful review and detailed comments and suggestions.\\n\\nWe have performed experiments to verify this relationship. The detailed comment can be found as an official comment above this. The new results can be found in the revised version of the paper in the last Section before Conclusion. \\n\\nWe have tried to address all other comments below.\"}", "{\"title\": \"Low values of SGDR; PGD instead of FGSM; Informal convergence; other tasks\", \"comment\": \"\", \"low_values_of_sgdr\": \"We don\\u2019t know, it is probably due to some inherent behaviour of the algorithm SGDR that leads to its catastrophic failure in certain cases. In fact, we would like to add that even ADAM failed catastrophically in a few cases. We have performed an experiment where we applied AALR on top of ADAM in one such case. The accuracy improved from 10% to 78.46%.\", \"pgd_vs_fgsm\": \"We have added some results on using AALR for adversarial training with PGD attack (10 steps).\", \"informal_convergence\": \"We agree the development in section 4 is not completely rigorous. We have removed the statement on generalization.\", \"other_tasks\": \"We haven't tried AALR on the proposed tasks yet. That is definitely the next step in future work.\"}", "{\"title\": \"Inconsistency in definition; double well potential\", \"comment\": \"Thank you for pointing out the inconsistency in the existing definition.\", \"divergence_of_training_means_here\": \"if the LR is set to any larger value than that of OPT, gradients will explode and the training of the network cannot be recovered from there. 
Yes \\u201cAny SGD algorithm\\u201d refers to any first order stochastic gradient-based algorithm. We have edited the text to reflect these corrections.\", \"double_well_potential\": \"We addressed this point earlier in the comment on the reason for persisting in higher LR and increasing LR when loss decreases. OPT would know the exact LR which would help it to escape from the sharp minima without leading to divergence due to exploding gradients. AALR is designed to handle such situations. For ease of analysis, we have assumed we have a reasonably smooth loss surface where a typical SGD regime (where LR never increases) is a proxy for OPT and the loss will show a decrease after p (\\\\geq 2) epochs on being trained at the tuned LR.\"}", "{\"title\": \"Reason for persisting in a higher learning rate; increasing LR when loss decreases.\", \"comment\": \"The reason for persisting in the learning rate even when the loss increases in the first stage is to handle scenarios such as the double well potential as the reviewer has pointed out in question 3. In order to escape a sharp minima, the loss might first increase for a few rounds, before starting to decrease again. Hence, reverting to a lower learning rate at the first instance of loss increase might not help in escaping from a sharp minima. 
To handle such situations, we have designed AALR to persist in the higher learning rate for p more epochs.\", \"we_increase_the_learning_rate_when_we_see_a_decrease_in_training_loss_for_two_reasons\": \"a) it might be a sharp minimum and we want AALR to explore the vicinity of the loss valley (in particular, escape if it is a sharp minimum); b) if we are in a smooth, flat loss well, increasing the LR will likely not harm, and it will simply accelerate AALR to reach the minimum point, without escaping from the loss valley.\"}", "{\"title\": \"Relationship between LR, batchsize and entropy.\", \"comment\": \"We thank the reviewers for their careful review and kind, insightful comments and pointers.\\n\\nThe reviewers wanted to know if AALR can find the relationship between LR and batchsize, and they wanted us to try on datasets other than CIFAR10 and CIFAR100. \\n\\nWe performed additional experiments to examine how AALR handles different batchsizes and if it can find the relationship between learning rate, batchsize and entropy. (We would like to highlight here that for all the other experiments we used the settings, including batchsizes, as used in the baselines reported in the literature. The baseline sources are all provided in Section 5.3.)\\n\\nWe tried on FMNIST (since it was recommended to try on datasets other than CIFAR10 and CIFAR100) with different batchsizes. We found that AALR finds LR trajectories that roughly obey the *square root* relationship between LR and batchsize (and not linear). \\n\\nMoreover, we compared the LR trajectory as obtained on the same batchsize with and without random erasing. We found that the LR trajectory is much more conservative (i.e. low LR) for the one with random erasing. This is in line with the observation in https://arxiv.org/abs/1710.11029 (pointed out by Reviewer#1) that the implicit entropy in SGD is determined by the ratio of LR to batchsize. 
Since random erasing regularization increases the entropy, AALR lowers the implicit SGD entropy by reducing the LR. \\n\\nThese new experiments and observations have been presented in the revised version in Section 5.5. We would like to mention that in all of these cases, AALR\\u2019s generalization is unaffected; it performs similarly to reported baseline values across batchsizes.\"}", "{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper proposes an algorithm for automatically tuning the learning rate of SGD while training deep neural networks. The proposed learning rate tuning algorithm is a finite state machine and consists of two phases: the first phase finds the largest learning rate that the network can begin training with for p = 10 epochs; the second phase is an optimistic binary exploration phase which increases or decreases the learning rate depending upon whether the loss is NaN, increasing or decreasing. 
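The two-phase behaviour summarized in this thread (optimistically raise the LR after a loss improvement; on divergence or a non-improving loss, restore a checkpoint and halve the LR) can be sketched on a toy 1-D quadratic. This is purely an illustration under assumed helpers and constants, not the authors' actual AALR implementation:

```python
import math

def train_epoch(x, lr, steps=10):
    """One 'epoch' of gradient descent on f(x) = x^2 / 2 (so grad = x)."""
    for _ in range(steps):
        x = x - lr * x
    return x, 0.5 * x * x

def optimistic_lr_search(x0=10.0, lr0=4.0, epochs=20):
    """Toy version of the halving/doubling rule discussed above:
    double the LR after an improving epoch; on NaN/inf loss or a
    non-improving loss, restore the checkpoint and halve the LR."""
    x, lr = x0, lr0
    best_loss = 0.5 * x0 * x0
    checkpoint = x0
    lr_history = []
    for _ in range(epochs):
        new_x, loss = train_epoch(x, lr)
        if math.isnan(loss) or math.isinf(loss) or loss >= best_loss:
            x, lr = checkpoint, lr / 2.0   # pessimistic: revert and shrink
        else:
            checkpoint, x, best_loss = new_x, new_x, loss
            lr *= 2.0                      # optimistic: try a larger LR
        lr_history.append(lr)
    return x, best_loss, lr_history
```

On this toy surface any lr > 2 makes the iterates blow up, so the loop first backs off from lr0 = 4 and then converges once the LR drops to 1; afterwards it keeps probing larger and smaller LRs without ever losing the checkpointed best iterate.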
Empirical results are shown on a few standard neural networks for image classification on CIFAR-10/100 datasets and for adversarial training on the CIFAR-10 dataset.\", \"i_recommend_rejecting_this_paper_for_the_following_reasons\": \"(i) the algorithm developed here is extremely heuristic, no insight, theoretical or empirical, is provided as to why this could be a general algorithm, (ii) a major claim in the paper is that the automatic learning rate tuning does not have any hyper-parameters but the actual algorithm does have parameters such as patience and successive doubling of the learning rate although they are tuned adaptively using ad-hoc heuristics, (iii) the convergence analysis is not at all rigorous, in particular the optimal oracle for SGD may not exist, and (iv) the baseline algorithms are not tuned and the minor improvements of the proposed algorithm over them are therefore not significant.\", \"some_questions_that_i_would_like_the_authors_to_answer\": \"1. While the first phase of the algorithm seems a reasonable thing to do, the second phase is full of heuristics which I am not sure will work well for all problems. For instance, I do not see why the algorithm trains for p epochs twice even if the loss increased after the first stage, or why the learning rate should be increased if the loss decreased after the second stage.\\n2. Section 4, bullets 3/4 in the definition are problematic: the loss in SGD is not monotonically decreasing with respect to time. What does divergence of training mean here? What does \\u201cAny SGD algorithm\\u201d mean? Do you instead mean any first-order stochastic gradient-based algorithm?\\n3. If you imagine a double well potential with one wide minimum and one sharp minimum, both at the same training loss, if OPT starts in the sharp valley, it will not be able to go to the wide valley without the training loss increasing.\\n4. 
Have you tried this algorithm on other problems which are sensitive to the values of learning rate, e.g., training optical flow or segmentation networks?\\n5. The wordy and heuristic argument in Section 4.1 rests on statements like \\u201cAALR and OPT arrive at roughly the same location after so and so epochs and hence reaches similar generalization performance\\u201d. This cannot happen in a non-convex landscape; the trajectory of SGD starting from the same initial condition can be very different across two independent runs. Therefore, I also don\\u2019t see why the latter half of the statement about generalization should be true.\\n6. Can you make the development in Section 4 rigorous?\\n7. Why are some runs for SGDR stuck at 10% accuracy in Tables 1-2?\\n8. FGSM is a very weak attack for measuring adversarial accuracy. Can you show results with a better attack, say a few steps of PGD?\", \"some_suggestions_to_improve_the_paper\": \"1. A simple experiment to check the automatic tuning would be to increase the batch-size of the same network while keeping the ratio of batch-size and learning rate constant (see https://arxiv.org/pdf/1706.02677.pdf, https://arxiv.org/abs/1710.11029, among others). It would be interesting to see whether the auto-tuner finds a learning rate that corresponds to stable learning without degradation in the generalization performance.\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper considers the problem of automated adaptation of learning rate during (deep) neural network training. The use cases described are standard and adversarial training for image classification. 
Given the wide use of DNNs in computer vision (and other areas), learning rate tuning is clearly an important problem and is being actively researched.\\n\\nThe proposed learning rate adaptation procedure consists of a straightforward combination of learning rate halving/doubling and model checkpointing. Experimental results from implementing the adaptive learning rule for standard and adversarial training on CIFAR are provided. Multiple architectures are tested in each setting. The paper claims a primary advantage of the proposed learning rule to be that it requires no tuning as opposed to other rules such as SGD, Adam.\\n\\nMy decision is to reject the paper due to methodological issues with the experiments and lack of evidence wrt/ dataset variety. The paper should be considered a work-in-progress that may have potential in a more focused setting, e.g., adversarial training as described in the paper.\\n\\n***\\n\\nThe major claim of the proposed algorithm not requiring any manual tuning is technically true but misleading. The algorithm does have parameters (SGD momentum, batch size, initial learning rate, patience) with values that were set somehow. In fact, a major methodological issue with the experiments is that the reader does not know if the datasets were used to both set these values and to assess performance, i.e., there are no obvious \\\"held-out\\\" datasets. Also, there is no rigorous or even informal justification of the settings. It could be that the paper is arguing that the specific values will result in competitive, if not better, performance than baselines across a variety of datasets - unfortunately, only two datasets are utilized in the experiments, and one, CIFAR10, is not considered challenging. 
This leads to the second issue with the paper: the experimental validation is not extensive wrt/ datasets which is significant given that the form of the evidence for the proposed method is almost entirely empirical.\\n\\nAdditionally, I don't agree that competitor algorithms should not be tuned b/c the proposed method does not require tuning. Even if the proposed method does not require tuning (as stated previously, I don't believe this to be accurate), that does not imply a fair comparison precludes tuning competitors via, e.g., cross validation. The only relevant quantities are final test-set performance and total training time/resources required. \\n\\nThe well-known interdependence between learning rate and batchsize as noted in e.g., Hoffer et al. (2018), is not addressed by the experiments. Batchsizes in the experiments vary, but no justification is provided for how these are selected.\\n\\nFinally, the paper is unfinished as some experimental runs were not complete at the time of submission.\\n\\nOn the positive side, the general point about the necessity of learning rate tuning for adversarial training (described in the fourth paragraph of the introduction) is a very good one, and there may be an opportunity for a more focused application of the proposed algorithm perhaps among further datasets and considering additional, alternative attacks.\\n\\n***\\n\\nSuggestions for improvement / questions (related to decision):\\n\\n* It should not be a challenge to find more image classification datasets to include in the experimental comparison: SVHN, Fashion MNIST, Imagenet, ... Using these, the paper can either follow the standard train/test methodology *across datasets*, i.e., split the meta-dataset into train/test, and/or provide a more compelling body of evidence for the proposed method. Also, the performance dependence on batchsize amongst the proposed algorithm and competitors should be investigated experimentally.\\n\\n* The Baydin et al. 
(2018) algorithm should be added to the set of competitors since it would provide a relatively easy* reference point wrt/ \\\"hypergradient\\\" approaches. I don't agree with the statement in the related work section that this entails \\\"additional computation of gradients.\\\" *In the sense that the rule should be straightforward to implement.\\n\\n* The convergence analysis assumption that the optimal oracle SGD follows typical learning rate regimes motivated by loss plateauing seems to be in direct contradiction to the sentiment expressed in the cited Hoffer et al. (2018) paper that such \\\"rules of thumb\\\" may be misguided. Can the authors discuss the appropriateness of their assumption wrt/ this point? Also, in the convergence analysis, the phrase \\\"in expectation\\\" is used twice. This has a specific probabilistic meaning, but appears to be used heuristically in this section. Can the authors clarify whether this usage is informal or formal? If the latter is true, it would be better to provide a more formal convergence argument that explicitly takes the inherent randomness into account.\\n\\n***\\n\\nEditorial comments (not related to decision):\\n\\n* Introduction: The first two sentences of the second paragraph, particularly the second, would do well to have an accompanying reference or references.\\n\\n* Proposed method: Even as an informal statement, the second sentence of the second paragraph under the Phase 2 sub-heading is problematic. The proposed method does not \\\"resist\\\" lowering the learning rate \\\"for as long as possible\\\" so much as it doesn't lower the learning rate for a fixed number of epochs (algorithm parameter).\\n\\n* \\\"Adversarial training\\\" section (5.4): The paper assumes the reader is familiar with the terms \\\"FGSM\\\", \\\"white box\\\", and parameters \\\\epsilon and \\\\alpha since these are referred to w/o description. 
Perhaps a short (2-3 sentence) description of the adversarial scenario could be added?\\n\\n* Experiments: It would be good for the paper to include RMSProp and Adagrad results to the experimental tables as these rules are both readily available for use and widely used.\\n\\n* Experiments: Is the reporting of the peak accuracy standard in the literature?\\n\\n* Experiments: I want to give the paper credit for performance on CIFAR100, but this is difficult without explicit points of comparison. This can be easily remedied by including SOTA performance values (along with appropriate references) in the tables or text.\\n\\n* (Potential) Typos:\", \"proposed_method_algorithm_description\": \"Requirements has a weight decay parameter which seems strange given that the algorithm is performing automated learning rate adaptation...\\n\\t\\tThe epoch counter is incremented in line 5, but not reset prior to Phase 2. Does this mean that Phase 1 training epochs are counted toward the total (T)?\\n\\t\\tLine 7 should be \\\\eta_t <- \\\\eta_0 / 2.\\n\\t\\tThe patience counter in line 15 is not utilized below.\\n\\t\\tLine 23 could/should be an else statement.\"}", "{\"rating\": \"1: Reject\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #2\", \"review\": [\"This paper proposes a new way of scheduling the learning rate in optimization algorithms such as SGD. 
It is a stand-alone, parameter-free approach that optimistically doubles the learning rate at every loss improvement between two epochs, until the loss increases too much or diverges, in which case the learning rate is divided by two.\", \"This approach is theoretically proven to converge and to follow an optimal scheduling strategy.\", \"In addition, the authors experimentally tested their approach on two image classification tasks, showing that the proposed algorithm yields results similar to the baselines.\", \"I am rejecting this paper because it seems to motivate things with non-related facts, experiments are not robust and thorough enough, and there is no conclusion (not even in the appendix).\", \"The most important thing in this paper to me is the fact that \\\"adversarial training\\\" is used to motivate this approach a lot. It is mentioned 14 times across the paper: 3 times in the abstract alone. Yet there is no explanation of what it is, and how it is different from \\\"natural training\\\" as mentioned in the paper. I suggest the authors either clearly explain the difference between the two and explain why their approach may help in one setting or the other, or simply remove the mentions of \\\"adversarial training\\\" if it is not important to the approach.\", \"To better motivate the approach, I would suggest the authors include different tasks, rather than different training settings. For instance, by having one image classification task (keep one of the two current ones) and one text classification or even generation task. This would show that the proposed approach generalizes well to other network architectures.\", \"The second concern I have is about the experiments. 
If increasing the learning rate as the proposed approach does makes training converge faster, then why are the experiments only measuring test set accuracy and not also runtime to convergence?\", \"Overall, the experiments are not complete and thorough enough: some table values are missing, the set of adversarial training experiments on CIFAR100 is not reported, and some experiments diverged with the ADAM optimizer. Less than 20% accuracy on a 10-class image classification task seems very far from optimal.\", \"Finally, I strongly suggest the authors submit a better closing statement than \\\"We use cross entropy loss in all cases.\\\" (especially after having read this same sentence earlier in section 5.1 of the paper). No conclusion is added to the paper, not even in the appendix.\"], \"below_are_a_few_minor_points_not_taken_into_account_in_the_scoring_but_that_could_make_the_paper_slightly_better\": [\"Section 1, paragraph #1, first sentence: a few citations here would be nice.\", \"typo on the first line of page 3: \\\"5The pseudocode ...\\\"\", \"typo in the 2nd paragraph of section 4: \\\"... the convergence is the fast*est* since the step sizes ...\\\"\", \"typos in the first line of the second paragraph of section 4.1: \\\"Assuming that *the* loss surface is smooth, *the* loss will continue...\\\"\", \"page 6: \\\"At this time, p=2^{n-1} at this time.\\\"\", \"Section 5.1, paragraph 2, first sentence: \\\"... with dropout and with both dropout and cutout, ...\\\"\"]}
Hkx1qkrKPr
DropEdge: Towards Deep Graph Convolutional Networks on Node Classification
[ "Yu Rong", "Wenbing Huang", "Tingyang Xu", "Junzhou Huang" ]
Over-fitting and over-smoothing are two main obstacles of developing deep Graph Convolutional Networks (GCNs) for node classification. In particular, over-fitting weakens the generalization ability on small dataset, while over-smoothing impedes model training by isolating output representations from the input features with the increase in network depth. This paper proposes DropEdge, a novel and flexible technique to alleviate both issues. At its core, DropEdge randomly removes a certain number of edges from the input graph at each training epoch, acting like a data augmenter and also a message passing reducer. Furthermore, we theoretically demonstrate that DropEdge either reduces the convergence speed of over-smoothing or relieves the information loss caused by it. More importantly, our DropEdge is a general skill that can be equipped with many other backbone models (e.g. GCN, ResGCN, GraphSAGE, and JKNet) for enhanced performance. Extensive experiments on several benchmarks verify that DropEdge consistently improves the performance on a variety of both shallow and deep GCNs. The effect of DropEdge on preventing over-smoothing is empirically visualized and validated as well. Codes are released on~https://github.com/DropEdge/DropEdge.
[ "graph neural network", "over-smoothing", "over-fitting", "dropedge", "graph convolutional networks" ]
Accept (Poster)
https://openreview.net/pdf?id=Hkx1qkrKPr
https://openreview.net/forum?id=Hkx1qkrKPr
ICLR.cc/2020/Conference
2020
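The core operation described in the DropEdge abstract above, randomly removing a fraction of edges from the input graph at each training epoch, is tiny to state in code. The following is a minimal sketch with an assumed edge-list representation (the function name and seeding are illustrative, not the authors' released implementation):

```python
import random

def drop_edge(edges, p, seed=None):
    """Keep each edge independently with probability 1 - p, i.e. drop
    a roughly p-fraction of the input graph's edges for this epoch."""
    rng = random.Random(seed)
    return [e for e in edges if rng.random() >= p]
```

Used as a data augmenter, a fresh perturbed graph would be sampled at every training epoch (evaluation typically uses the intact graph); per the abstract, the dropped adjacency is also re-normalized before message passing, which this sketch omits.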
{ "note_id": [ "Osk5RzdEIC", "NjBwV6BRhO", "bhSBAXSwY", "TSFVeqQwYC", "uWqxgeGU7f", "ByxtBPLNsr", "rJx638IVjB", "BketVLLVsS", "ryl-3sBhcH", "rJlykwUUcr", "B1eC_r5atS", "rke283wbYS", "r1lmckKyYr", "HyxLTU-p_r", "S1xtst6ndH", "HyxukVbidr", "HklLbb8quH", "r1l2CcPKdS", "S1lR897_dH", "ryxAKgmudB", "Syemfc3E_B", "S1lB1PPVdr", "ryxwiUP4uS", "r1l1bxGEOr", "HkeGMPxEur", "rkx6MbeVuH", "S1e-_k1V_B", "BkgHe114uH", "BJgENiRX_H", "Bkxf8BA7dS", "rJxD6NnQdr", "HJgskAYXuS", "r1ltlsM7OS", "BJgOg_9WOH" ], "note_type": [ "official_comment", "comment", "official_comment", "comment", "decision", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review", "official_comment", "official_comment", "comment", "official_comment", "comment", "comment", "official_comment", "official_comment", "comment", "comment", "official_comment", "official_comment", "comment", "comment", "comment", "comment", "comment", "comment", "comment", "comment", "comment", "comment", "comment" ], "note_created": [ 1583648143795, 1582686019820, 1581259376164, 1580414003917, 1576798734349, 1573312320994, 1573312180883, 1573312048843, 1572785065268, 1572394711194, 1571820918024, 1571023956489, 1570897803041, 1570735805965, 1570720160629, 1570604000112, 1570558206502, 1570499284380, 1570417237883, 1570414725768, 1570191883414, 1570170589326, 1570170527450, 1570148343042, 1570141962218, 1570140436548, 1570135913289, 1570135789282, 1570134827830, 1570133321868, 1570124990579, 1570115042971, 1570085617028, 1569986544288 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Paper1862/Authors" ], [ "~Dongsheng_Luo1" ], [ "ICLR.cc/2020/Conference/Paper1862/Authors" ], [ "~Petar_Veličković1" ], [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1862/Authors" ], [ "ICLR.cc/2020/Conference/Paper1862/Authors" ], [ "ICLR.cc/2020/Conference/Paper1862/Authors" ], [ "ICLR.cc/2020/Conference/Paper1862/AnonReviewer4" ], [ 
"ICLR.cc/2020/Conference/Paper1862/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper1862/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper1862/Authors" ], [ "ICLR.cc/2020/Conference/Paper1862/Authors" ], [ "~Alex_Williams1" ], [ "ICLR.cc/2020/Conference/Paper1862/Area_Chair1" ], [ "~Deli_Chen1" ], [ "~Alex_Williams1" ], [ "ICLR.cc/2020/Conference/Paper1862/Authors" ], [ "ICLR.cc/2020/Conference/Paper1862/Authors" ], [ "~Huaxin_Song1" ], [ "~Alex_Williams1" ], [ "ICLR.cc/2020/Conference/Paper1862/Authors" ], [ "ICLR.cc/2020/Conference/Paper1862/Authors" ], [ "~William_H_Cohen1" ], [ "~Not_Alex_Williams1" ], [ "~Alex_Williams1" ], [ "~Not_Alex_Williams1" ], [ "~Not_Alex_Williams1" ], [ "~Alex_Williams1" ], [ "~Alex_Williams1" ], [ "~Not_Alex_Williams1" ], [ "~Alex_Williams1" ], [ "~Not_Alex_Williams1" ], [ "~Alex_Williams1" ] ], "structured_content_str": [ "{\"title\": \"More Clarifications\", \"comment\": \"Hi, Dongsheng,\\n\\nReally appreciate your interest and the questions your raised for our paper. Based on your comments, we recognize that there are indeed unclear and imprecise descriptions in our current version. But note that the main story of our paper still holds. We summarize our clarifications below.\\n\\n1)\\tOur derivation is based on the work [1] where the over-smoothing is characterized in an asymptotical form, so we can only justify the increment of the RELAXED smoothing layer (which is an upper bound of the true smoothing layer) in Theorem 1. We have extra defined the relaxed smoothing layer in our paper and reflected the revisions in Theorem 1 and the proofs in the appendix.\\n\\n2)\\tYour illustration on the conductance is right. Our original proof is by connecting the (relaxed) smoothing layer with the conductance. But, we later find that using conductance is unnecessary. Instead, the proof is better explained via the connection with the resistance (see Eq.(6) in the new appendix). 
We have rearranged the proof and rigorously shown that the resistance (thus the relaxed smoothing layer) will increase if sufficiently many edges are dropped. \\n\\nThanks for your questions, which make our paper more rigorous and better qualified. We hope our explanations help.\\n\\n[1] Kenta Oono, Taiji Suzuki: Graph Neural Networks Exponentially Lose Expressive Power for Node Classification\"}", "{\"title\": \"Some questions about theoretical analysis.\", \"comment\": \"Dear authors,\\n\\nI think this is a very interesting paper. I have some concerns about the theoretical analysis.\\n\\n1) Formulation of l*. According to Definition 3, l* is the minimal value of the layers that satisfy Equation 3, and its formulation is given in Appendix Lemma 2, Equation (6). Since l* is the minimal such value, I think dM(H(l-1)) >= epsilon should be proved.\\n\\n\\n2) It seems that Definition 4 of conductance is not the standard form. According to the wiki and your reference (L\\u00e1szl\\u00f3 1993), the denominator should be min(V(S), V(\\\\bar(S))), where V(S) is the sum of node degrees in S. \\n\\n3) If we adopt the standard-form conductance, the statement \\\"the conductance of the graph can only decrease if one edge is removed from the graph\\\" may not hold. Intuitively, if we remove an edge inside a cluster, the conductance of the graph increases.\", \"here_is_a_toy_example\": \"https://github.com/flyingdoog/DropEdge-tf/blob/master/DropEdge.ipynb\\n\\nV = [0,1,2,3,4,5]\\nadj_list = {}\\nadj_list[0]=[1,2,3]\\nadj_list[1]=[0,2]\\nadj_list[2]=[0,1]\\nadj_list[3]=[0,4,5]\\nadj_list[4]=[3,5]\\nadj_list[5]=[3,4]\\n\\nThe conductance is 0.1429; removing the edge (4,5) leads to 0.2.\\n\\nThank you!\\n\\n\\n[1] L\\u00e1szl\\u00f3 Lov\\u00e1sz et al. Random walks on graphs: A survey. Combinatorics, Paul Erdos is eighty, 2(1): 1\\u201346, 1993\"}", "{\"title\": \"More clarifications\", \"comment\": \"Hi, Petar. 
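The two-triangle toy example in the question above is easy to check numerically with the standard degree-volume definition of conductance. A brute-force sketch that enumerates all vertex subsets (fine for 6 nodes; purely illustrative):

```python
from itertools import combinations

def conductance(n, edges):
    """Standard conductance: min over nontrivial subsets S of
    cut(S, S-bar) / min(vol(S), vol(S-bar)), with vol = sum of degrees."""
    deg = [0] * n
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    best = float("inf")
    for k in range(1, n):
        for subset in combinations(range(n), k):
            S = set(subset)
            cut = sum(1 for u, v in edges if (u in S) != (v in S))
            vol_S = sum(deg[u] for u in S)
            denom = min(vol_S, sum(deg) - vol_S)
            if denom > 0:
                best = min(best, cut / denom)
    return best

# Two triangles joined by the bridge (0, 3), as in the adjacency lists above.
two_triangles = [(0, 1), (0, 2), (0, 3), (1, 2), (3, 4), (3, 5), (4, 5)]
```

With this definition, the original graph gives 1/7 (approximately 0.1429), and removing the intra-cluster edge (4, 5) raises the conductance to 0.2, matching the numbers quoted in the comment.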
Really appreciate your interest in our work.\\n\\nSorry for missing the part you mentioned in your paper. Yes, performing dropout on the attention coefficients should be a specific form of GAT with DropEdge, and the result for regular GAT was obtained without this dropout. \\n\\nHaving said that, there are several points of difference between the two. First, your version is in effect a post-hoc version of DropEdge, as you compute all attention coefficients prior to attention dropout. Here, in our work, we first perform DropEdge and then use GAT, avoiding unnecessary computation of edge attentions. Besides, we further perform adjacency normalization following DropEdge, which, though simple, makes training converge much more easily. Without normalization, it will also intensify gradient vanishing as the number of layers grows. \\n\\nMore or less, the attention dropout seems an ad-hoc trick in your paper, and the relation to over-smoothing is never explored. In our paper, however, we have formally presented the formulation of DropEdge and provided rigorous theoretical justification of its benefit in alleviating over-smoothing. We also carried out extensive experiments by imposing DropEdge on several popular backbones.\\n\\nWe are happy you raised this discussion, and have added the above connection in the final copy.\"}", "{\"title\": \"Relationship to prior regularisers\", \"comment\": \"Very interesting work! 
Thank you for such a rigorous evaluation and the theoretical justification.\\n\\nFrom how I understood the method, it is very similar to the regularisation we employed in the GAT paper (https://openreview.net/forum?id=rJXMpikCZ ); as per Section 3.3:\\n\\n\\\"Furthermore, dropout (Srivastava et al., 2014) with p = 0.6 is applied to\\nboth layers\\u2019 inputs, *as well as to the normalized attention coefficients (critically, this means that at\\neach training iteration, each node is exposed to a stochastically sampled neighborhood)*.\\\"\\n\\nPerforming dropout on the attention coefficients should be more-or-less along the same lines as edge dropping? If so, perhaps this link should be better highlighted, and in relation to the \\\"GAT w/ DropEdge\\\" baseline you mentioned in a rebuttal comment below, does this mean that the result for regular GAT was obtained without this dropout?\\n\\nThank you!\"}", "{\"decision\": \"Accept (Poster)\", \"comment\": \"The paper proposes a very simple but thoroughly evaluated and investigated idea for improving generalization in GCNs. Though the reviews are mixed, and in the post-rebuttal discussion the two negative reviewers stuck to their ratings, the area chair feels that there are no strong grounds for rejection in the negative reviews. Accept.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Response to Reviewer #2\", \"comment\": \"We appreciate the reviewer for ordering the questions with numbers, which helps us to respond more conveniently.\", \"q1\": \"DropEdge does change the graph properties for each epoch. But statistically, as discussed in our reply to Q4, Reviewer#4, DropEdge does not change the expectation of neighbor aggregation that plays a crucial role in characterizing input graphs. 
Hence, the statistics of graph properties are still preserved.\", \"q2\": \"Drawing from our reply to Q1, DropEdge will not change the connectivity in expectation, even though it may result in disconnected components occasionally in one epoch.\", \"q3\": \"The information measurement in Thm.1 refers to how much freedom we have to describe a point in a certain space. The dimensionality of the space is a natural and direct choice, thus we use dimension reduction to reflect information loss.\", \"q4\": \"As we discussed in Section 4.3, the purposes of graph sparsification and DropEdge are different. Graph sparsification aims to remove unnecessary edges of graphs while keeping almost all information of the input graph, whereas DropEdge is an efficient approach to reduce over-smoothing, based on our theoretical analysis. Moreover, as mentioned in Q1, DropEdge preserves the statistics of graph properties and involves no bias.\", \"q5\": \"According to our theoretical analysis, deeper GNN models suffer from more serious over-smoothing issues than shallower ones. It is thus not surprising that DropEdge can gain more improvements from more layers. The experimental results in Tab. 1 and Fig.2 validate our theoretical findings.\", \"q6\": \"The trend on the Reddit dataset is still generally consistent with other datasets if we compare the results of 4/8/32 layers (the more layers, the more improvement from applying DropEdge). The corner case happens when the depth is 16. If we check Table 7 in the appendix, there is a huge performance drop in GCN without DropEdge at 16 layers, making the improvement by DropEdge bigger than that of 32 layers.\", \"q7\": \"The motivation of FastGCN and ASGCN is to speed up GCN, and they can be considered as different efficient implementations of GCN. We believe performing a comparison on GCN is sufficient without further consideration of FastGCN and ASGCN. 
GAT is different from GCN, and we are willing to provide the results below:\\n| | Cora | Citeseer |\\n| GAT | 0.863 | 0.781 |\\n| GAT w/ DropEdge | 0.881 | 0.792 |\\nAs expected, DropEdge can still enhance its performance.\", \"minor_q3\": \"$C_l$ refers to the size of the $l$-th hidden layer.\"}", "{\"title\": \"Response to Reviewer #3\", \"comment\": \"We thank the reviewer for recognizing the interest and completeness of our paper. We present our responses below.\\n\\nQ1. About the significance of performance improvement. \\n\\nAs mentioned in Section 5.1, the benchmark datasets are well-studied and well-tuned in the graph learning field. Achieving a 1-2% increase can be regarded as a remarkable improvement. For the baselines we consider in Tab.1, DropEdge generally improves them by around (or more than) 1% under different depths, which is significant considering the challenge of these datasets. \\n\\nQ2. About the claim of our paper on deeper GNNs.\\n\\nThe reviewer possibly misunderstood our claim. Our paper is not showing that deeper is better. Instead, we are more interested in investigating why GCN fails with deep layers and how over-smoothing happens. We hence propose DropEdge, a simple but effective method that is capable of enhancing various kinds of GNNs regardless of the network depth. Our motivation for discussing and reporting the results of varying depth in Tab.1 is to study how much DropEdge can enhance deep GNNs. The reviewer raised that \\\"it looks like most of the time, 2-layers networks are already the best (or close to the best)\\\", which is true but only when all models (including the 2-layer ones) have applied DropEdge. \\n\\nQ3. About the clarification of \\\"DropEdge either retards the convergence speed of over-smoothing or relieves the information loss caused by it\\\".\\n\\nThis sentence reflects two parts of Theorem 1. 
As the first part, it retards the convergence speed of over-smoothing; as the second part, it relieves the information loss caused by over-smoothing.\"}", "{\"title\": \"Response to Reviewer #4\", \"comment\": \"We really thank the reviewer for recognizing our contributions in the experimental evaluations and the theoretical justification. Here, we would like to provide more explanations to address the reviewer's concerns.\n\nQ1. The novelty of DropEdge.\n\nWe agree that our DropEdge is simple and is inspired by Dropout. Yet, when we put it in the context of graph learning, DropEdge is indeed a novel method that is able to alleviate over-smoothing, while Dropout cannot. DropEdge can be regarded as an extension of Dropout to graph edges, but this extension is, in a certain sense, not straightforward: people usually adapt the idea of Dropout to GNNs by dropping the network activations, rather than by dropping the network input. Interestingly, simply dropping edges at random is sufficient to deliver promising results, as verified by our paper experimentally and theoretically.\n\nQ2. Choosing the dropout proportion.\n\nThe reviewer probably misunderstood the experimental setting. We did not fix the drop proportion $p$ to 0.8 for the main experiments. Instead, as presented in the second paragraph of Section 5.1, we use the validation set to determine the dropping rate for each benchmark in Tab. 1; different datasets could have different dropping rates. In Section 5.2.1 (and Fig. 3), we fix it to 0.8 for a case study evaluating how DropEdge can prevent over-smoothing.\nTo further illustrate how the dropping proportion actually acts, we have conducted an example experiment for GCN-4 (the best GCN model for Cora in Tab. 2) by varying $p$ from 0 to 1 with a step of 0.2. 
The results are:\n| $p$ | 1 | 0.8 | 0.6 | 0.4 | 0.2 | 0 |\n|---|---|---|---|---|---|---|\n| GCN | 0.624 | 0.778 | 0.87 | 0.869 | 0.862 | 0.855 |\nClearly, DropEdge generally improves the performance when $0<p<0.8$. The exceptional cases are $p=0.8, 1$, where GCN degenerates (or nearly degenerates) to an MLP, which is reasonable due to the reduced expressive power. Furthermore, the best performance is achieved when $0.4 \\leq p \\leq 0.6$; selecting a dropout proportion $p$ near 0.5 may also be a good choice. \n\nQ3. Exploiting graph-specific properties when considering DropEdge.\n\nYes, this could potentially promote the performance. Given that our particular interest here is to keep the method simple and general, we are happy to explore more sophisticated variants of DropEdge in future work. \n\nQ4. The justification of why \u201cDropEdge can be considered as a data augmentation technique\u201d is valid.\n\nThanks for the comment and sorry for the unclear explanation in the current submission. We provide an intuitive understanding here. The key operation in GNNs is to aggregate neighbors' information for each node, which can be understood as a weighted sum of the neighbor features (the weights are associated with the edges). From the perspective of neighbor aggregation, DropEdge enables a random subset aggregation instead of the full aggregation during GNN training. Statistically, if we drop each edge with probability $p$, DropEdge only scales the expectation of the neighbor aggregation by the edge-keeping rate; this constant multiplier is removed after weight normalization, which is often the case in practice. Therefore, DropEdge does not change the expectation of the neighbor aggregation and is an unbiased data augmentation technique for GNN training. We have added the above specifications in the paper. \n\nQ5. On the layer-independent DropEdge.\n\nThanks for the comment. Our original purpose in naming it this way was to reflect that DropEdge is conducted independently across layers. 
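As a concrete aside on the unbiasedness argument in Q4 above, here is a minimal NumPy sketch of one DropEdge draw followed by the usual GCN renormalization. The graph, the drop rate, and the function names (`drop_edge`, `gcn_renormalize`) are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def drop_edge(adj, p):
    """One DropEdge draw: remove each undirected edge independently with probability p.

    `adj` is a dense symmetric 0/1 adjacency matrix; a real implementation
    would operate on a sparse edge list instead.
    """
    upper = np.triu(adj, k=1)                     # visit each undirected edge once
    kept = upper * (rng.random(upper.shape) > p)  # keep an edge with probability 1 - p
    return kept + kept.T                          # re-symmetrize

def gcn_renormalize(adj):
    """Standard GCN renormalization: D^{-1/2} (A + I) D^{-1/2}."""
    a_hat = adj + np.eye(len(adj))
    d_inv_sqrt = np.diag(a_hat.sum(axis=1) ** -0.5)
    return d_inv_sqrt @ a_hat @ d_inv_sqrt

# Tiny toy graph (hypothetical example).
adj = np.array([[0, 1, 1, 0],
                [1, 0, 1, 1],
                [1, 1, 0, 1],
                [0, 1, 1, 0]], dtype=float)

# Each training step would aggregate features over a different random
# neighbor subset; the self-loop keeps the renormalization well-defined
# even for nodes that lose all their edges in a draw.
a_train = gcn_renormalize(drop_edge(adj, p=0.5))
```

Averaged over many draws, every entry of `drop_edge(adj, p)` equals `(1 - p) * adj`, which is exactly the constant multiplier the response says is absorbed by the weight normalization.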
If this is a problem, we are willing to rename it \u201clayer-wise DropEdge\u201d to remove the confusion. We agree that the analysis of \u201clayer-wise DropEdge\u201d is interesting, and it is an important direction for future theoretical work.\n\nQ6. Other comments.\n\nThanks for the valuable suggestions; we will re-organize Fig. 1 accordingly. We will also fix the typos throughout our paper.\"}", "{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #4\", \"review\": \"The authors propose a simple but effective strategy that aims to alleviate not only overfitting, but also feature degradation (oversmoothing) in deep graph convolutional networks (GCNs). Inspired by dropout in traditional MLPs and convnets, the authors clearly motivate their contribution in terms of alleviating both overfitting and oversmoothing, problems both established in previous literature and validated empirically by the authors. 
Ultimately, the authors provide solid empirical evidence that, while a bit heuristic, their method is effective at alleviating at least partially the issues of overfitting and oversmoothing.\n\nI vote weak-accept in light of convincing empirical results, some theoretical exploration of the method's properties, but limited novelty.\", \"pros\": \"Simple, intuitive method\nDraws from existing literature relating to dropout-like methods\nLittle computational overhead\nSolid experimental justification\nSome theoretical support for the method\", \"cons\": \"Method is somewhat heuristic\nMitigates, rather than solves, the issue of oversmoothing\nLimited novelty (straightforward extension of dropout to graph edges)\nUnclear why dropping edges is \"valid\" augmentation\n\nFollowup-questions/areas for improving score:\n\nIt would be nice to have a principled way of choosing the dropout proportion; 0.8 is chosen somewhat arbitrarily by the authors (presumably because it generally performed well). There is at least a nice interpretation of choosing 0.5 for the dropout proportion in regular dropout (maximum regularization).\n\nAs brought up in the comments, adapting which edges to drop out to the graph's properties is an interesting direction to explore. While the authors state that they would like to keep the method simple and general, the method is ultimately devised as an adaptation of dropout to graphs, so exploiting graph-specific properties seems reasonable and a potential avenue to further improving performance.\", \"p2\": \"\"First, DropEdge can be considered as a data augmentation technique\" Why are these augmentations valid; why should the output of the network be invariant to these augmentations? 
I would like to see some justification for why the proposed random modification of the graph structure is valid; intuitively, it seems like it might make the learning problem impossible in some cases.\n\nDeeper analysis of the (more interesting, I think) layer-independent regime would be nice. (As a side-note, the name \"layer-independent\" for this regime is a bit confusing, as the edges dropped out *do* depend on the layer here, whereas in the \"layer dependent\" regime, edges dropped out do *not* depend on the layer).\", \"comments\": \"Figure 1 could probably be re-organized to better highlight the comparison between GCNs with and without DropEdge; consolidating the content into 2 figures instead of 4 might be more easily parsable. Adding figure-specific captions and defining the x axis would also be nice.\n\nUse \"reduce\" in place of \"retard\"\np2 \" With contending the scalability\" improve phrasing\np2 \"By recent,\" -> \"Recently,\"\np2 \"difficulty on\" -> \"difficulty in\"\np2 \" deep networks lying\" -> \"deep networks lies\"\np3 \"which is a generation of the conclusion\" improve phrasing\np3 \" disconnected between\" -> \"disconnected from\"\np4 \"adjacent matrix\" -> \"adjacency matrix\"\np4 \"severer \" -> \"more severe\"\np5 \"but has no help\" -> \"but is no help\"\np5 \"no touch to the adjacency matrix\" -> improve phrasing\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper studied the problem of \"deep\" GCNs, where the goal is to develop training methods that can make GCNs become deeper while maintaining good test accuracy. 
The authors proposed a new method called \"DropEdge\", where they randomly drop out edges of the input graphs, and demonstrate in experiments that this technique can indeed boost the testing accuracy of deep GCNs compared to other baselines.\n\nThis paper is clearly well-written and the authors conducted a comprehensive study on deep GCNs. I also like the discussion in Sec 4.3, where the authors explicitly clarify the differences between DropEdge, Dropout and DropNode, as the other two are methods that will pop up while reading this paper. The extensive experimental results also show that for deeper GCNs, DropEdge always wins over other baselines (see Tab 1), although most of the gains are marginal except with the GraphSAGE backbone on Citeseer. Can you explain why this is the case? Why do other backbones seem to have similar performance even with DropEdge (i.e., most of the accuracy increases are less than 3%)?\", \"question\": \"1. When looking at Tab 1, it looks like most of the time, 2-layer networks are already the best (or close to the best) and are clearly better than 32 layers. Therefore, this makes me wonder: why do we need deeper networks at all if shallow networks can already achieve good (almost the best) performance while also being much simpler and more efficient in training? Can you please clarify why we care to train a deeper network at all under this scenario? Are there any reasons we would want to use deeper networks as opposed to shallower ones?\n\n2. 
This sentence is less clear to me: \"DropEdge either retards the convergence speed of over-smoothing or relieves the information loss caused by it\".\n\nOverall, I think this paper presents an interesting study on making deeper GCNs comparable to shallow networks in performance, but since the boosted performance doesn't really outperform most of the 2-layer networks, I would like to hear the justification for why we need deeper networks for this node classification task.\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The authors propose a simple and interesting strategy, DropEdge, to alleviate over-fitting and over-smoothing in GCNs. The logic is simple and clear and the paper is well-written.\", \"major_concerns\": \"1. After randomly forcing a certain rate of edges to zero, how do you preserve properties of the original complex network, such as the degree power-law distribution and communities? If it is not necessary to preserve these properties, then what information should be preserved from the original graph?\n2. Randomly dropping edges may result in disconnected components; how are disconnected components handled?\n3. Why do the authors use the dimension difference as the measure to quantitatively evaluate information loss in Thm 1? More dimension reduction does not mean more information loss.\n4. As a follow-up to C1, graph sparsification makes more sense than DropEdge because it has clear information-preservation targets, while there is no target for the randomness in DropEdge.\n5. In Table 1 and Fig 2, why are the improvements for more layers bigger than those for fewer layers?\n6. 
In Fig 2, why is the trend on the Reddit dataset so different from the others (the more layers, the more improvement from applying DropEdge)? \n7. In Table 2, why are there DropEdge versions for some methods but not for others (e.g., FastGCN, ASGCN)? Why is there no result for GAT?\", \"minor\": \"1. Sec 3, \"notation\", \"\\mathbf{x}_n\" -> \"\\mathbf{x}_N\"\n2. Eq (1), \"\\mathbf{h}_n^{(l+1)}\" -> \"\\mathbf{x}_N^{(l+1)}\"\n3. What's C_l in the explanation under Eq(1)?\"}", "{\"comment\": \"Hi All,\n We updated our code to support the semi-supervised setting of node classification. All semi-supervised classification results of Cora, Citeseer and Pubmed are available on GitHub. \n \n Please check them out if you are interested in our work.\", \"link\": \"https://github.com/DropEdge/DropEdge\n\nThanks!\nAuthors\", \"title\": \"The results of semi-supervised node classification.\"}", "{\"comment\": \"We appreciate your interest in our work. The paper [1] you raise offers a different way to measure over-smoothing and different approaches to relieve it. It is interesting work. Here, we would like to take it as a chance to discuss the differences between [1] and our paper:\n\n(1)\tThe different understandings of over-smoothing. While [1] considers over-smoothing as the issue that node representations become identical and indistinguishable as network depth increases, this paper follows previous studies [2,3] that prefer to define it as the convergence of node representations to a stationary distribution (or a subspace, as proved by [3]). The latter understanding admits that the convergent representations of different nodes can differ, but they are only topology-aware and independent of the initial input. This is consistent with random walk theory. 
Guided by this, we proposed DropEdge, a novel method to slow down the speed of convergence; as shown by Theorem 1, we have proved that dropping any edge (not just the inter-class ones) is able to retard the speed of over-smoothing or relieve the information loss caused by it.\n\n(2)\tDropEdge is simple yet effective. We agree that it could become more sophisticated if we applied certain heuristics other than randomness to determine which edges to delete (e.g. the inter-class edges). However, this would inevitably involve more complexity, and is impractical when the node labels are unknown (such as in unsupervised learning). By contrast, DropEdge is efficient and applicable to broader cases. More importantly, as shown by our experiments, dropping edges at random is sufficient to enhance the performance of a variety of both shallow and deep GCNs.\n\n(3)\tDropEdge can prevent over-fitting as well; in this sense, it is more like Dropout. As already presented in our paper, DropEdge can be considered as a data augmentation technique. With DropEdge, we are actually generating different random deformed copies of the original graph; as such, we augment the randomness and the diversity of the input data, and are thus better able to prevent over-fitting. 
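The convergence view in point (1) is easy to visualize numerically. The toy sketch below (a hypothetical example with made-up graph sizes and function names, not the paper's code) propagates random features through the renormalized adjacency and watches them collapse toward the input-independent stationary direction $D^{1/2}\mathbf{1}$:

```python
import numpy as np

rng = np.random.default_rng(1)

n = 30
# Ring backbone keeps the toy graph connected; a few random chords are added.
adj = np.zeros((n, n))
for i in range(n):
    adj[i, (i + 1) % n] = adj[(i + 1) % n, i] = 1.0
chords = np.triu((rng.random((n, n)) < 0.1).astype(float), k=1)
adj = np.clip(adj + chords + chords.T, 0.0, 1.0)

# GCN renormalization: D^{-1/2} (A + I) D^{-1/2}.
a_hat = adj + np.eye(n)
d = a_hat.sum(axis=1)
a_norm = np.diag(d ** -0.5) @ a_hat @ np.diag(d ** -0.5)

# Dominant eigenvector of a_norm is s = D^{1/2} 1: topology-aware and
# independent of the input features -- the stationary point of propagation.
s = np.sqrt(d)

def dist_to_stationary(h):
    """Frobenius distance from h to the rank-1 subspace spanned by s."""
    proj = np.outer(s, s @ h) / (s @ s)
    return np.linalg.norm(h - proj)

h = rng.normal(size=(n, 8))
d_start = dist_to_stationary(h)
for _ in range(50):
    h = a_norm @ h    # pure propagation: no weights, no nonlinearity
d_end = dist_to_stationary(h)
# d_end is far smaller than d_start: the features have largely converged
# to the input-independent stationary direction.
```

Dropping edges reshapes the spectrum of `a_norm`; the claim in point (1), via Theorem 1 of the paper, is that this slows exactly this kind of convergence, though verifying the rate is beyond a toy sketch.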
\\n\\n[1] Deli Chen, Yankai Lin, Wei Li, Peng Li, JieZhou, Xu Sun: Measuring and Relieving the Over-smoothing Problem for Graph Neural Networks from the Topological View.\\n[2] Johannes Klicpera, Aleksandar Bojchevski, Stephan G\\u00fcnnemann: Predict then Propagate: Graph Neural Networks meet Personalized PageRank.\\n[3] Kenta Oono, Taiji Suzuki: Graph Neural Networks Exponentially Lose Expressive Power for Node Classification\", \"title\": \"Dropping Edges by random is simple yet effective, which can prevent overfitting as well.\"}", "{\"comment\": \"Cannot agree with you at all , but will stop as suggested.\", \"title\": \"Cannot agree with your point of view , but will stop as suggested.\"}", "{\"comment\": \"I would appreciate if you stop offending the authors and stick to academic matters in all further discussions including this one.\\n\\nHaving different splits or settings for the same dataset is not a \\\"dirty\\\" trick, as long as the split is clearly specified, especially given that the authors have followed several previous publications that had used the same splits.\", \"title\": \"Please stop\"}", "{\"comment\": \"Thank you for your work.\\n\\n(1) The proposed DropEdge randomly removes all edges from the input graph, but in our recent study[1], we have proven that it is the intra-class edges that make GNN models work on the node classification task, which play an important role and should not be removed. We also prove[1] that the over-smoothing issue is caused by the over-mixing of information and noise, which is partly caused by the inter-class edges. So it is the inter-class edges instead of all edges that should be randomly removed.\\n\\n(2) [2] performed a theoretical analysis on GCN, and conclude that performing smoothing operation on node representations is the key mechanism why GCN work. [1] proposed that smoothing is inevitable for various GNN models. 
So we think the measurement for over-smoothing should pay more attention to the difference of inter-class nodes' representations instead of all nodes' representations. A solution is also given in [1].\\n\\n[1]Deli Chen, Yankai Lin, Wei Li, Peng Li, JieZhou, Xu Sun: Measuring and Relieving the Over-smoothing Problem for Graph Neural Networks from the Topological View (https://arxiv.org/abs/1909.03211)\\n[2]Li, Q.; Han, Z.; and Wu, X.-M. 2018. Deeper Insights into Graph Convolutional Networks for Semi-supervised Learning.\", \"title\": \"Not All Edges Should be Removed\"}", "{\"comment\": \"Based on your provided results, do you still think your methods can outperform the existing approaches [1]?\\n\\nThanks again for providing your results based on the regular semisupervised setting. Please don't mis-interpret my words. I don't care about competition, I only care about fairness. \\n\\nI can also write a paper, and increase the scores to more than 99% by changing the train/test ratios as what you do. Do you think such papers make sense or not for you? It is also not healthy for the community development, if everyone acts as dirty as your team.\\n\\nNot sure who in your author team decides to play \\\"dirty\\\". It really makes other researchers feel sad and frustrated. If the bad idea is from the young people, it is still suggested to show your respect for the other researchers' efforts, since you still have a long academic journey. Give up your \\\"dirty trick\\\" and you will receive the respect from the community as well. This is the \\\"constructive\\\" suggestion from me.\\n\\nIf you still insist and argue your dirty tricks are correct, I will keep fighting against such misconducts with you. I don't really know you authors nor your institute, but your team has a really bad impression for me.\\n\\nPS. This is my last post. We will stop here.\\n\\n\\n\\n[1] https://paperswithcode.com/task/node-classification\", \"title\": \"Give up your dirty trick. 
Show your respect for other researchers' efforts, then you will receive respect from them as a reward.\"}", "{\"comment\": \"(1)\tFirst of all, we believe that the ICLR open review here is a platform that only accepts constructive and valuable discussions on each submitted paper. If you respect this and are really interested in our paper, please remove your offensive words like \u201ccheat\u201d, \u201cindiscriminately change the settings\u201d, \u201cmisconduct\u201d, \u201ckeep fighting\u201d, \u201cthe results are too good to be true\u201d.\n\n(2)\tYour accusations about our experimental setting are rude and not reasonable by any means. \n\n(2.1) First, we have stated our settings clearly at the beginning of our experiments. We have no way to \u201ccheat\u201d anyone. The full-supervised setting was originally introduced by FastGCN (not AS-GCN). Our using the same setting as FastGCN here is due to our concern that both Cora and Citeseer are too small for benchmarking, and it could incur bias if we keep using only part of the labelled data. \n\n(2.2) Second, all compared methods in our experiments are conducted in the same setting. It is an apple-to-apple comparison. We have never contrasted our numbers against those of semi-supervised methods, and of course we never meant to. Your comment claiming our comparisons are unfair is itself unfair.\n\n(2.3) Different from the other three datasets, the Reddit dataset proposed by GraphSAGE is used under full supervision, which is consistent with our paper.\n\n(2.4) Finally, research is not just about competition. We believe that professional researchers in the community will respect a paper for its novelty, technicality and interestingness, not just because it can beat all methods under one particular setting. \n \n(3)\tStill, we are willing to provide the results under the semi-supervised setting on Cora, Citeseer and Pubmed. 
We obtained the following results on 2-layer GCN:\n\n| | original | no-dropedge (ours) | dropedge (ours) |\n|---|---|---|---|\n| Cora | 81.5 | 81.1 | 82.8 |\n| Citeseer | 70.3 | 70.8 | 72.3 |\n| Pubmed | 79.0 | 79.0 | 79.6 |\n\nThe results in the first column are from the original GCN paper, and those of the last two columns correspond to the GCNs without and with DropEdge, respectively. Note that the results without DropEdge are comparable to those in the GCN paper, demonstrating the reliability of our experiments; adding DropEdge consistently promotes the performance on all three datasets. We will add more results on other backbones if necessary.\n\nOverall, we sincerely hope you will show more respect to our work, and continue an encouraging discussion on the motivation, formulation and interestingness of our method. If you keep using an offensive tone, we will refuse to respond to any of your questions.\", \"title\": \"We only accept constructive and valuable discussions, not offensive and impolite comments.\"}", "{\"comment\": \"Hello Huaxin,\nThe link wrongly includes the last dot. Please remove it to reach the correct website. The correct link is https://github.com/DropEdge/DropEdge\n\nBest,\", \"title\": \"Please remove the last dot\"}", "{\"comment\": \"As titled. Thanks!\", \"title\": \"Link not available?\"}", "{\"comment\": \"(1) First of all, thanks for showing up finally and providing your source code.\n\n\n(2) Can you please also provide the results on the regular semi-supervised setting (like [1], GCN, GAT)? \n\nCora, Citeseer and PubMed are all benchmark datasets, you cannot change the train/test ratio as you may want just to increase your scores. It is not serious research any more... 
\\n\\nNot clear about your model based on the normal settings, so we can see the TRUE improvement.\\n\\n\\n(3) Everyone is racing together to compete, you cannot take a rocket by breaking the rules only to increase your scores. \\n\\nDon't you think it is unfair for the other published and existing papers ? You break the record and get Number One. So what? You cheat and indiscriminately change the settings.\\n\\nIt doesn't make sense at all by using the one or two bad example papers to demonstrate your motivation is correct and reasonable. One of the papers [2] listed in your response is also from you authors, as the rule breaker. You break the rule before, and use your misconduct as an evidence to support you to break the rules again?\\n\\nYou get 91.7% on Pubmed by cheating and changing the train/val/test ratios ! Have you thought about the existing papers and researchers, who obey the rules and get accuracy below 80%, like [3]? How will they survive in the competition with you? I will keep fighting against such misconduct in research with you forever.\\n\\n\\n(4) According to Table 2, Your results only show your methods can improve GCN and GraphSage. How about the others you use in Table 2 ? Do you think if you can provide the results of the missing vanilla methods as follows?\\nResGCN\\nJKNet\\nIncepGCN\\n\\nas well as the results of your boosted DropEdge versions of several methods as follows?\\nFastGCN + EdgeDrop\\nASGCN + EdgeDrop\\n\\nIt will make the comparison more complete. \\n\\n\\n(5) Will check your source code and let you know if I have any problems in reproducing all the reported results.\\n\\n\\n[1] Prithviraj Sen, Galileo Namata, Mustafa Bilgic, Lise Getoor, Brian Galligher, and Tina Eliassi-Rad.\\nCollective classification in network data. AI magazine, 29(3):93, 2008.\\n[2] Adaptive Sampling Towards Fast Graph Representation Learning\\n[3] AdaGCN: Adaboosting Graph Convolutional Networks into Deep Models\", \"title\": \"It's unfair. 
Everyone is racing to compete, you cannot take a rocket to increase your scores via changing the train/val/test ratios by yourself... I will keep fighting against your misconduct in research forever!\"}", "{\"comment\": \"Hi William H Cohen,\n\nThanks for your interest in our work. Our code can be downloaded from https://github.com/DropEdge/DropEdge\", \"title\": \"The link of our source code\"}", "{\"comment\": \"Hi, Alex Williams.\n\nThanks for your interest in our work, and sorry for the delayed response. \n\n(1)\tPlease note that the training-testing division of the datasets in this paper is different from that in the original GCN paper. As already mentioned in the last sentence of the first paragraph in Section 5, we follow the setting in FastGCN [1] and AS-GCN [2] and use fully supervised data for training (while the original GCN uses the semi-supervised setting). This is why we obtained higher numbers for the same GCN model on Cora, Citeseer, and Pubmed. Our performance with DropEdge reaches 91.7% compared to 90.22% by GCN on Pubmed under the full-supervision setting, which is still reasonable. The main purpose of this paper is to demonstrate the impact of DropEdge on promoting deep GCNs (see Table 1 and Table 7), without a particular preference for either setting. \n\n(2)\tOur code is available at https://github.com/DropEdge/DropEdge . Feel free to check. If you have any questions about running it, please tell us.\n\n[1] Jie Chen, Tengfei Ma, and Cao Xiao. Fastgcn: Fast learning with graph convolutional networks via importance sampling. In Proceedings of the 6th International Conference on Learning Representations, 2018.\n[2] Wenbing Huang, Tong Zhang, Yu Rong, and Junzhou Huang. Adaptive sampling towards fast graph representation learning. In Advances in Neural Information Processing Systems, pp. 
4558\\u20134567, 2018.\", \"title\": \"The performance difference is due to the different experimental setting\"}", "{\"comment\": \"Dear Respective Authors,\\nIf it is possible, could you please release the code to verify the results? When the code could be released? It would be great to have some response from you.\\nBest regards, WCohen\", \"title\": \"Request for code!\"}", "{\"comment\": \"I changed it because apparently the word \\\"shit\\\" is a swear word, and offensive -- I recommend you read some of Jonathan Swift's works. I changed the word because it hurt someone's sentiments, I have no expectation that you edit/modify your comment.\\n\\nAnd, for the sake of the authors, I'd like to clarify that I am in no way affiliated with the authors. This is a fake id, just like yours. I will end this here, and not disrespect the author's efforts anymore. I hope the community takes notice of your attempts at disgracefully attacking this paper with malicious intent. \\n\\nPS. That aside, I think it's great to have authors release source code but I think there's better ways to asking for the same. \\n\\nPPS. Why don't you post a link to your webpage, and I'll take off my \\\"cowardly mask\\\"?\", \"title\": \"Please share your webpage/official email, and I will reach out to your personally and discuss this offline.\"}", "{\"comment\": \"If the authors can release the code and clarify my concerns in the very beginning, I will be very happy to revise and even delete my post. The authors didn't show up, but you come as a trouble maker. Be serious and don't make things worse... If you are the author of this paper, you already mess things up.\\n\\nI'm sure you know what you are doing, please show your respect to this paper and my questions. I like and also respect this paper, so I take the time to clarify my concerns over again to the authors and to YOU. Really looking forward to seeing the source code.\\n\\nPS. 
I see you delete your curse words, and then you come back to impeach me about my tone with your double standard ?... Huh, you are so funny. There is no respect for you (a coward behind mask) anymore, this will be my last post... \\n\\nPSS. Will not change my post as you may wish. I don't know who you are, but you also need to be responsible for your posted curse words (even though you delete them), rudeness and your double standard.\", \"title\": \"Huh, double standard. Why you delete your curse words ? Looking forward to seeing the source code\"}", "{\"comment\": \"I get a notification whenever anyone responds to my comments. As simple as that :)\", \"title\": \"Notifications\"}", "{\"comment\": \"Academic Curiosity/Constructive Feedback: You could pose it as a question and mention that you've noticed different accuracies, and ask the authors if they have a reasonable explanation for this. I could draft an example response for you.\\n\\nAs a mature researcher, I am sure you understand that tone matters, and having reviewed in multiple venues, it is often an instruction to maintain a positive tone and not be condescending. If the primary author on this is a early graduate student, even if he accidentally evaluated his numbers on a wrong dataset or on the train set -- attacks such as yours will make him question his desire to participate in science. You could also politely ask for source code as opposed to being vicious in your criticism. \\n\\nWords such as \\\"Untrustworthy Experimental Results\\\" come across as an attempt to impeach the paper/Bias Reviewers. \\n\\nPS. The internet doesn't guide me to a Mr. Alex Williams who was a research scientist at MIT. Can you share your webpage/work email, so we can continue this discussion offline?\", \"title\": \"Biasing the reviewers?\"}", "{\"comment\": \"PS. You make things more interesting now. 
I'm much more eager to see the source code of this paper after reading your response.\\n\\nI didn't expect the fast response from you actually. You can reply faster than the authors, huh... Hope the authors will thank you for your great \\u201chelp\\u201d.\", \"title\": \"Looking forward to seeing the source code\"}", "{\"comment\": \"(1) If you also work on GNNs, you will understand what I mean in my comments. Node classification on Pubmed is extremely hard, and 80% accuracy is already very hard to achieve. In this paper, the authors achieve 91.7% accuracy. Don't you feel it is interesting and wanna take a look into their model source code?\\n\\n(2) I don't want to impeach this paper by mistake, so I suggest the authors to release the source code out and the community can check their performance. Is it wrong? What do you mean by \\\"constructive\\\"? They publish a paper with ridiculously high and inconsistent scores, and we want to check their correctness, isn't it constructive in your mind?\\n\\n(3) Alex Williams is my true name and I also show my real workplace. Please stop the personal attack. Try to defend the paper by showing me my questions are wrong and the scores in this paper are correct. By the way, may I kindly ask \\\"What is your name, Mr. Not Alex Williams ?\\\" You haven't taken your mask off yet. If you are not the author, \\\"How can you respond to my comments to this paper so quickly without system notifications?\\\" \\n\\n(4) Anyway, let's focus on this paper. Really don't wanna continue the meaningless squabbles with you like kids\", \"title\": \"Thanks for your participation as a non-author of this paper\"}", "{\"comment\": \"Sorry about the \\\"swear\\\" word. I have nothing to do with this paper, and it is quite funny how you assume that the authors would actually go to such lengths to do this. The authors posting would display as an \\\"Official paper comment\\\" or something along those lines?\\n\\nPS. 
I am not being critical of your comment, but rather of the fact that your feedback is not constructive and you choose to be extremely critical of a work under the veil of a fake account. Please show us YOUR REAL NAME, and YOUR REAL AFFILIATION, and maybe also include a link to your website and some of your own papers in your profile. \\n\\nPPS. If I remember correctly, your profile stated you were still at MIT earlier and now it says you left in 2016. I think the organizers had an agenda in mind when they stopped anonymous comments -- to stop anonymous comments! Commenting from a fake-id doesn't really help the organizers accomplish what they intended to.\", \"title\": \"Not an author of this paper!\"}", "{\"comment\": \"Not sure if I should reply to your rude and mean reply or not. To be polite, I think I still want to.\\n\\n(1) I was a research scientist at MIT. You don't know me, it is fine but it doesn't mean my comments and questions on your paper are not important. It is really a shame for you to do this kind of attack instead of defending your paper with solid results and with your source code.\\n\\n(2) I'm discussing the problems with this paper, so please respond to my academic questions directly. If you think your paper has no problem, please release your source code. Otherwise, I don't think the community can trust the results reported in your paper.\\n\\n(3) Please don't use swearing words like \\\"shit post\\\" anymore, it is a shame to do such personal attack in your response. Everyone in the community can see your post and your response.\\n\\n(4) Looking forward to seeing your code, you can send the link of your code in the response to this post. I think the community will be interested and happy to check it together. The datasets are benchmark datasets, you don't need to share them and we can download them from the web to ensure you didn't change the data by yourself.\\n\\nPS. We all want to see how you achieve 91.7% accuracy on Pubmed. 
Frankly speaking, if your method can achieve such a big improvement, the community and I will all be happy to support your work.\\n\\nAlso if you are not the author of this paper, please show YOUR NAME and YOUR AFFILIATION. If you are the author of this paper, then it is a shame for you to create this non-existing account to defend your paper. It is so ridiculous and not funny at all... You are not very serious about your results, your submission, your response and your manner.\", \"title\": \"Academic open discussion, no swearing words please, looking forward to your source code\"}", "{\"comment\": \"Hi Alex Williams,\\n\\nI ran a quick check through the MIT directory and found no person named \\\"Alex Williams\\\". Could you please update your institute and other personal information to be more accurate before you troll papers? This isn't the only paper, I've seen innumerable trolls with fake accounts. \\n\\nI am not sure if the organizers did us authors a favor by disabling anon comments -- now people just make fake ids like this, and troll people.\", \"title\": \"Sorry, what? You want us to not trust these results based on comments from your fake id?\"}", "{\"comment\": \"The results reported in this paper have big problems, especially for the results on the Pubmed dataset. It is suggested that the authors release the source code for the community to check the results. Their results are too good to be true.\\n\\n(1) Inconsistent with GCN raw model\\n\\nAccording to Table 1 in the paper, the reported results for GCN (i.e., original GCN) are inconsistent with the original GCN paper. For instance, in [1], the accuracy of GCN on Citeseer, Cora, and Pubmed are\\n70.3% 81.5% 79.0%\\n\\nHowever, according to this submission, in Table, the accuracy rate of the original GCN on these three datasets are\\n79.34% 86.64% 90.22%\\n\\nThese results are all inconsistent and have big problems, especially for the results on the Pubmed dataset. 
If the authors really have run the models on the Pubmed dataset, it is extremely hard to achieve an accuracy score higher than 80%. Compared with all the works on the Pubmed dataset according to [2] and several other datasets [4], the scores reported in this paper are all over-exaggerated. The authors may need to clarify how they get 90.22% for GCN on Pubmed and such high scores on the other datasets.\\n\\n(2) Inconsistent with their arXiv version\\n\\nThe authors also released a version at arXiv (submitted on Sept 9, 2019), and the results reported in this paper are also inconsistent with their arXiv version [3]. All the scores in this paper are much higher than their latest arXiv version. The authors may also need to clarify this.\\n\\n(3) Surprisingly GOOD performance\\n\\nWhat's more, for their own method, their accuracy on Pubmed is 91.7%. To be honest, the results are too good to be true. I cannot really trust these scores. The authors do need to clarify how they get such scores and also release the source code for the community to check the reported results.\\n\\n\\n[1] SEMI-SUPERVISED CLASSIFICATION WITH GRAPH CONVOLUTIONAL NETWORKS\\n[2] https://paperswithcode.com/sota/node-classification-on-pubmed\\n[3] https://arxiv.org/pdf/1907.10903.pdf\\n[4] https://paperswithcode.com/task/node-classification\", \"title\": \"Questions with the Experimental Results (Too Good To Be True)\"}" ] }
ByeAK1BKPB
Projected Canonical Decomposition for Knowledge Base Completion
[ "Timothée Lacroix", "Guillaume Obozinski", "Joan Bruna", "Nicolas Usunier" ]
The leading approaches to tensor completion and link prediction are based on the canonical polyadic (CP) decomposition of tensors. While these approaches were originally motivated by low rank approximations, the best performances are usually obtained for ranks as high as permitted by computation constraints. For large scale factorization problems where the factor dimensions have to be kept small, the performances of these approaches tend to drop drastically. The other main tensor factorization model, Tucker decomposition, is more flexible than CP for fixed factor dimensions, so we expect Tucker-based approaches to yield better performance under strong constraints on the number of parameters. However, as we show in this paper through experiments on standard benchmarks of link prediction in knowledge bases, ComplEx, a variant of CP, achieves similar performances to recent approaches based on Tucker decomposition on all operating points in terms of number of parameters. In a control experiment, we show that one problem in the practical application of Tucker decomposition to large-scale tensor completion comes from the adaptive optimization algorithms based on diagonal rescaling, such as Adagrad. We present a new algorithm for a constrained version of Tucker which implicitly applies Adagrad to a CP-based model with an additional projection of the embeddings onto a fixed lower dimensional subspace. The resulting Tucker-style extension of ComplEx obtains similar best performances as ComplEx, with substantial gains on some datasets under constraints on the number of parameters.
[ "knowledge base completion", "adagrad" ]
Reject
https://openreview.net/pdf?id=ByeAK1BKPB
https://openreview.net/forum?id=ByeAK1BKPB
ICLR.cc/2020/Conference
2020
{ "note_id": [ "U62l4vJvvv", "BygaD5i_jS", "rJxI8qs_oH", "SklWEco_ir", "SygDbciOoH", "SJeYwE_VqB", "rkl0nrbb5S", "B1gTZuqaKB" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798734317, 1573595749427, 1573595725746, 1573595688721, 1573595646985, 1572271200578, 1572046261846, 1571821572575 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1860/Authors" ], [ "ICLR.cc/2020/Conference/Paper1860/Authors" ], [ "ICLR.cc/2020/Conference/Paper1860/Authors" ], [ "ICLR.cc/2020/Conference/Paper1860/Authors" ], [ "ICLR.cc/2020/Conference/Paper1860/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper1860/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper1860/AnonReviewer2" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"The paper proposes a tensor decomposition method that interpolates between Tucker and CP decompositions. The authors also propose an optimization algorithm (AdaImp) and argue that it has superior performance against AdaGrad in this tensor decomposition task. The approach is evaluated on some NLP tasks.\\nThe reviewers raised some concerns related to clarity, novelty, and strength of experiments. As part of addressing reviewers' concerns, the authors reported their own results on MurP and Tucker (instead of quoting results from reference papers). While the reviewers greatly appreciated these experiments as well as the authors' response to their questions and feedback, the concerns largely remained unresolved. In particular, R2 found the gain achieved by AdaImp not significantly large compared to Adagrad. In addition, R2 found very limited evaluation on how AdaImp outperforms Adagrad (thus little evidence to support that claim). 
Finally, AdaImp lacks any theoretical analysis (unlike Adagrad).\", \"title\": \"Paper Decision\"}", "{\"title\": \"detailed comments\", \"comment\": \"(A) The idea of combining CP and Tucker is not new. For example, Tomioka et al. (2010; Section 3.4) considered the Tucker-CP patterns (CP decomposition of the Tucker core). Although they used the Tucker-CP model to improve the interpretability rather than link prediction, the paper needs to make some attribution to the prior work.\\n\\u2192 Indeed, the idea of combining CP and Tucker is far from new (we cite CANDELINC from 1980 and a method from Bro & Andersson from 1998). The interest of this paper is the method for optimization, which differs from all of this prior work (and the work from Tomioka, Hayashi & Kashima) due to the tasks and scales considered. On these datasets, the loss is no longer Frobenius and the use of adaptive stochastic methods is critical to obtain state-of-the-art results. We show that adaptive algorithms are crucial to learn these decompositions in this context and that the diagonal approximation made in practical implementations of these algorithms is too crude to learn the Tucker decomposition.\\n\\n(B) By looking at Figure 3, the proposed method, PComplEx, is not significantly better than the existing methods such as ComplEx. Except for SVO data, PComplEx and ComplEx share almost the same performance curve. Also, other existing methods such as TuckER and MurP are evaluated only at a few points while (P)ComplEx is evaluated at many points. I feel this is unfair.\\n\\u2192 Regarding the evaluation of other methods, please see the general comment. For the gain in performance: note that we provide curves whereas the standard in the field is tables for a fixed number of parameters. The maximal gains we observe for a fixed number of parameters are substantial on other datasets: +0.14 MRR (absolute) on WN18 and +0.05 MRR on YAGO. 
For SVO, our method provides better performances on all operating points compared to ComplEx, and by a fair margin.\"}", "{\"title\": \"detailed comments\", \"comment\": \"The authors present the problem as completion of a binary 3-order tensor, i.e. predicting for triplets (subject, predicate, ?) if '?' refers to 0 or 1. But they also write 'we formulate this problem as a multi-class classification problem, where the classes are the entities of the knowledge base' - so this is not a binary problem? Does this mean there is some structure that must be present in the tensor (e.g. there is exactly one '1' in each column of length N)? This should be clarified.\\n\\u2192 Despite the ground truth tensor being binary, the evaluation of choice in this field is done by ranking. Hence, the estimate we learn is a tensor of scores for each triple (subject, predicate, object). This does not assume any particular structure on the columns (mode-3 fibers in our case). We use the cross-entropy as a surrogate for the ranking loss: if there are several ones in a fiber of the ground truth tensor, our model should learn a uniform distribution over these objects.\\n\\nIt would be good to make the description of Algorithms 1 and 2 more precise and detailed. For example, the operation/algorithm AdaGrad(\\\\eta;w_k; g_k;G_k) is not defined. AdaGrad is described in the Appendix but it is hard to match it to get the precise operation used in Algorithm 1. Algorithm 1 shows one step of PComplEx, and it would be good to add the entire PComplEx algorithm, with input, output & parameters. \\n\\u2192 We added the full algorithm in the supplementary materials (Appendix 9.6).\\n\\nThe authors present their method in the context of knowledge base completion, thus for tensors of order 3, but it is not clear if any of the components they proposed are indeed specialized for this problem, or if it is a contribution to general tensor decomposition. 
Some remarks regarding the (in?)applicability of the method more generally would be helpful.\\n\\u2192 No component of ADA^imp is specialized to tensors of order 3, and it could be readily re-used for tensors of higher order. We present it here for order 3 due to the application we target, for which adaptive algorithms (Adagrad / Adam) seem to be critical.\\n\\nFigure 3 describing the experimental results should be explained better. There are a few methods shown only in some of the graphs and only for some parameter values - why?\\n\\u2192 This issue is addressed in the general comment.\\n\\nThe complexity measure 'parameters-per-entity' should be clearly defined (I didn't find it in the text).\\n\\u2192 Parameters per entity are the total number of parameters divided by the total number of entities. Precise formulas for each method have been added in the supplementary (Appendix 9.11).\\n\\nSimilarly, the performance measures 'mean reciprocal rank' and 'hits at 5%' should be defined in terms of the tensor.\\n\\u2192 We added the precise definitions of these metrics in the supplementary materials (Appendix 9.11).\\n\\nThe authors should also add running times of the different experiments and methods.\\n\\u2192 Running times as well as a convergence curve have been added in the supplementary materials (Appendix 9.12).\", \"minor\": \"In the main paper, the authors define an (N,L,N) tensor, but in the appendix Section 9.9 they list N and P. Does P refer to L here?\\n\\u2192 Yes, sorry. 
This is fixed in the revision.\\n\\nThe authors mention a few times usage of 'deep-learning techniques' - but I believe that in at least some of the contexts, they refer to optimization methods which are typically used in deep learning, and are applied here to train other models presented in the text, and not to the usage of actual deep learning architectures - this is confusing and should be clarified.\\n\\u2192 Deep learning techniques here refer specifically to dropout, batch-normalization and learning rate annealing. This is clarified in the revision.\\n\\nPage 7, top: what are the matrices M^(1), M^(2), M^(3)? They seem to be different for different decompositions.\\n\\u2192 Indeed. M^(1) is UP_1 for PCP, but U for CP or UPi_1 for PCP_full.\\nSince all these methods compute their final score in a CP fashion, we study the gradient with respect to the CP \\\"factors\\\", which are computed differently for different methods.\"}", "{\"title\": \"detailed comments\", \"comment\": \"1. Tucker decomposition results in lower dimension factors, \\\"d\\\" in the paper. So the resulting core tensor is of size (d \\\\times d \\\\times d). However, this core tensor is further decomposed with a rank-D CP as shown in Section 3, where D >= d. Basically, first the original tensor is factored into lower rank d, and the core tensor is then expanded into rank D >= d. The reader does not understand what the justification for this approach is. Please provide further explanation on this part.\\n\\n\\u2192 We start from a tensor of size n x n x p. A Tucker decomposition of rank d leads to:\\nd x (n + n + p) parameters for the factors and d x d x d parameters for the core tensor.\\nIn order to link this decomposition with the CP decomposition, which is easier to optimize, we further decompose this core tensor with a CP decomposition of rank D. 
Thus, d x d x d parameters become d x (D + D + D) (which is smaller than d x d x d as long as D < d^2/3).\\nWe allow D > d because a tensor of shape d x d x d can have a CP rank as high as d^2. \\n\\n\\n2. The confusion of P_2 and P_3 terms in the paper. At the beginning of Section 3, P_2 is assumed to be identity throughout the paper. But P_2 is mentioned to have specific attributes in other parts of the paper, such as in the second paragraph from the bottom of page 4, the first paragraph and first equation on page 5. And P_2 does not appear in the AdaGrad algorithm.\\n\\u2192 There is indeed a confusion between P_2 and P_3 in the paper; we thank the reviewer for pointing this out. Since P_2 is assumed to be the identity, it should not appear in the paper outside of the definition of CPT (beginning of Section 3). All further occurrences of P_2 are typos and have been fixed in the revision.\\n\\n3. The experiment is lacking. First, the paper does not explain the meaning of evaluation metrics. Second, the authors do not provide an insight into why PComplEx is better than the ComplEx baseline on the SVO dataset, but performs similarly on other datasets. Which factors lead to such improvement?\\n\\u2192 Regarding evaluation metrics, we have added the definitions of the mean reciprocal rank and hits@5% in Appendix 9.11. We attribute the difference in performance on SVO to a difference in the underlying structure of the data that makes Tucker decomposition particularly suited. Similarly to MurP being better on WN18RR than on FB237, it is possible that SVO is a dataset that is more amenable to a Tucker decomposition. \\n\\n4. The comparison to other state-of-the-arts is inadequate; each compared method only has one or a few configurations in terms of number of parameters. \\n\\u2192 We performed new experiments. Please see the general comments.\"}", "{\"title\": \"Additional experiments for Tucker and MurP\", \"comment\": \"We thank all reviewers for their comments. 
We address more general issues here, and answer more particular points in separate comments. One of the main criticisms of the reviewers is that methods from the state-of-the-art other than ComplEx are not evaluated on all operating points in terms of rank.\\nWe initially reported, for each algorithm, the performances reported by the authors of each method. Considering the reviewers' concerns, we have re-run MurP [1] and TuckEr [2] (we chose those because their code is publicly available, and they were close in performances to our methods). We updated Figure 3 and Appendix 9.10 by adding a complete rank profile on WN18RR, FB237 for TuckEr and MurP and also on WN18 and FB15K (*) for TuckEr (see Appendix 9.11 for a detailed description of the experimental protocol). With more operating points for these algorithms:\\n\\n* It is confirmed that TuckEr performs essentially similarly to PComplEx on most operating points for WN18RR and FB237. The small differences in performance can most likely be explained by the difference in loss / label smoothing used in the two set-ups. However, our model is much simpler to tune, as it only has one regularization parameter, and its optimization procedure is well understood as shown in our work. TuckEr underperforms on WN18 and FB15k.\\n* MurP performs better than PComplEx and TuckEr on some operating points of WN18RR but severely underperforms on FB237 as the dimensionality increases.\\n* Neither TuckEr nor MurP matches the performances of ComplEx for higher dimensionalities, in contrast to PComplEx which, by design, is equivalent to ComplEx for d=D. \\n\\nIn conclusion, PComplEx optimized with AdaImp (or AdamImp for WN datasets) has fewer hyperparameters, and has performances that do not deteriorate at high ranks, while matching TuckEr's performances at lower numbers of parameters per entity. It also leads to faster convergence, as shown in Appendix 9.12.\\n\\n[1] Ivana Balazevic, Carl Allen and Timothy Hospedales. 
Multi-relational poincar\\u00e9 graph embeddings. ArXiv 2019\\n[2] Ivana Balazevic, Carl Allen and Timothy Hospedales. TuckEr: Tensor factorization for knowledge graph completion. EMNLP 2019\\n\\n(*) We did not run TuckEr and MurP on SVO because this would require coding a new forward pass for the model and tuning all 6 hyperparameters of the method from scratch (because SVO\\u2019s task is to answer queries of the form (subject, ?, object)). We also did not run these models on YAGO since an epoch takes 335s with the available implementation, leading to 10h experiments for 100 epochs and making the tuning of all hyperparameters impractical. Nonetheless, we believe the current experiments are sufficient to support the conclusions in the paper.\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"* Summary:\\nThe paper introduces a novel tensor decomposition that is reminiscent of canonical decomposition (CP) with low-rank factors, based on the observation that the core tensor in Tucker decomposition can be decomposed, resulting in a model interpolating between CP and Tucker. The authors argue that a straight application of AdaGrad on this decomposition is inadequate, and propose the Ada^{imp} algorithm that enforces rotation invariance of the gradient update. The new decomposition is applied to the ComplEx model (called PComplEx) and demonstrates better performance than the baseline.\\n\\n* Comments:\\nAlthough the approach is well motivated, the paper has many ambiguities that need better clarification.\\n1. Tucker decomposition results in lower dimension factors, \\\"d\\\" in the paper. So the resulting core tensor is of size (d \\\\times d \\\\times d). 
However, this core tensor is further decomposed with a rank-D CP as shown in Section 3, where D >= d. Basically, first the original tensor is factored into lower rank d, and the core tensor is then expanded into rank D >= d. The reader does not understand what the justification for this approach is. Please provide further explanation on this part.\\n2. The confusion of P_2 and P_3 terms in the paper. At the beginning of Section 3, P_2 is assumed to be identity throughout the paper. But P_2 is mentioned to have specific attributes in other parts of the paper, such as in the second paragraph from the bottom of page 4, the first paragraph and first equation on page 5. And P_2 does not appear in the AdaGrad algorithm.\\n3. The experiment is lacking. First, the paper does not explain the meaning of evaluation metrics. Second, the authors do not provide an insight into why PComplEx is better than the ComplEx baseline on the SVO dataset, but performs similarly on other datasets. Which factors lead to such improvement?\\n4. The comparison to other state-of-the-arts is inadequate; each compared method only has one or a few configurations in terms of number of parameters.\\n\\nOverall the proposed decomposition method might make a significant contribution to research progress in this field, but the paper fails to convince the reader of its significance. I feel the paper should be overhauled.\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The authors present a new way of decomposing 3-order tensors which uses interpolation\\nbetween the Tucker and CP decompositions, called CPT. 
The main idea is to present the components of the CP model\\nwith an additional low-rank structure.\\nThe authors also provide a new optimization algorithm called ADA-imp, for learning this decomposition,\\nwhich is a variant of Adagrad adapted to their settings. \\nThe paper is overall interesting, clearly written and well-motivated. \\nThe mathematical derivations are, as far as I could follow, correct and non-trivial. (I did not read all the details in the Appendix.) \\nThe authors also show favorable experimental results on two knowledge-base datasets, with an improved loss vs. #parameters tradeoff.\\nA few unclear issues and suggestions for improvements are below.\\n\\nThe authors present the problem as completion of a binary 3-order tensor, i.e. predicting for triplets (subject, predicate, ?) if '?' refers to 0 or 1.\\nBut they also write 'we formulate this problem as a multi-class classification problem, where the classes are the entities of the knowledge base' - so this is not a binary problem? Does this mean there is some structure that must be present in the tensor (e.g. there is exactly one '1' in each column of length N)? This should be clarified. \\n\\nIt would be good to make the description of Algorithms 1 and 2 more precise and detailed. \\nFor example, the operation/algorithm AdaGrad(\\\\eta;w_k; g_k;G_k) is not defined. AdaGrad is described in the Appendix but it is hard to match it to get the precise operation used in Algorithm 1. \\nAlgorithm 1 shows one step of PComplEx, and it would be good to add the entire PComplEx algorithm, with input, output & parameters. \\n\\nThe authors present their method in the context of knowledge base completion, thus for tensors of order 3, but it is not clear if any of the components they proposed are indeed specialized for this problem, or if it is a contribution to general tensor decomposition. Some remarks regarding the (in?)applicability of the method more generally would be helpful. 
\\n\\nFigure 3 describing the experimental results should be explained better. There are a few methods shown only in some of the graphs and only for some parameter values - why?\\nThe complexity measure 'parameters-per-entity' should be clearly defined (I didn't find it in the text). Similarly, the performance measures 'mean reciprocal rank' and 'hits at 5%' \\nshould be defined in terms of the tensor. \\nThe authors should also add running times of the different experiments and methods.\", \"minor\": \"--------\\nIn the main paper, the authors define an (N,L,N) tensor, but in the appendix Section 9.9 they list N and P. Does P refer to L here? \\n\\nThe authors mention a few times usage of 'deep-learning techniques' - but I believe that in at least some of the contexts, they refer to optimization methods which are typically used in deep learning, and are applied here to train other models presented in the text, and not to the usage of actual deep learning architectures - this is confusing and should be clarified. \\n\\nPage 7, top: what are the matrices M^(1), M^(2), M^(3)? They seem to be different for different decompositions\"}
The main reasons are (A) the proposed model is not completely novel and (B) the empirical results are not significant. \\n\\n(A) The idea of combining CP and Tucker is not new. For example, Tomioka et al. (2010; Section 3.4) considered the Tucker-CP patterns (CP decomposition of the Tucker core). Although they used the Tucker-CP model to improve the interpretability rather than link prediction, the paper needs to make some attribution to the prior work. \\n\\n(B) By looking at Figure 3, the proposed method, PComplEx, is not significantly better than the existing methods such as ComplEx. Except for SVO data, PComplEx and ComplEx share almost the same performance curve. Also, other existing methods such as TuckER and MurP are evaluated only at a few points while (P)ComplEx is evaluated at many points. I feel this is unfair.\\n\\nTomioka, R., Hayashi, K., & Kashima, H. (2010). Estimation of low-rank tensors via convex optimization. arXiv preprint arXiv:1010.0789.\"}" ] }
Hke0K1HKwr
Sequential Latent Knowledge Selection for Knowledge-Grounded Dialogue
[ "Byeongchang Kim", "Jaewoo Ahn", "Gunhee Kim" ]
Knowledge-grounded dialogue is a task of generating an informative response based on both discourse context and external knowledge. As we focus on better modeling the knowledge selection in the multi-turn knowledge-grounded dialogue, we propose a sequential latent variable model as the first approach to this matter. The model named sequential knowledge transformer (SKT) can keep track of the prior and posterior distribution over knowledge; as a result, it can not only reduce the ambiguity caused from the diversity in knowledge selection of conversation but also better leverage the response information for proper choice of knowledge. Our experimental results show that the proposed model improves the knowledge selection accuracy and subsequently the performance of utterance generation. We achieve the new state-of-the-art performance on Wizard of Wikipedia (Dinan et al., 2019) as one of the most large-scale and challenging benchmarks. We further validate the effectiveness of our model over existing conversation methods in another knowledge-based dialogue Holl-E dataset (Moghe et al., 2018).
[ "dialogue", "knowledge", "language", "conversation" ]
Accept (Spotlight)
https://openreview.net/pdf?id=Hke0K1HKwr
https://openreview.net/forum?id=Hke0K1HKwr
ICLR.cc/2020/Conference
2020
{ "note_id": [ "hbKo5rF0Fw", "rJxe9OPnsr", "rJediVDhoS", "rkxOuxPhor", "Bkxtl24c5r", "ryxezvS0tB", "B1ehFnYiYr" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798734288, 1573841031578, 1573840031951, 1573838959973, 1572649968883, 1571866375848, 1571687556261 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1859/Authors" ], [ "ICLR.cc/2020/Conference/Paper1859/Authors" ], [ "ICLR.cc/2020/Conference/Paper1859/Authors" ], [ "ICLR.cc/2020/Conference/Paper1859/AnonReviewer4" ], [ "ICLR.cc/2020/Conference/Paper1859/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper1859/AnonReviewer2" ] ], "structured_content_str": [ "{\"decision\": \"Accept (Spotlight)\", \"comment\": \"This paper proposes a sequential latent variable model for the knowledge selection task for knowledge grounded dialogues. Experimental results demonstrate improvements over the previous SOTA in the WoW, knowledge grounded dialogue dataset, through both automated and human evaluation. All reviewers scored the paper highly, but they also made several suggestions for improving the presentation. Authors responded positively to all these suggestions and provided updated results and other stats. The paper will be a good contribution to ICLR.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Response to Reviewer 2\", \"comment\": \"We thank Reviewer 2 for positive and constructive reviews. Below, we respond to each comment in detail. Please see the blue fonts in the newly uploaded draft to check how our paper is updated.\\n\\n\\n1. 
Could you describe the updates from the previous sequential latent variable models more clearly?\\n\\nOur main contribution is to model knowledge selection in dialogue as a sequential latent variable model for the first time, and to validate that this leads to new state-of-the-art performance on two benchmark datasets. The use of sequential latent models correctly deals with the diverse nature of knowledge selection in a semi-supervised way and improves the interpretability of the flow of selected knowledge over other models. Methodologically, our model is similar to [1], although that work uses the latent variable to represent the underlying attention in seq2seq models for machine translation (unlike ours, which targets the knowledge-grounded chit-chat problem).\\n\\n[1] S. Shankar and S. Sarawagi. Posterior Attention Models for Sequence to Sequence Learning. ICLR 2019.\\n\\n\\n2. Ablation studies for the three advantages of the proposed method: (i) weakly-supervised inference with no labels, (ii) reduced scope of knowledge candidates, and (iii) better utilization of response information.\\n\\nWe here answer the reviewer\\u2019s second and fourth questions together.\\n\\nWe add experiments of our model with partial knowledge labels (including an experiment without the knowledge loss) on the Wizard of Wikipedia in Table 6 in Appendix D. Results show that, as expected, better performance is attained with more labeled knowledge data for training. Furthermore, our model achieves competitive performance with fewer labels. For instance, our model using only 1/4 of the labeled training data is comparable to E2E Transformer MemNet and is even better on Test Unseen.
As a result, our sequential latent knowledge selection model can be utilized in a semi-supervised manner without a severe drop in performance.\\n\\nDue to the many new experiments during the limited rebuttal period, we could not finish the ablation studies for the reduced scope and the utilization of response information; they will be presented in the final draft.\\n\\n\\n3. It would also be interesting to see more detailed aspects of knowledge selection itself in both quantitative and qualitative manners.\\n\\nWe add more quantitative and qualitative results of knowledge selection in Appendices C and G. In Appendix C, we measure the knowledge selection accuracy over turns. Our model consistently outperforms other models at all turns in knowledge selection accuracy. Notably, in all models, the accuracy drops significantly after the first turn (which is often an easily predictable topic definition sentence), which illustrates the diverse nature of knowledge selection. In Appendix G, we show selected examples of utterance prediction along with the selected knowledge. We will add more qualitative results (e.g., attention distributions) in the final version.\"}", "{\"title\": \"Response to Reviewer 1\", \"comment\": \"We thank Reviewer 1 for the positive and constructive review. Below, we respond to each comment in detail. Please see the blue fonts in the newly uploaded draft to check how our paper is updated.\\n\\n\\n1. Sanity check for BERT implementation & reason for the low performance of BERT\\n\\nFollowing your suggestion, we measure our model\\u2019s performance with gold knowledge. The table below shows that providing gold knowledge significantly improves our model\\u2019s performance.
This serves as a good sanity check for our implementation.\\n\\n\\t\\t\\t\\tTest Seen\\t\\t\\t\\tTest Unseen\\nMethod\\t\\t\\tPPL\\t\\tR-1\\t\\tR-2\\t\\tPPL\\t\\tR-1\\t\\tR-2\\nOurs\\t\\t\\t52.0\\t\\t19.3\\t\\t6.8\\t\\t81.4\\t\\t16.1\\t\\t4.2\\nOurs (w/ gold)\\t23.1\\t\\t34.2\\t\\t18.4\\t\\t27.8\\t\\t32.6\\t\\t16.6\\n\\nThe reason for the low performance of BERT may be the diverse nature of knowledge selection in knowledge-grounded dialogue. As discussed in Section 2, there can be one-to-many relations between the dialogue context and the knowledge to be selected: one can choose any of several diverse pieces of knowledge to carry on the conversation. Table 1 confirms this conjecture. In the Wizard of Wikipedia dataset, knowledge selection is extremely challenging even for humans (17.1), and BERT is only marginally better than the Transformer (23.4 for BERT vs. 22.5 for the Transformer). On the other hand, once we change the task to have one-to-one relations by providing a GT response, BERT significantly boosts performance over the Transformer (78.2 for BERT vs. 70.4 for the Transformer).\\n\\n\\n2. Quantitative results of PostKS with GRU\\n\\nWe add quantitative results of PostKS+Transformer and PostKS+GRU on the Wizard of Wikipedia and Holl-E in Table 7 in Appendix E. Results show that PostKS+GRU consistently outperforms PostKS+Transformer without the knowledge loss term. The lower performance of PostKS+Transformer may be due to the data starvation problem, as the reviewer anticipated. However, PostKS+Transformer performs better than PostKS+GRU with the knowledge loss. It seems that the knowledge loss term reduces overfitting and thus increases data efficiency.\\n\\n\\n3. Multi-turn human evaluation results\\n\\nWe add human evaluation results in a multi-turn setting using the evaluation toolkit from Wizard of Wikipedia. Following their setting, humans are paired with one of the models and chat about a specific topic (given a choice of 2-3 topics) for 3-5 dialogue turns.
After the conversation, they rate their dialogue partner on a scale of 1-5, with the rating indicating how much they \\u201cliked\\u201d the conversation. We collect votes for 110 randomly sampled conversations from 10 different turkers.\\n\\nModels\\t\\t\\t\\t\\t\\tTest Seen\\tTest Unseen\\nE2E Transformer MemNet\\t\\t2.36 (1.38)\\t2.10 (0.96)\\nOurs\\t\\t\\t\\t\\t\\t2.39 (0.99)\\t2.38 (1.01)\\n\\nAs shown in the table (avg and stddev), human annotators prefer our results to those of the baselines, with a larger gap on Test Unseen.\\n\\n\\n4. Overall, the rough improvement that is being provided in the first-stage of the two stage setting seems rather minor (23% -> 26% accuracy; 2.21 -> 2.35 human eval), and that the task remains extremely difficult.\\n\\nConsidering the difficulty of the task, our improvement in knowledge selection (23.2% -> 26.8%) is not minor. Dinan et al. (2019) recorded 25.5% accuracy in knowledge selection on WoW with an additional 700 million Reddit conversations [1] and the knowledge selection data of [2], while ours achieves better performance even without them. While agreeing that the task remains challenging, we strongly believe that our work brings important contributions to knowledge-grounded conversation: (i) focusing on the diversity issue of knowledge selection for the first time, (ii) correctly modeling it with a sequential latent model, and (iii) achieving new state-of-the-art performance with nontrivial margins.\\n\\n[1] P. Mazare, S. Humeau, M. Raison, and A. Bordes. Training Millions of Personalized Dialogue Agents. EMNLP, 2018.\\n[2] P. Rajpurkar, J. Zhang, K. Lopyrev, and P. Liang. SQuAD: 100,000+ Questions for Machine Comprehension of Text. EMNLP, 2016.\\n\\n\\n5. I know the Dinan et al. models, at human evaluation time, hardcoded to not pick the same knowledge twice. Do you have a similar restriction? If not, maybe you can at least say that you manage to get rid of the need for that!\\n\\nThank you for your suggestion.
We did not use their hardcoding. We will add that statement to our final version.\\n\\n\\n6. How are the examples in figure 3 chosen? Are they generally indicative of what is seen throughout the human evaluation?\\n\\nWe manually select one example for Figure 3. For human evaluation, we randomly sample test examples without knowing in advance which examples will be chosen. We will add more examples of knowledge selection and utterance prediction in Appendix G.\\n\\n\\n7. Qualitative examples with selected knowledge\\n\\nThank you for your suggestion. We add qualitative examples of selected knowledge in Appendix G.\\n\\n\\n8. Grammatical errors\\n\\nWe update our paper per your suggestion.\"}", "{\"title\": \"Response to Reviewer 4\", \"comment\": \"We thank Reviewer 4 for the positive and constructive review. Below, we respond to each comment in detail. Please see the blue fonts in the newly uploaded draft to check how our paper is updated.\\n\\n\\n1. In Figure 3, please provide the knowledge sentence that was selected.\\n\\nThanks for the suggestion. We add some examples of selected knowledge and predicted utterances in Appendix G.\\n\\n\\n2. Please provide the inter-annotator agreement for human evaluation.\\n\\nWe measured the agreement among the annotators using Fleiss\\u2019 kappa [1]. All kappa values exceeded or were close to 0.2, indicating slight agreement among annotators. There was some diversity among annotators\\u2019 responses because we used a 4-point scale in order to avoid having a \\u201ccatch-all\\u201d category (i.e., no middle response option) among the answer choices [2]. To mitigate such annotator bias and inter-annotator variability, we adjusted the human evaluation results via Bayesian calibration [3].
Table 4 shows the raw and calibrated results of the human evaluation, which consistently validate that annotators prefer our results to those of the baselines.\\n\\n\\t\\t\\t\\t\\tTest Seen\\t\\t\\t\\t\\t\\tTest Unseen\\nMethod\\t\\tEngagingness\\tKnowledgeability\\tEngagingness\\tKnowledgeability\\nPostKS\\t\\t0.12\\t\\t\\t\\t0.17\\t\\t\\t\\t0.12\\t\\t\\t\\t0.09\\nTMN\\t\\t0.22\\t\\t\\t\\t0.19\\t\\t\\t\\t0.16\\t\\t\\t\\t0.17\\nOurs\\t\\t0.20\\t\\t\\t\\t0.20\\t\\t\\t\\t0.21\\t\\t\\t\\t0.17\\nHuman\\t\\t0.22\\t\\t\\t\\t0.22\\t\\t\\t\\t0.23\\t\\t\\t\\t0.31\\n\\n[1] J. L. Fleiss. Measuring Nominal Scale Agreement among Many Raters. Psychol. Bull. 1971.\\n[2] D. K. Dalal, N. T. Carter, and C. J. Lake. Middle Response Scale Options are Inappropriate for Ideal Point Scales. J. Bus. Psychol. 2014.\\n[3] I. Kulikov, A. H. Miller, K. Cho, and J. Weston. Importance of Search and Evaluation Strategies in Neural Dialogue Modeling. INLG 2019.\\n\\n\\n3. I think it would be interesting to see what is the copy mechanism actually adding in terms of integration of knowledge vs the WoW MemNet approach.\\n\\nIn the newly updated draft, we add quantitative results of \\u201cE2E Transformer MemNet + BERT + PostKS + Copy\\u201d to Tables 2 and 3. To make the results more reliable, we run the model three times with different random seeds and report the mean. We also update our model\\u2019s results in the same manner. The \\u201cE2E Transformer MemNet + BERT + PostKS + Copy\\u201d model performs the best among the baselines, but is not as good as ours, which confirms that sequential latent modeling is critical for improving the accuracy of knowledge selection and, subsequently, utterance generation. Adding the copy mechanism to the baseline substantially improves the accuracy of utterance generation, but barely improves the knowledge selection accuracy, which also justifies the effectiveness of the sequential latent variable.
Additionally, the performance gaps between ours and the baselines are larger on Test Unseen, suggesting that the sequential latent variable generalizes better.\\n\\n\\n4. For Related Work, also cite Topical-Chat: Towards Knowledge-Grounded Open-Domain Conversations.\\n\\nThank you. We update our paper per your suggestion.\\n\\n\\n5. How is the performance of the model impacted with longer dialog context vs shorter?\\n\\nTable 5 in Appendix C compares the knowledge selection accuracy of different methods for each turn on the Wizard of Wikipedia. Thanks to the sequential latent variable, our model consistently outperforms other models at all turns in knowledge selection accuracy. Notably, in all models, the accuracy drops significantly after the first turn (which is often an easily predictable topic definition sentence), which illustrates the diverse nature of knowledge selection, as discussed in Section 2.\\n\\n\\n6. The Holl-E dataset was transformed from spans of knowledge to a single knowledge sentence. It would be interesting to see what happens when the knowledge selected is over multiple sentences.\\n\\nWe select the sentence that includes the span as the ground-truth (GT) knowledge sentence. If the span extends over multiple sentences, we select the minimum number of consecutive sentences containing the span and use them as the GT. If all of the candidate sentences have zero F1 scores with respect to the span and the response, we tag \\u2018no_passages_used\\u2019 as the GT, which amounts to 5% of GT labels. All of the details are updated in Section 4.1.\\n\\n\\n7. The knowledge pool currently consists of 67.57 sentences on average. How will this method scale as the amount of knowledge sentences grows?\\n\\nDue to the use of BERT (or Transformer) as the sentence encoder, our memory complexity is O(nm^2), where n is the number of candidate sentences in the knowledge pool and m is the length of the longest sentence.
For example, when training with a batch of one dialogue on an NVIDIA TITAN RTX GPU, our model scales up to n=95 and m=32. But at test time, it is highly scalable, up to n>=500, since no backpropagation is needed.\\n\\n\\n8. Grammatical errors\\n\\nThank you. We update our paper per your suggestion.\"}", "{\"rating\": \"8: Accept\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #4\", \"review\": \"The paper looks at the problem of knowledge selection for open-domain dialogue. The motivation is that selecting relevant knowledge is critical for downstream response generation.\\nThe paper highlights the one-to-many relations when selecting knowledge, which make the problem even more challenging. It tries to address this by taking into account the history of knowledge selected at previous turns.\\nThe paper proposes a Sequential Latent Model which represents the knowledge history as a latent representation. From this they select a piece of knowledge at the current turn and use it to decode an utterance. The model is trained jointly to learn which knowledge to select and to generate the response, as the two are strongly correlated. Additionally, there is an auxiliary loss to help identify whether the knowledge was correctly selected, and a copy mechanism is introduced to copy words from the knowledge during decoding.\\nThe experiments are run on the Wizard of Wikipedia dataset, where there are annotations for which knowledge sentence is selected, and on Holl-E, where they transform the dataset to have a single sentence tied to a response.\\nFor automatic metrics, there is significant improvement over baselines for correctly selecting a piece of knowledge and generating a response.
Additionally, there is a human evaluation that also shows significant improvement. Their model also seems to generalize better than baseline models to domains that were not seen during training.\\n\\nThe contribution of the paper is the novel approach to selecting knowledge for open-domain dialogue. This work is significant in that, by improving knowledge selection, we see a subsequent improvement in response generation quality, which is the overall downstream task within this problem space.\\nI believe this paper should be accepted because of the significant and novel approach of modeling previously selected knowledge sentences. The linking of this knowledge selection model to topic tracking, as stated in the paper, is of clear importance, as ensuring topical depth and topical transitions are two key aspects of open-domain dialog.\\n \\nFeedback on the paper\\nIn Figure 3, please provide the knowledge sentence that was selected.\\nPlease provide the inter-annotator agreement for human evaluation.\\nI think it would be interesting to see what the copy mechanism is actually adding in terms of integration of knowledge vs the WoW MemNet approach. Are those two truly comparable, given that one does not have copy?\\nFor Related Work, also cite Topical-Chat: Towards Knowledge-Grounded Open-Domain Conversations.\\n\\nSmall grammatical errors\\n\\\"Recently, Dinan et al. (2019) propose to tackle\\\" -> \\\"Recently, Dinan et al. (2019) proposed to tackle\\\"\\n\\\"which subsequently improves the knowledge-grounded chit-chat.\\\" -> \\\"which subsequently improves knowledge-grounded chit-chat.\\\"\\n\\n\\nSome questions for the authors in terms of future directions\\nHow is the performance of the model impacted by longer vs. shorter dialog context?\\n\\nThe Holl-E dataset was transformed from spans of knowledge to a single knowledge sentence.
It would be interesting to see what happens when the knowledge selected is over multiple sentences.\\n\\nThe knowledge pool currently consists of 67.57 sentences on average. How will this method scale as the amount of knowledge sentences grows?\"}", "{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"Post author response edit: The authors did a good job of addressing many of the concerns of reviewers. I believe with these new results (esp to reviewer 4), they will have a stronger version for the camera ready. I'm bumping up my recommendation for this reason.\\n\\n\\n\\nThe authors propose a novel architecture for selecting knowledge in knowledge-grounded multi-turn dialogue. Their knowledge selection module uses a sequential latent variable scheme, and is claimed to be able to both handle diversity of knowledge selection in conversation as well as leverage the information from the response. The proposed model yields state of the art on two relevant benchmark datasets in terms of perplexity and F1, and scores higher in human evaluations as well.\\n\\nThe paper is relatively well-written, and the authors offer extensive insight into their approach, providing relevant equations and diagrams where necessary. The approach is well-motivated, and the experiments indicate that the model indeed helps on all evaluation fronts. A variety of baselines are considered and are shown to be inferior, in nearly every metric. I did not spend a lot of effort to try to understand their factorization, but the intuition makes sense, and their use of gumbel softmax provides a clear avenue to fix some of the hard-backprop issues apparent in the original Dinan et al. paper. 
I also appreciate the addition of the knowledge loss to the PostKS baseline: it\\u2019s a good effort to make the baseline as good as possible.\\n\\nA few things bother me about the paper. The primary one is that it concerns me a bit that the BERT pretraining does not improve significantly over the E2E transformer memnet (with just the BERT vocabulary). Unless I\\u2019m missing something, that model contained NO pretraining, so I would expect massive improvements. A sanity check there would be checking ppl with gold knowledge: if that doesn\\u2019t significantly improve, then I suspect the authors have something really weird in the pretraining or fine tuning. However, it also appears to me that replacing the GRU with a transformer in PostKS might be unfair: Transformers are way more data hungry than RNNs, so both variants should be tried (though I would be okay with the loser being relegated to a footnote or appendix).\\n\\nThe human evaluations are not as convincing as the authors propose them to be, especially the difference in the \\u201cTest Seen\\u201d case. It is unclear to me why the authors believe that their \\u201cmodel\\u2019s merit would be more salient\\u201d in a multi-turn setting, and I think such an experiment would be good to show - or, at the very least, an indication that such an experiment was tried but results were not considered due to reasons X, Y, Z, etc. Overall, the rough improvement being provided in the first stage of the two-stage setting seems rather minor (23% -> 26% accuracy; 2.21 -> 2.35 human eval), and the task remains extremely difficult.\\n\\nQuestions\\n* I know the Dinan et al. models, at human evaluation time, hardcoded to not pick the same knowledge twice. Do you have a similar restriction? If not, maybe you can at least say that you manage to get rid of the need for that!\\n* As mentioned earlier, I would be curious to see multi-turn human evaluations.
I understand this is expensive and a large ask.\\n* How are the examples in figure 3 chosen? Are they generally indicative of what is seen throughout the human evaluation?\\n* It would be useful to see a qualitative example of the model\\u2019s knowledge selection process when comparing to other models, rather than just the utterance generation (which is not the novel contribution of the paper).\\n\\nNits\\n* Small grammatical errors dealing with subject-verb agreement (plurals mostly).\\n* Using \\u201c-\\u201d instead of n/a in tables would make it mildly easier to digest and to see where metrics don\\u2019t make sense.\"}", "{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper presents a sequential latent variable model for knowledge selection in dialogue generation. More specifically, the authors extended the posterior attention model (Shankar and Sarawagi, 2019) to the latent knowledge selection problem. The proposed model achieved higher performance than previous state-of-the-art knowledge-grounded dialogue models on the Wizard of Wikipedia and Holl-E datasets.\\n\\nThis work presents reasonable ideas with new state-of-the-art results in both quantitative and qualitative evaluations.\\nOverall, the paper reads well.\", \"but_i_think_it_could_be_further_improved_with_the_following_points\": [\"Could you describe the updates from the previous sequential latent variable models more clearly? It would help to further highlight the contribution of this work.
It might not be clear enough for those who are not familiar with the previous work.\", \"In the introduction, the authors claim the following three advantages of the proposed method: reduced scope of knowledge candidates, better utilization of response information, and weakly-supervised inference with no labels.\", \"But I'm not fully convinced that the experimental results demonstrate these aspects clearly enough. More detailed analysis should be added to support the contributions.\", \"The current experiments mainly focus on end-to-end dialogue generation performance. But it would also be interesting to see more detailed aspects of knowledge selection itself in both quantitative and qualitative manners. I guess this analysis can be done based on the knowledge sampled or selected from the attention distribution.\", \"Could you possibly add some ablation studies to show the effectiveness of each component? In particular, I'm curious about the results of the proposed model without the knowledge loss.\"]}" ] }
SJlpYJBKvH
Measuring the Reliability of Reinforcement Learning Algorithms
[ "Stephanie C.Y. Chan", "Samuel Fishman", "Anoop Korattikara", "John Canny", "Sergio Guadarrama" ]
Lack of reliability is a well-known issue for reinforcement learning (RL) algorithms. This problem has gained increasing attention in recent years, and efforts to improve it have grown substantially. To aid RL researchers and production users with the evaluation and improvement of reliability, we propose a set of metrics that quantitatively measure different aspects of reliability. In this work, we focus on variability and risk, both during training and after learning (on a fixed policy). We designed these metrics to be general-purpose, and we also designed complementary statistical tests to enable rigorous comparisons on these metrics. In this paper, we first describe the desired properties of the metrics and their design, the aspects of reliability that they measure, and their applicability to different scenarios. We then describe the statistical tests and make additional practical recommendations for reporting results. The metrics and accompanying statistical tools have been made available as an open-source library. We apply our metrics to a set of common RL algorithms and environments, compare them, and analyze the results.
[ "reinforcement learning", "metrics", "statistics", "reliability" ]
Accept (Spotlight)
https://openreview.net/pdf?id=SJlpYJBKvH
https://openreview.net/forum?id=SJlpYJBKvH
ICLR.cc/2020/Conference
2020
{ "note_id": [ "HfeiuWl9H", "rkxiwsoYsS", "SJxbtZsYiH", "HkelP-jYjB", "HkeVVWjtoB", "H1eChesYjr", "BJxox6QRYr", "SkxBZirTFS", "HkloOhgEKr" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798734257, 1573661539089, 1573659001206, 1573658967557, 1573658924020, 1573658806405, 1571859698521, 1571801852988, 1571191922576 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1857/Authors" ], [ "ICLR.cc/2020/Conference/Paper1857/Authors" ], [ "ICLR.cc/2020/Conference/Paper1857/Authors" ], [ "ICLR.cc/2020/Conference/Paper1857/Authors" ], [ "ICLR.cc/2020/Conference/Paper1857/Authors" ], [ "ICLR.cc/2020/Conference/Paper1857/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper1857/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper1857/AnonReviewer3" ] ], "structured_content_str": [ "{\"decision\": \"Accept (Spotlight)\", \"comment\": \"Main content:\\n\\nThis paper provides a unified way to provide robust statistics in evaluating the reliability of RL algorithms, especially deep RL algorithms. Though the metrics are not particularly novel, the investigation should be useful to the broader community as it compares seven specific evaluation metrics, including 'Dispersion across Time (DT): IQR across Time', 'Short-term Risk across Time (SRT): CVaR on Differences', 'Long-term Risk across Time (LRT): CVaR on Drawdown', 'Dispersion across Runs (DR): IQR across Runs', 'Risk across Runs (RR): CVaR across Runs', 'Dispersion across Fixed-Policy Rollouts (DF): IQR across Rollouts' and 'Risk across Fixed-Policy Rollouts (RF): CVaR across Rollouts'. 
The paper further proposes rankings and confidence intervals based on bootstrapped samples, and compares continuous-control and discrete-action algorithms on Atari and OpenAI Gym.\\n\\n--\", \"discussion\": \"The reviews clearly agree on accepting the paper, with a weak accept coming from a reviewer who does not know much about this subarea. Comments are mostly directed at clarifications and completeness of description, which the authors have addressed.\\n\\n--\", \"recommendation_and_justification\": \"This paper should be accepted due to its useful contributions toward doing a better job of measuring the performance of RL.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Example analysis of cross-environment phenomena\", \"comment\": \"To provide an example of a per-environment analysis, and to emphasize the importance of doing so, we have also added the following text to the results in Section 5.5:\\n\\n\\\"To see metric results evaluated on a per-environment basis, please refer to Appendix F. Rank order of algorithms was often relatively consistent across the different environments evaluated. However, different environments did display different patterns across algorithms. For example, even though SAC showed the same or better Dispersion across Runs for most of the MuJoCo environments evaluated, it did show slightly worse Dispersion across Runs for the HalfCheetah environment (Fig 7a). This kind of result emphasizes the importance of inspecting reliability (and other performance metrics) on a per-environment basis, and also of evaluating reliability and performance on the environment of interest, if possible.\\\"\"}
As you have noted, we have taken a lot of care to construct these metrics and the surrounding procedures in a way that is rigorous and well motivated. Because this is a methods paper, we consider it particularly important to present the methods in a digestible format, so we appreciate your recognition of this as well.\"}", "{\"title\": \"Thank you for your insightful comments [part 2]\", \"comment\": \"> 3. Another main concern is the effect of exploration strategy: All these metrics can be highly affected by different exploration strategy in different environments. For example if an environment has a chain like structure, then given the exploration strategy you may have an extremely high CVaR or IQR. How do authors think they can strip off this effect? (Running all algorithm with the same exploration strategy is not sufficient, since the interplay of learning algorithm and exploration may be important)\\n \\nIt is definitely true that exploration strategies affect reliability. In our analysis, the exploration strategies were fixed according to the method used in the original papers (the continuous control experiments) or in the Dopamine package release (the discrete control experiments). For many algorithms, it is difficult to separate out the exploration strategy from the algorithm itself. However, one could certainly imagine a scenario in which a user lets the governing hyperparameters (e.g. action selection noise or greediness) be free parameters that are optimized. In this case, we expect that evaluation should be performed on the optimized algorithm.\\n \\n> 4. Generalizability: How do authors think these metrics are generalizable. For example if algorithm A has better metrics than algorithm B on open AI Gym task for continuous control, how much we expect the same ranking applies while learning on a new environment.
I am asking this, because to me, some of these metrics are very environment dependent, and being reliable in some environments may not imply reliability in other environments.\\n \\nThe values of these metrics are definitely environment dependent. As far as we understand, this is an inherent part of evaluating RL algorithms, whether on performance or reliability, because RL behavior depends on the specifics of the environment. For practical use cases, we expect users to evaluate on the environments in which they plan to deploy their algorithms. For researchers evaluating new algorithms, we hope that they evaluate on a range of environments (both for performance and reliability); this provides a fuller picture of an algorithm\\u2019s reliability, and also makes extrapolation to novel tasks more justifiable. Motivated by your and Reviewer 1\\u2019s comments, we have added per-environment evaluations of the metrics in Appendix F. Interestingly, the ordering of algorithms is actually relatively consistent across environments, though this is certainly not always the case.\\n \\n> Code and modularity of it: The main contribution of this paper will be shown when other researchers start using it and report the metric, if the code is hard to use, the contribution of the paper is hard to be significant. \\n \\nWe strongly agree that ease of use is a critical component of this project, in addition to the more theoretical concerns that have been addressed in the paper. To this end, we have taken many steps to encourage adoption and to ensure ease of use (described above in our answer to Question 1). Please let us know if we can clarify further.\"}", "{\"title\": \"Thank you for your insightful comments [part 1]\", \"comment\": \"> I believe the paper is discussing a very important issue, and some possible solutions to it, even if not perfect it's an important step toward paying more attention to maybe similar metrics. 
I am in favor of the paper in general, but I have some concerns.\\n \\nThank you for recognizing the importance of this work. Thank you also for your insightful questions and comments. We respond to each comment individually below.\\n \\n> 1. My main concern is that why authors think that community will adopt these metrics and report them? I like how authors have proposed different metrics, but having one or two easy to compute metric is much more likely to be adopted, than 6 different metrics, which I\\u2019m not sure how easy it is to use the python package? It\\u2019s of main importance, because if community don\\u2019t use these metrics in the future, the contribution of the paper is minimal. \\n \\nWe agree that this is an important question. An empirical answer is that we have already received strong interest in these metrics from RL researchers and engineers who are developing RL for real-world practical application. Reliability is a high priority for them, and these metrics (and the accompanying framework, including statistical tests and confidence intervals) fill a need that was previously unmet, to provide rigorous and quantitative measurement of reliability on different dimensions. We are already starting to work with these teams on integrating the metrics into daily tests etc.\\n \\nIn previous presentations of this work, we have also received strong interest and enthusiasm from RL researchers. We believe that, sociologically, pure researchers will be incentivized to adopt these metrics too. As the field moves towards developing more reliable algorithms, as is already happening, researchers will look for ways to measure their gains in reliability. For example, Haarnoja et al would have benefited from using these metrics to demonstrate the reliability of SAC, especially given the tools to rigorously compare against other algorithms with statistical tests. 
Reviewers may also ask for evaluation of these metrics, given the growing awareness of RL reliability.\\n \\nFurthermore, we have taken great care to design our package to be as easy to use as possible, and we expect that this will greatly ease adoption. Installation will be straightforward as a Python package. The entire pipeline (evaluation of the metrics, computation of statistical tests and confidence intervals, and generation of plots) will be easy to run with just a few commands. The small number of parameters minimizes the cognitive load on users. The package will be compatible with a number of common data formats, including both Tensorflow and PyTorch outputs. These metrics will be well-integrated with a popular RL framework, which will also encourage adoption. It will also be compatible with a number of other popular RL libraries. The package will also be open source so that users can inspect the code and easily adapt it to new use cases.\\n \\nWe agree that having multiple metrics adds complexity to our framework; this is in fact something we have internally discussed. Ultimately, however, we believe that it is important to include these different metrics because they measure reliability for different use cases. Upon release of the package, we plan to include documentation that provides clear examples of different usages for different scenarios.\\n \\n> 2. There is no question of the importance of reliability of RL algorithms, but we need to be careful that RL algorithms are not optimizing for metics like CVaR, so maybe a better learning algorithm (in the sense of expectation learning) might not have better reliability metrics because it is not the main objective. So following this, how would authors think their metrics can be used to design a more reliable algorithms? For example there is good literature on CVaR learning for safe policies.
Do you think there exists a proxy for metrics you introduced that can be used to for the objective of the optimization?\\n \\nWe believe that it is indeed possible and a valuable idea to adopt some version of these metrics into the optimization function, analogous to prior work incorporating risk measures on cumulative returns. This would be a very interesting avenue for future work, and we would definitely be interested in investigating further.\\n \\nIn our analysis, we evaluate the metrics on algorithms that were optimized for mean performance. As you pointed out, there may be variants of these algorithms that perform better on reliability, given a different objective function. We use this method of optimization because this is what practitioners typically use, and we wanted to present an analysis of algorithms as they are typically used. However, we certainly hope that this work will motivate researchers to inspect such metrics while tuning hyper parameters and designing objective functions.\"}", "{\"title\": \"Thank you for your comments and your thoughtful consideration of our work\", \"comment\": \"> This paper provides a unified way to provide robust statistics in evaluating RL algorithms in experimental research. Though I don't believe the metrics are particularly novel, I believe this work would be useful to the broader community and was evaluated on a number of environments. I do have a few concerns, however, about experimental performance per environment being omitted from both the main paper and the appendix.\\n \\nThank you for your comments and for your recognition of the value of this work. We appreciate your thoughtful consideration of our paper, and we respond below to your suggestions in detail.\\n \\n> I think this is a valuable work and the ideas/metrics are useful, though I'm not sure I would call them novel (CVar and the like have been seen before). 
I think the value comes in the unification of the metrics to give more robust pictures of algorithmic performance. \\n \\nThank you for your recognition of the usefulness of these metrics. We agree that developing a unified framework for measuring reliability is of value to the community, and this has been a strong motivator for us in doing this work. With regard to CVaR in RL, our understanding is that it has previously been applied to the cumulative returns within an episode, but we are not aware that it has been applied to the variables pointed to in the paper, which measure distinct aspects of risk and reliability. We believe that our framework additionally introduces explicit delineations of different dimensions of reliability, definitions of quantitative metrics for measuring on these dimensions, best practices for pre- and post-processing, and rigorous statistical tests and confidence intervals that allow aggregation across tasks while respecting the measurements as being repeated measures on different algorithms and tasks. Notwithstanding, we agree that the word \\u201cnovel\\u201d may be misunderstood, and we have removed it from our paper. \\n\\n> The details of all of these evaluations and individual performance should be provided in the appendix, however, it seems only MuJoco curves were included. Moreover, it says that a blackbox optimizer was used to find hyperparameters, but these hyperparameters were not provided in the appendix or anywhere else as far as I can tell. I think it's important for a paper which recommends evaluation methodology in particular to be more explicit regarding all details within the appendix. I hope to see additional details in future revisions -- including per-environment performance.\\n \\nWe have added hyperparameters for both the continuous control and discrete control algorithms in Appendix E. This includes the search space used for the blackbox optimizer. 
For the Atari training curves, we previously had a pointer in Appendix D linking to the relevant part of the Dopamine project, but this was far too hidden so we have added a pointer in the main text instead. Please let us know if there are any other details that you believe we should include. We strongly agree that, given this is a methodology paper, we should hold ourselves to a high standard in this regard.\\n \\nWe have also added per-environment metric results in Appendix F. Please see the next comment for more details.\\n \\n> I believe clustering results across environments can be potentially misleading. Say that we have an environment where the algorithm always fails but is very consistent and an environment where it excels. These are blended together in the current Figures. While it requires more space, I believe it is important to separate these two. I am concerned that a recommendation paper like this one will set a precedent for only including the combined metrics of algorithmic performance across environments, masking effects. I would suggest splitting out results per environment as well and pointing out particular cross-environment phenomena. \\n \\nThank you for making this important point. We have added per-environment results in Appendix F, and a pointer to those results in the main results in Section 5.5. We have also added the following text in Section 3 to explicitly encourage per-environment investigations as part of our recommendations: \\u201cPer-environment analysis -- The same algorithm can have different patterns of reliability for different environments. 
Therefore, we recommend inspecting reliability metrics on a per-environment basis, as well as aggregating across environments as described above.\\u201d\\n \\n> There is a missing discussion of prior work on statistical testing of RL evaluation: \\n> [...]\\n \\nThank you for pointing us to these papers, which are very relevant for RL practitioners who wish to be rigorous in their experiments and analysis. We have included a discussion of this work in the introduction.\"}", "{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper provides a unified way to provide robust statistics in evaluating RL algorithms in experimental research. Though I don't believe the metrics are particularly novel, I believe this work would be useful to the broader community and was evaluated on a number of environments. I do have a few concerns, however, about experimental performance per environment being omitted from both the main paper and the appendix.\", \"comments\": [\"I think this is a valuable work and the ideas/metrics are useful, though I'm not sure I would call them novel (CVar and the like have been seen before). I think the value comes in the unification of the metrics to give more robust pictures of algorithmic performance.\", \"The details of all of these evaluations and individual performance should be provided in the appendix, however, it seems only MuJoco curves were included. Moreover, it says that a blackbox optimizer was used to find hyperparameters, but these hyperparameters were not provided in the appendix or anywhere else as far as I can tell. I think it's important for a paper which recommends evaluation methodology in particular to be more explicit regarding all details within the appendix. 
I hope to see additional details in future revisions -- including per-environment performance.\", \"I believe clustering results across environments can be potentially misleading. Say that we have an environment where the algorithm always fails but is very consistent and an environment where it excels. These are blended together in the current Figures. While it requires more space, I believe it is important to separate these two. I am concerned that a recommendation paper like this one will set a precedent for only including the combined metrics of algorithmic performance across environments, masking effects. I would suggest splitting out results per environment as well and pointing out particular cross-environment phenomena.\"], \"there_is_a_missing_discussion_of_prior_work_on_statistical_testing_of_rl_evaluation\": [\"Colas, C\\u00e9dric, Olivier Sigaud, and Pierre-Yves Oudeyer. \\\"A Hitchhiker's Guide to Statistical Comparisons of Reinforcement Learning Algorithms.\\\" arXiv preprint arXiv:1904.06979 (2019).\", \"Colas, C\\u00e9dric, Olivier Sigaud, and Pierre-Yves Oudeyer. \\\"How many random seeds? statistical power analysis in deep reinforcement learning experiments.\\\" arXiv preprint arXiv:1806.08295 (2018).\"], \"edit\": \"Score boosted after significant updates to the paper.\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"*Summary*\\n\\nAuthors proposed a variety of metrics to measure the reliability of an RL algorithm. Mainly looking at Dispersion and Risk across time and runs while learning, and also in the evaluation phase. \\nAuthors have further proposed ranking and also confidence intervals based on bootstrapped samples. 
They also compared the famous continuous control and discrete actions algorithms on Atari and OpenAI Gym on the metrics they defined.\\n\\n*Decision*\\n\\nI believe the paper is discussing a very important issue, and some possible solutions to it, even if not perfect it's an important step toward paying more attention to maybe similar metrics. I am in favor of the paper in general, but I have some concerns.\\n\\n1. My main concern is that why authors think that community will adopt these metrics and report them? I like how authors have proposed different metrics, but having one or two easy to compute metric is much more likely to be adopted, than 6 different metrics, which I\\u2019m not sure how easy it is to use the python package? It\\u2019s of main importance, because if community don\\u2019t use these metrics in the future, the contribution of the paper is minimal. \\n\\n2. There is no question of the importance of reliability of RL algorithms, but we need to be careful that RL algorithms are not optimizing for metics like CVaR, so maybe a better learning algorithm (in the sense of expectation learning) might not have better reliability metrics because it is not the main objective. \\nSo following this, how would authors think their metrics can be used to design a more reliable algorithms? For example there is good literature on CVaR learning for safe policies. Do you think there exists a proxy for metrics you introduced that can be used to for the objective of the optimization?\\n\\n3. Another main concern is the effect of exploration strategy: All these metrics can be highly affected by different exploration strategy in different environments. For example if an environment has a chain like structure, then given the exploration strategy you may have an extremely high CVaR or IQR. How do authors think they can strip off this effect? 
(Running all algorithm with the same exploration strategy is not sufficient, since the interplay of learning algorithm and exploration may be important)\\n\\n4. Generalizability: How do authors think these metrics are generalizable. For example if algorithm A has better metrics than algorithm B on open AI Gym task for continuous control, how much we expect the same ranking applies while learning on a new environment. I am asking this, because to me, some of these metrics are very environment dependent, and being reliable in some environments may not imply reliability in other environments.\\n\\n\\n*Note*:\", \"code_and_modularity_of_it\": \"The main contribution of this paper will be shown when other researchers start using it and report the metric, if the code is hard to use, the contribution of the paper is hard to be significant.\\n\\n==== Post Rebuttal ====\\nThanks for the responses authors posted, I think there is a good chance that the community will benefit from this experimental metrics in the future, so I increase my rating to accept.\"}", "{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"In this paper, the authors study an important problem in the area of reinforcement learning (RL). Specifically, the authors focus on how to evaluate the reliability of RL algorithms, in particular of the deep RL algorithms. The paper is well motivated by providing convincing justification of evaluating the RL algorithms properly. 
In particular, the authors define seven specific evaluation metrics, including 'Dispersion across Time (DT): IQR across Time', 'Short-term Risk across Time (SRT): CVaR on Differences', 'Long-term Risk across Time (LRT): CVaR on Drawdown', 'Dispersion across Runs (DR): IQR across Runs', 'Risk across Runs (RR): CVaR across Runs', 'Dispersion across Fixed-Policy Rollouts (DF): IQR across Rollouts' and 'Risk across Fixed-Policy Rollouts (RF): CVaR across Rollouts', from a two-dimension analysis shown in Table 1.\\n\\nMoreover, the authors apply the proposed evaluation metrics to some typical RL algorithms and environments, and provide some insightful discussions and analysis.\\n\\nOverall, the paper is well presented though it is somehow different from a typical technical paper.\"}" ] }
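Between the two forum records, a concrete illustration of the across-runs metrics debated above (IQR as a dispersion measure, CVaR as a risk measure) may be helpful. This is an illustrative sketch only; the function names are invented here, and this is not the authors' released evaluation package, which additionally provides statistical tests and bootstrap confidence intervals.

```python
import numpy as np

def iqr_across_runs(final_scores):
    # Dispersion across Runs (DR): interquartile range of the
    # per-run final performance scores.
    q75, q25 = np.percentile(final_scores, [75, 25])
    return q75 - q25

def cvar_across_runs(final_scores, alpha=0.05):
    # Risk across Runs (RR): conditional value at risk, i.e. the
    # expected value of the worst alpha-fraction of per-run scores.
    scores = np.sort(np.asarray(final_scores, dtype=float))
    k = max(1, int(np.ceil(alpha * len(scores))))
    return scores[:k].mean()

# Ten runs of the same algorithm; one run collapsed.
runs = [80.0, 95.0, 90.0, 20.0, 88.0, 92.0, 85.0, 91.0, 87.0, 89.0]
print(iqr_across_runs(runs))              # spread of the middle 50% of runs
print(cvar_across_runs(runs, alpha=0.1))  # mean of the worst 10% of runs
```

The same two statistics can be applied along the other axes discussed in the thread, e.g. to timestep-to-timestep performance differences for short-term risk across time.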
H1enKkrFDB
Stable Rank Normalization for Improved Generalization in Neural Networks and GANs
[ "Amartya Sanyal", "Philip H. Torr", "Puneet K. Dokania" ]
Exciting new work on generalization bounds for neural networks (NN) given by Bartlett et al. (2017); Neyshabur et al. (2018) closely depend on two parameter-dependent quantities: the Lipschitz constant upper bound and the stable rank (a softer version of rank). Even though these bounds typically have minimal practical utility, they facilitate questions on whether controlling such quantities together could improve the generalization behaviour of NNs in practice. To this end, we propose stable rank normalization (SRN), a novel, provably optimal, and computationally efficient weight-normalization scheme which minimizes the stable rank of a linear operator. Surprisingly we find that SRN, despite being non-convex, can be shown to have a unique optimal solution. We provide extensive analyses across a wide variety of NNs (DenseNet, WideResNet, ResNet, Alexnet, VGG), where applying SRN to their linear layers leads to improved classification accuracy, while simultaneously showing improvements in generalization, evaluated empirically using—(a) shattering experiments (Zhang et al., 2016); and (b) three measures of sample complexity by Bartlett et al. (2017), Neyshabur et al. (2018), & Wei & Ma. Additionally, we show that, when applied to the discriminator of GANs, it improves Inception, FID, and Neural divergence scores, while learning mappings with low empirical Lipschitz constant.
[ "Generalization", "regularization", "empirical lipschitz" ]
Accept (Spotlight)
https://openreview.net/pdf?id=H1enKkrFDB
https://openreview.net/forum?id=H1enKkrFDB
ICLR.cc/2020/Conference
2020
{ "note_id": [ "ayGUQHhSG5t", "oJQLjNF21Y", "6a6g97cBtW", "S1xLQnknoS", "SJgMTtk3oS", "H1xE9zwssr", "rklejkzciB", "BJlmSVqKsB", "ByeLvkDtor", "HJlyvpIYjH", "BkeTRc8Kir", "Bygq5FLFsS", "rJl2zKrmoH", "BkepTNsk5H", "rylcn0iiYB", "rJgwtQhPFH" ], "note_type": [ "official_comment", "comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "comment", "official_review", "official_review", "official_review" ], "note_created": [ 1587992035018, 1576898146615, 1576798734226, 1573809182005, 1573808570502, 1573773964412, 1573687192407, 1573655610707, 1573642078399, 1573641559509, 1573640917228, 1573640593812, 1573243155609, 1571955908807, 1571696305650, 1571435391417 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Paper1855/Authors" ], [ "~Thanh_Tung_Hoang1" ], [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1855/Authors" ], [ "ICLR.cc/2020/Conference/Paper1855/Authors" ], [ "ICLR.cc/2020/Conference/Paper1855/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper1855/Authors" ], [ "ICLR.cc/2020/Conference/Paper1855/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper1855/Authors" ], [ "ICLR.cc/2020/Conference/Paper1855/Authors" ], [ "ICLR.cc/2020/Conference/Paper1855/Authors" ], [ "ICLR.cc/2020/Conference/Paper1855/Authors" ], [ "~Micah_Goldblum1" ], [ "ICLR.cc/2020/Conference/Paper1855/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper1855/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper1855/AnonReviewer2" ] ], "structured_content_str": [ "{\"title\": \"Apologies for late reply and answer to your questions\", \"comment\": \"Apologies for our late reply and thank you very much for your questions. They are all valid, and we are happy to answer them. 
Please do let us know if something isn\\u2019t clear.\", \"q1\": \"In this paper [1] (also will be presented at ICLR 2020) the author showed that spectral norms based generalization bounds negatively correlate to generalization. Minimizing the stable rank thus might not improve generalization as expected\\u2026\", \"ans\": \"Thank you very much for pointing this paper out. Even though we do not have a mathematical form linking gradient norm and low stable rank directly, empirically we did observe that low stable rank leads to discriminators with low gradient norm at the vicinity of real and generated samples (please refer Figs 15 and 16 in the Appendix). The empirical Lipschtiz also offers a very similar argument (Fig 5 in the main text, and Appendix E.2).\\n\\n[1] \\\"Fantastic Generalization Measures and Where to Find Them\\\"\", \"q2\": \"Optimal value of stable rank\", \"q3\": \"For the generalization of GANs, this paper [2] shows that the gradient of the optimal discriminator goes toward 0\\u2026\", \"https\": \"//openreview.net/forum?id=ByxPYjC5KQ\\n\\n[3] \\u201cSpectral Norm Regularization for Improving the Generalizability of Deep Learning\\u201d, Yoshida and Miyato, 2017.\"}", "{\"title\": \"The correctness of the intuition and some other questions\", \"comment\": \"Hi there,\\nThank you for a nice paper on the generalization of neural net and GANs. However, I find the intuition that \\\"lower stable rank leads to better generalization\\\" is not well justified. In this paper [1] (also will be presented at ICLR 2020)\\nthe author showed that spectral norms based generalization bounds negatively correlate to generalization. Minimizing the stable rank thus might not improve generalization as expected.\\n\\n\\\"Many norm-based measures not only perform poorly, but negatively correlate with\\ngeneralization specifically when the optimization procedure injects some stochasticity. 
In particular, the generalization bound based on the product of spectral norms\\nof the layers (similar to that of Bartlett et al. (2017)) has very strong negative\\ncorrelation with generalization.\\\" [1]\\n\\nAnother problem with your method is that the optimal stable rank (the stable rank that results in the best generalization) cannot be computed exactly so it must be chosen empirically. \\n\\nFor the generalization of GANs, this paper [2] shows that the gradient of the optimal discriminator goes toward 0 as the two distributions become closer. So to improve generalization, the gradient of the discriminator should be pushed toward 0. The paper also shows that any discriminator with Lipschitz constant greater than 0 does not guarantee good generalization.\\n\\n[1] \\\"Fantastic Generalization Measures and Where to Find Them\\\"\", \"https\": \"//openreview.net/forum?id=ByxPYjC5KQ\"}", "{\"decision\": \"Accept (Spotlight)\", \"comment\": \"The authors propose stable rank normalization, which minimizes the stable rank of a linear operator and apply this to neural network training. The authors present techniques for performing the normalization efficiently and evaluate it empirically in a range of situations. The only issues raised by reviewers related to the empirical evaluation. The authors addressed these in their revisions.\", \"title\": \"Paper Decision\"}", "{\"title\": \"New Stopping criterion and more architectures for CIFAR10 experiments (clean cases)\", \"comment\": \"Thank you for your response and suggestions. We have now incorporated your suggestions in the following way.\\n\\n1. #epoch as the stopping criterion. We understand that you would prefer to see the training loss as a stopping criterion as well to check if the method is sensitive to stopping criterion. 
We have now uploaded results where we used the training accuracy as the stopping criterion on both CIFAR100 (using ResNet110, WideResnet-28, Densenet100, Alexnet, and VGG19 in Figure 6) and CIFAR10 (using ResNet110, WideResnet-28, Densenet100, and Alexnet in Figure 9). We fixed 99% training accuracy as the criterion, and SRN performs better than SN and vanilla consistently with this new stopping criterion.\\n\\n2. Experiments on CIFAR10 (clean labels): Upon your suggestion, we have now also added WideResNet-28 and Densenet100. The experiments took a bit longer to finish, which is why we couldn't add them in the last revision. SRN-50 and SRN-30 are both better than SN and Vanilla on Densenet-100, WideResNet-28, and ResNet-110 (Figure 10). As you noted, it is only SRN-30 with Alexnet that performs suboptimally compared to SN. SRN is better in all the other cases here.\\n\\nWe hope these new experiments will address your remaining concerns and you will reconsider your score. \\n\\nThank you.\"}", "{\"title\": \"Revision #2 uploaded with suggestions from Reviewer#3 with more stopping criterion and model\", \"comment\": [\"Upon suggestions from Reviewer 3, we have now made the following updates in the latest revision.\", \"Figure 6 (CIFAR100) and Figure 9 (CIFAR10) now use the training accuracy as a stopping criterion. We report the test accuracy when the training accuracy first reaches the stopping criterion.\", \"Figure 10 now includes WideResnet28 and Densenet100 in addition to Resnet110 and Alexnet.\"]}", "{\"title\": \"Thanks for your response\", \"comment\": \"Thanks for your response. Unfortunately, some of my concerns about the experiments are not addressed:\\n\\n1- #epoch as the stopping criterion: I am still worried about this choice.
You did not add any results for cross-entropy as the stopping criterion nor provided any evidence that your result is not sensitive to the choice of stopping criterion.\\n\\n2- Experiments on CIFAR10 and CIFAR100 (clean labels): Thanks for adding experiments on CIFAR100. I was hoping that you would repeat the CIFAR10 experiments (clean cases) for CIFAR100, but only two architectures are reported for CIFAR100. Even in these two cases, SRN30 and SRN50 are not clearly better than spectral normalization. This is also the case in the experiments with weight decay where spectral normalization ends up outperforming SRN30 and SRN50 in terms of the test error.\"}", "{\"title\": \"Updated revision\", \"comment\": [\"We would like to thank all the reviewers for their insightful comments and suggestions to improve our paper. We have now posted a revision with the following main revisions.\", \"New Experiments on CIFAR10 with Resnet110 and Alexnet, both for clean and random data. (Figure 8)\", \"New Experiments on (clean) CIFAR100 with low learning rate, and with and without weight decay using ResNet110. (Table 7)\", \"Shortened the paper a bit, corrected typos, and made other minor corrections.\"]}", "{\"title\": \"convolution heuristic (my bad).\", \"comment\": \"I guess I should have known that references to Miyato's paper implied that you were using an empirical heuristic to treat convolutional layers as linear transforms. Instead I wrongly guessed that the block-sparse linear matrix corresponding to a convolution would add another projection step to Thm 1, so you did not do convolutional layers. My naive expectation is that accuracy and even speed won't change much moving from heuristic to exact spectral values, but it would be nice to confirm at some point.\"}", "{\"title\": \"Reply (part 2)\", \"comment\": \"Role of partitioning index: Note our final algorithm does use a partitioning index of $k = 1$.
Our problem formulation is more general than that as it allows varying k while obtaining an optimal solution. We provide this formulation with the hope that it will be useful for other fields as well where varying k is feasible or more important. However, in the case of deep learning, varying k is computationally expensive as it will require obtaining the top k singular vectors. We do agree that we could explore until k = 3, but that itself would have increased the number of experiments 3 times, and also the point that we are trying to make, which is to show that normalizing parameter-dependent quantities found in recent generalization bounds-- \\u201cstable rank\\u201d and \\u201cspectral normalization\\u201d-- improves generalization, wouldn\\u2019t change.\"}", "{\"title\": \"Additional Experiments and response to Reviewer #3\", \"comment\": \"Thank you very much for reading the paper in detail, finding it interesting, and providing questions and suggestions. Below we answer all your questions and hopefully it will convince you to change your score. To summarize, there are a few concerns that you raised: (1) stopping criterion based on cross entropy; (2) empirical evaluation; (3) accuracy of the models compared to sota in the literature; (4) new dataset; and (5) role of the partitioning index\\n\\n\\n>> Stopping criterion: We are not sure how using the training loss as a stopping criterion would be useful or insightful. Nonetheless we think this poses two problems. \\n\\n i) Different models trained for the same experiment, especially the ones on randomized labels, take a long time for $\\textbf{all of them}$ to reach the same training accuracy/loss. From our experiments, some instances will never reach the training accuracy of 99% (and hence we will never know what is the right time to stop) whereas some of them will reach it within 500 epochs. So, it doesn\\u2019t give us a stopping criterion which can be executed efficiently.
We believe a fairer stopping criterion is to stop them after a reasonably large number of epochs. \\n\\n ii) As we are working with the generalization error of a class of models, we assume our model class to be the set of ResNets (or WRNs, Densenets etc) with the given architecture trained for N epochs. This is a $\\textbf{valid hypothesis class and is used commonly in practice}$. It is very uncommon to use a stopping criterion that depends on the training data, and in this case, using one that depends on the validation data will not work as the validation loss on random data will stay constant. \\n\\n>> New dataset: Thank you for the suggestion. We have now reported $\\textbf{ResNet-110 and Alexnet results on CIFAR10}$ in Appendix D.1. In response to Reviewer 1\\u2019s comment, we have also performed $\\textbf{more experiments on ResNet-110 with low learning rates}$, with and without weight decay on clean CIFAR-100. These results show a consistent advantage in favour of SRN as a regularizer across models, datasets, and learning hyperparameters. \\n\\n\\n>> Accuracy of the models used: The accuracy obtained for ResNet-110 on CIFAR100 for our network with $\\textbf{1.9M parameters is 72.5}$, which is better than what ResNet110 obtains on https://github.com/bearpaw/pytorch-classification ~ $\\mathbf{71.14\\%}$. This is the best that can be obtained with this network configuration and size. Other ResNets that do obtain better accuracy have almost $\\textbf{25 times the number of parameters~(45 M parameters)}$ as they have 4 times the number of output filters on each convolution layer, making it computationally very prohibitive. We believe it is a fair comparison given that ResNet-110 with 1.9M parameters is a very standard resnet to test algorithms on.
For a wider network, one can look at the results on WideResNet.\\n\\n The implementation of densenet-BC (L=100) at https://github.com/bearpaw/pytorch-classification achieves an error of 22.88, densenet121~(ours is densenet100) at https://github.com/weiaicunzai/pytorch-cifar100 achieves an error of 23.99, and densenet(BC, L=100,K=12) at https://github.com/liuzhuang13/DenseNet achieves an error of 22.27. Our error for the vanilla model is at 24.74 which is slightly higher than theirs on densenet due to the absence of dropout in the model, which we did to isolate the effect of SRN. Also note that we report the $\\\\textbf{mean of several runs}$ whereas the accuracies reported here are mostly tuned to be the one that performs best on the validation set. If we reported the best accuracy on our models, the accuracy would be higher but there would be no statistical significance of the result.\\n\\nWe would again stress that both densenet and ResNet $\\\\textbf{perform within 1~2% of the widely reported accuracies}$ of these models and the comparisons between the models are fair as they are carried out between the exact same architectures and learning configurations.\", \"empirical_evaluation\": \"We use various ways to empirically show improved generalization. We agree that showing only based on the randomized labels might not be enough and that\\u2019s the primary reason why we use $\\\\textbf{three other}$ criteria to show that SRN does improve generalization. These three additional criteria--(a), (b), and (c)-- capture the generalization properties of the network and we believe that a model performing best on all the four settings (including the randomized as well), while maintaining high accuracy, provides better generalization. Showing better generalization is quite hard and we aren\\u2019t aware of any other means to empirically show such behaviour and will be extremely thankful if the reviewer could point out any other ways of empirically evaluating the same.
We will try our best to incorporate that as well in our experiments.\"}", "{\"title\": \"Response to Reviewer #2\", \"comment\": \"First of all, we would like to thank the reviewer for reading our paper very carefully, appreciating our technical contribution, providing extremely positive comments, and commenting on the importance of Theorem 1. We also appreciate that the reviewer finds that the introduction did a great job and also that our paper contains enough information to implement the approach. All these comments are extremely encouraging.\\n\\nBelow we provide answers to some of the comments given by the reviewer.\\n\\n>> Some related older introductory approaches could also be quickly mentioned:\\n - linear layers represented as \\\"bottlenecks\\\" to enforce low rank explicitly\\n - or solving in manifold of reduced-rank matrices directly\\n\\n\\u2014 Thank you for the suggestion. We will definitely add a bit of surrounding literature on the following subjects: a) low rank weights and their applications (mainly to compression), low rank activations, and optimization on the manifold of low rank (and PSD) matrices.\\n\\n>> For simplicity, they target the same srank r=c*min(m,n) for all layers, even though only the sum of sranks is important. For CNNs with only a few linear layers is there any observable difference by lightly deviating from this? \\n\\n\\u2014 We would like to clarify that when we say linear layers, we also refer to the linear transformations in all layers (i.e. those present in the convolutional layer as well). So, our method is indeed applied to all the layers (both fully connected and convolutional) of the network. We computed m and n here the same as computed in Miyato et al. We did not vary the stable rank constraint for each layer; we keep it the same to avoid too many combinations but it could be the objective of a future study.
\\n\\n>> ..whether future work will also address how \\\"stable rank\\\" concepts might be extended to the convolutional layers. As a starting point, spectral values of the block-circulant matrices corresponding to convolutions have been described [ Sedghi et al. \\\"Singular Values of Convolutional Layers\\\" ].\\n\\n\\u2014 As mentioned earlier, we did use SRN for the convolutional layers as well. We will modify the text to make it explicit in order to avoid any confusion. Our singular value extraction is similar to the one presented in the spectral norm paper. Certainly, future work would consist of using the algorithm in Sedghi et al. to properly extract the singular values of convolutional layers and using different k\\u2019s (partitioning index) to run the experiments.\\n\\nMiyato, Takeru, et al. \\\"Spectral normalization for generative adversarial networks.\\\" arXiv preprint arXiv:1802.05957 (2018).\"}", "{\"title\": \"Response to Reviewer #1\", \"comment\": \"We would like to thank the reviewer for taking the time to read the paper carefully and for the encouraging comments. We have fixed the typos in the updated version. We hope our comments below address your questions.\\n\\n>> The method looks to be reasonably accessible to implement, although its compute cost is not properly characterized and some details (like the orthogonalization step necessary in power-iteration for more than one SV) seem to be omitted. \\n\\n\\u2014 The method is indeed very simple to implement. It requires a mere addition of a few lines to the existing code of the widely used spectral normalization (SN) algorithm. The exact computational complexity of doing SRN, i.e. to obtain $W_f$ in Line 10 in Algorithm 2, is $\\\\mathcal{O}(mn)$ where m,n are the dimensions of the matrix. The SRN and the SN algorithms only require one singular value.
However, to use the other variants (using the partitioning index k) that can be formed from Theorem 1, the cost would scale as $\\\\mathcal{O}(mnk)$ where k is the number of singular values we wish to preserve.\\n\\n>> What is the difference in time per training iteration? The authors should also indicate their hardware setup and overall training time.\\n\\n\\u2014 Running the experiment on an NVIDIA V100 for a ResNet110, the amount of time taken for one epoch with a batch size of 128 is as follows - a. Vanilla - 187.9 sec (15.5 hrs in total), b. Spectral - 207.9 sec (17 hrs in total), c. Stable-50 - 227.8 s (19 hrs in total), d. Stable-30 - 234 s (19.5 hrs in total). In brackets is the time taken to run a whole experiment on clean labels.\\n\\n>> Table 1 confusing as it lacks the test error. \\n\\n\\u2014 The particular example in Table 1 is for randomized labels with low learning rate, with and without weight decay. The test accuracy here is thus simply 0.01, which is the chance of getting the label right by guessing randomly. These experiments are in line with the shattering experiments present in Figure 3 for the harder to generalize setting (low lr). We can see that this is the harder to generalize setting (also noted in Wei et al. 2019) as the training errors here are indeed much lower than those in Figure 3, e.g. 40% error in high lr vs 10% error in low lr for the vanilla network. \\n\\nHowever, we are also $\\\\textbf{including the test error of the same configuration on clean labels}$ below. Note that they are almost the same and they are slightly worse than the high learning rate configuration (Figure 2).
This again indicates that this is the harder to generalize setting.\\n\\n\\n____________________________________________________________________\\n | Vanilla | Spectral | Stable-50 | Stable-30 |\\nW/o WD | 69.2 \\u00b10.5 |69 \\u00b10.1 | 69.1 \\u00b10.85 | 69.3 \\u00b10.4 |\\nWith WD | 70.4 \\u00b10.3 | 71.35 \\u00b10.25 | 70.6 \\u00b10.1 | 70.6 \\u00b10.1 | \\n\\n\\n>> Concise within 8 pages. \\n\\n\\u2014 We thank the reviewer for this suggestion. We had previously used all the pages available to us to make it detailed and easy to read. However, in the next version we will make it more concise, fix the spelling mistakes and remove other errors like using different names for the same model without mentioning it. While we haven\\u2019t been able to fit it into 8 pages, we have shortened it a bit by moving slightly less necessary things to the appendix. If there is something in particular you feel could be easily moved to the Appendix please do let us know.\"}", "{\"title\": \"An Interesting Connection\", \"comment\": \"Hi Authors,\\nThank you for your interesting paper. I noticed that your work concerning stable rank normalization is related to our paper [1], which showed that effective rank does not correlate with test performance in some cases. Please consider mentioning the relationship with our work in your next version.\\n\\n[1] https://arxiv.org/abs/1910.00359\"}", "{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposes normalizing the stable rank (ratio of the Frobenius norm to the spectral norm) of weight matrices in neural networks. 
They propose an algorithm that provably finds the optimal solution efficiently and perform experiments to show the effectiveness of this normalization technique.\\n\\nStable rank of the weight matrix is an interesting quantity that shows up in several generalization bounds. Therefore, regularizing such a measure could potentially help with generalization. The authors discuss this clearly and they provide an algorithm that provably finds the projection. I enjoyed reading this part of the paper. The only question that I have from this part is the role of the partitioning index. It looks like it is not really being used in the experiments later. Is that right? What is the importance of adding it to the paper if it is not being used?\\n\\nMy main issue is with the empirical evaluation of the normalization technique. I am not an expert in GANs so I leave that to other reviewers to judge. Experiments on random labels and looking at different generalization measures are all nice but they are not sufficient for showing that this normalization technique is actually useful in practice. Therefore, I suggest the authors put more emphasis on showing how their regularization can improve generalization in practice. My suggestions:\\n\\n- The authors only provided experiments on the CIFAR100 dataset to support their claim on improving generalization. I suggest adding at least one other dataset (CIFAR10, or even better ImageNet) to improve their empirical results. \\n\\n- Unfortunately, there are two major issues with the current CIFAR100 results: 1) the accuracies reported for ResNet and DenseNet are too low compared to what is reported in the literature. Please resolve this issue. 2) The current result is with training with a fixed number of epochs. Instead, train with a stopping criterion based on the cross-entropy loss on the training set and use the same stopping criterion for all models.
Also, add the plots that show training and test errors based on the number of epochs.\\n\\n\\nOverall, I think the paper is interesting but the empirical results are not sufficient to support the main claim of the paper (improving generalization). I'm willing to increase my score if the authors apply the above suggestions. \\n\\n\\n*************************\", \"after_author_rebuttals\": \"The authors have addressed my concerns adequately in the last revision and improved the experiment section. Therefore, I increase my score to 6.\"}", "{\"rating\": \"8: Accept\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #1\", \"review\": \"Stable Rank Normalization for Improved Generalization in Neural Networks and GANs\", \"summary\": \"This paper proposes to normalize network weights using the Stable Rank, extending the method of spectral normalization. The stable rank is the ratio of the squared Frobenius norm to the squared spectral norm (the top singular value). The authors motivate this decision in the context of Lipschitz constraints and noise sensitivity. The proposed method (combined with spectral norm as SRN alone does not explicitly subsume SN) is tested for both classification (using a wide range of popular models on CIFAR) and GAN training. Performance (classification accuracy and FID/IS) is measured and several auxiliary investigations are performed into generalization bounds, sample complexity, and sensitivity to hyperparameters.\", \"my_take\": \"This paper motivates and presents an interesting extension of spectral norm, and evaluates it quite well with thorough experiments in a range of settings.
The method looks to be reasonably accessible to implement, although its compute cost is not properly characterized and some details (like the orthogonalization step necessary in power-iteration for more than one SV) seem to be omitted. My two main concerns are that the results, while good, are not especially strong (the relative improvement is not very high) and that the paper could be made substantially more concise to fit within the 8 page soft limit (I felt there was plenty of material that could be moved to the appendix). All in all this is a reasonably clear accept to me (7/10) that with some cleanup could be a more solid 8, and I argue in favor of acceptance.\\n\\nNotes\\n\\n-The paper should characterize the runtime difference between SRN and SN. It is presently unclear how computationally intensive the method is. What is the difference in time per training iteration? The authors should also indicate their hardware setup and overall training time.\\n\\n-I found table 1 confusing as it lacks the test error. Are the test errors the same for all these models and the authors are just showing that for certain settings the SRN models have higher training error? If there is a difference in testing error, then this table is misleading, as one cares little about the training error if the test errors vary. If the test errors are approximately the same, then why should I care if the training error is higher? This would just be a way to decrease the stated \\u201cgeneralization gap,\\u201d which is not necessarily indicative of a better model (vis-\\u00e0-vis the commonly held misconception that comparing training error between models is properly indicative of relative overfitting). \\n\\n-Nowhere (that I could spot) in the body of the paper is it explained what \\u201cStable-50\\u201d, \\u201cSRN-50\\u201d, and \\u201cSRN-50%\\u201d are. 
I assume these all mean the same thing and it refers to the choice of the c hyperparameter, but this should be explicitly stated so that the reader knows which model corresponds to which settings.\\n\\nMinor\\n\\n-The footnotes appear to be out of order, footnote 1 appears on page 9\\n\\n-There are typos such as \\u201csimiplicity,\\u201d please proofread thoroughly.\"}", "{\"rating\": \"8: Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"While spectral normalization is often used to improve generalization by\\ndirectly bounding the Lipschitz constant of linear layers, recent works have\\nhighlighted alternate methods that aim to reduce generalization error. This\\npaper shows how to implement these \\\"stable rank\\\" normalizations with little\\ncomputational overhead. The authors then apply the method to a wide variety of\\nclassification and GAN problems to show the benefits of stable rank\\nnormalization.\\n\\nThis is a good paper and can be accepted. The added value comes from their\\nThm. 1, where they detail precisely how to project a real matrix onto one of\\nlower srank while preserving the largest k eigenvalues. The spectral preservation\\nk seems to be a new feature of their method. Full proofs and additional results are\\nprovided in appendices. There seems to be enough information to implement\\nthe described methods.\\n\\nThe paper is carefully written and introductory sections do a great job of putting\\nthe problem in perspective. 
Very few typos (\\\"calssification\\\", run a spell check).\\n\\n--------- Fun to think about ------ here are some extra comments\", \"some_related_older_introductory_approaches_could_also_be_quickly_mentioned\": \"- linear layers represented as \\\"bottlenecks\\\" to enforce low rank explicitly\\n - or solving in manifold of reduced-rank matrices directly\\n\\nFor simplicity, they target the same srank r=c*min(m,n) for all layers, even though only the sum\\nof sranks is important. For CNNs with only a few linear layers is there any observable\\ndifference by lightly deviating from this? Does the first linear layer typically contribute\\nthe lion's share to the sum of sranks?\\n\\nIt is interesting that by only addressing the linear layers of deep CNNs they\\nare able to see consistent improvements. [i.e. 3 linear layers after 101 CNN layers].\\n This makes me wonder whether future work will also address how \\\"stable rank\\\"\\nconcepts might be extended to the convolutional layers. As a starting point, spectral\\nvalues of the block-circulant matrices corresponding to convolutions have been\\ndescribed [ Sedghi et al. \\\"Singular Values of Convolutional Layers\\\" ].\"}" ] }
ryestJBKPB
Graph Neural Networks for Soft Semi-Supervised Learning on Hypergraphs
[ "Naganand Yadati", "Tingran Gao", "Shahab Asoodeh", "Partha Talukdar", "Anand Louis" ]
Graph-based semi-supervised learning (SSL) assigns labels to initially unlabelled vertices in a graph. Graph neural networks (GNNs), esp. graph convolutional networks (GCNs), inspired the current state-of-the-art models for graph-based SSL problems. GCNs inherently assume that the labels of interest are numerical or categorical variables. However, in many real-world applications such as co-authorship networks, recommendation networks, etc., vertex labels can be naturally represented by probability distributions or histograms. Moreover, real-world network datasets have complex relationships going beyond pairwise associations. These relationships can be modelled naturally and flexibly by hypergraphs. In this paper, we explore GNNs for graph-based SSL of histograms. Motivated by complex relationships (those going beyond pairwise) in real-world networks, we propose a novel method for directed hypergraphs. Our work builds upon existing works on graph-based SSL of histograms derived from the theory of optimal transportation. A key contribution of this paper is to establish generalisation error bounds for a one-layer GNN within the framework of algorithmic stability. We also demonstrate our proposed methods' effectiveness through detailed experimentation on real-world data. We have made the code available.
[ "Graph Neural Networks", "Soft Semi-supervised Learning", "Hypergraphs" ]
Reject
https://openreview.net/pdf?id=ryestJBKPB
https://openreview.net/forum?id=ryestJBKPB
ICLR.cc/2020/Conference
2020
{ "note_id": [ "mfmGtUPx9", "H1ezioD2jS", "B1l8cvghsH", "HkgWCIxnor", "B1l_Erl2sS", "Bygmcc07cB", "BygAYy6y5H", "rkllgc3atH" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798734195, 1573841818001, 1573812109678, 1573811913437, 1573811504155, 1572231818600, 1571962758021, 1571830247867 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1854/Authors" ], [ "ICLR.cc/2020/Conference/Paper1854/Authors" ], [ "ICLR.cc/2020/Conference/Paper1854/Authors" ], [ "ICLR.cc/2020/Conference/Paper1854/Authors" ], [ "ICLR.cc/2020/Conference/Paper1854/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper1854/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper1854/AnonReviewer3" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"This paper proposes and evaluates using graph convolutional networks for semi-supervised learning of probability distributions (histograms). The paper was reviewed by three experts, all of whom gave a Weak Reject rating. The reviewers acknowledged the strengths of the paper, but also had several important concerns including quality of writing and significance of the contribution, in addition to several more specific technical questions. The authors submitted a response that addressed these concerns to some extent. However, in post-rebuttal discussions, the reviewers chose not to change their ratings, feeling that quality of writing still needed to be improved and that overall a significant revision and another round of peer review would be needed. 
In light of these reviews, we are not able to recommend accepting the paper, but hope the authors will find the suggestions of the reviewers helpful in preparing a revision for another venue.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Summary of the Rebuttal\", \"comment\": \"We thank all the reviewers for their reviews.\\nAll the reviewers expressed concerns on the presentation (paper writing). We have addressed the concerns and uploaded a revised version of our submission. We give a summary of our rebuttal below.\\n\\n\\n$\\\\textbf{Reviewers #2 and #3 suggested evaluation on additional datasets}.$ We have evaluated our proposed method on ACM and arXiv datasets in Table 2 of our paper. The table demonstrates the superiority of our proposed method on the additional datasets as well.\\n\\n\\n$\\\\textbf{Reviewer #2 wanted to know the efficiency of a key step in our method}.$ The time complexity of the key step in our method is linear in the number of directed hyperedges. We use Simple GCN to efficiently perform the graph convolution operation. Empirically, we have also shown superior performance on the large-scale arXiv dataset.\\n\\n\\n$\\\\textbf{Reviewer #2 had questions on the applicability of our theoretical analysis to multiple layers}.$ Our analysis can be trivially applied to multiple layers ($d$ hops) by considering the graph corresponding to the adjacency $\\\\mathcal{A}=A^d$ where $A$ is the adjacency of the input graph. This is again motivated by the Simple GCN formulation in which powers of the adjacency are used to define the graph convolution.\\n\\n\\n$\\\\textbf{Reviewer #1 expressed a concern on the model being fragile}.$ To address this concern, we have conducted more ablation studies. The results are shown in table 4 (section A.4.1).
It clearly shows that $1$ layer of DHN and $2$ layers of GNN give the best performance.\\n\\n\\n$\\\\textbf{Reviewer #3 had concerns on the novelty of the paper}.$ To the best of our knowledge, we are the first to explore GNNs for soft semi-supervised learning. We propose a novel method for directed hypergraphs, demonstrate its improved performance, and provide five benchmark directed hypergraph datasets for soft semi-supervised learning. On the theoretical front, we modified the \\u201cgradient\\u201d in the Wasserstein space to satisfy the Lipschitz condition required in the algorithmic stability framework, and this is not seen in the existing literature.\"}", "{\"title\": \"Our response to Reviewer #3\", \"comment\": \"Thanks for the review.\\n\\n$\\\\textbf{On an improved writing}:$\\nThanks for pointing this out. Reviewer #1 had similar concerns regarding paper writing in section 3.3 and references. We have improved the writing in the revised version.\\n\\n\\n\\n$\\\\textbf{On the novelty of the paper:}$\\nTo the best of our knowledge, we are the first to explore GNNs for soft semi-supervised learning. We propose a novel method for directed hypergraphs. On the empirical side, we provide five benchmark datasets for soft semi-supervised learning on directed hypergraphs. \\n\\nOn the theoretical front, the main novelty is to provide bounds for a learning problem \\u201cvalued in the Wasserstein space\\u201d. This requires some technicality as the Wasserstein space is an abstract metric space without linear structure. \\n\\nSpecifically, we have to modify the \\u201cgradient\\u201d in the Wasserstein space as the straightforward version does not satisfy the Lipschitz condition required in the algorithmic stability framework. This modification is not seen in the existing literature to the best of our knowledge. It can be thought of as a generalisation of the \\u201cgradient clipping\\u201d operation in [Hardt et al., ICML'16].
\\n\\n\\n\\n$\\\\textbf{On more benchmark datasets:}$\\nFollowing the reviewer\\u2019s suggestion, we experimented on two additional benchmark datasets (ACM and arXiv) to evaluate the proposed method. The results are shown in Table 2 of our revised paper. As we can see, our method is consistently superior to the baselines. \\n\\n\\n\\n[Hardt et al., ICML'16] Train faster, generalize better: Stability of stochastic gradient descent\"}", "{\"title\": \"Our response to Reviewer #1\", \"comment\": \"Thanks for the review.\\n\\n$\\\\textbf{On the design being fragile:}$\\nWe have conducted more ablation studies on the layers of our method and the results are shown in table 4 (section A.4.1). \\nAs we can see, the optimal configuration for our method is 1 DHN layer and 2 GNN layers. \\n2 DHN layers degrade the performance and we believe this is because of the oversmoothing issue [Li et al., AAAI'18] caused by an additional matrix multiplication by the incidence matrix.\\n\\n\\n\\n$\\\\textbf{On an improved writing:}$\\nThanks for the comments on presentation. We have corrected all the suggested changes in the revised version. We have also explicitly said what \\u201csuitable graph\\u201d means (clique expansion for HGNN and mediator-based Laplacian for HyperGCN). \\n\\n\\n\\n[Li et al., AAAI'18] Deeper Insights into Graph Convolutional Networks for Semi-Supervised Learning\"}", "{\"title\": \"Our response to Reviewer #2\", \"comment\": \"Thanks for the review.\\n\\n$\\\\textbf{On the efficiency of the method after the transformation:}$\\nThe time complexity of GNN after the transformation is linear in the number of directed hyperedges. The simple GCN formulation [Wu et al., ICML\\u201919] enables us to efficiently apply our method to large-scale applications.\\n\\n\\n\\n$\\\\textbf{On a larger challenging dataset:}$\\nFollowing the reviewer\\u2019s suggestion, we experimented on the large-scale arXiv dataset [Clement et al].
The results are shown in Table 2 of our revised paper. As we can see, our proposed soft-DHN outperforms the baselines because it exploits directions among hyperedges (while the baselines do not).\\n\\n\\n\\n$\\\\textbf{On the theoretical analysis:}$\\nOur analysis can be applied to a deeper network (with depth $d$) by considering the graph corresponding to the adjacency $B = A^d$ where $A$ is the adjacency of the given graph. This is again motivated by the Simple GCN formulation in which powers of A are used to define graph convolution. Also, as suggested by the reviewer, we have briefly reviewed and summarised the equations in section 4 of our revised paper and improved the writing.\\n\\n\\n[Wu et al., ICML\\u201919] Simplifying Graph Convolutional Networks\\n[Clement et al.] On the Use of ArXiv as a Dataset, ICLR 2019 workshop RLGM\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"In this paper, the authors propose a soft semi-supervised learning approach on a hypergraph. On the one hand, the vertex labels should be not only numerical or categorical variables but also probability distributions. On the other hand, hypergraphs provide a much more flexible means to encode the real-world complicated relationship compared to the essential pairwise association. Specifically, the authors carefully obtain the generalization error bounds for a one-layer graph neural network. The experiments support the theory and provide the empirical verification of the proposed method. The appendix offers plenty of details that are helpful for the reader to understand both the theoretical and practical perspectives of this paper. Also, the code looks good to me.
The structure of the provided codebase is clear and well documented.\", \"i_have_some_questions_and_suggestions_for_the_authors\": \"1. In the method section 3.3, the authors said that \\u201cA key idea of our approach is to treat each hyperedge as a vertex of the graph\\u201d. After this transformation, the graph could be super sparse. So I\\u2019d like to know more about the efficiency of the proposed method, because the datasets in this paper could not be considered giant graphs. The efficiency could be much more critical for large-scale applications. Besides, such conversion could be one of the most significant technical novelties in this paper, which makes me worry about the methodology contribution of this submission.\\n\\n2. In the theoretical analysis section 4, there are lots of references for the existing lemmas and theorems. It could be much better if the authors could briefly review and summarize these equations before applying them. By the way, I appreciate the detailed appendix at the same time. But such a summary could complete the paper in a more self-contained way. Also, the authors should improve the writing of the paper at the same time.\\n\\n3. In the experiments section 5, I wonder if the authors would consider some more challenging datasets with larger graphs to evaluate the proposed method. Also, the theoretical analysis is about one-layer networks, which looks technically sound to me. However, in practice, we could not use only a one-layer network for graph classification, even for a small graph.\"}
A generalization error bound was proposed, adapted to the semi-supervised setting with the Wasserstein loss.\\n3. Empirical results demonstrate the effectiveness of the proposed method.\", \"the_algorithmic_contribution_of_this_paper_is_clear\": \"it proposes a new network architecture that (1) initializes latent features H^{(0)}_E for hyperedges from a \\\"hyperedge graph\\\" and (2) learns latent features for each node and hyperedge using a GCN type network. Since the latent features are formulated as discrete distributions, a Wasserstein distance can be applied for training with the Sinkhorn approximation algorithm.\\n\\nI think the weakness of this paper is twofold, which makes it not ready to publish. First, the paper claims that the performance gain results from the exploitation of directed hyperedges. This is reasonable if I barely see the results for Soft-DHN. However, I find the design of the hyperedge GNN to be fragile regarding the number of layers from the result in Table 4. Normally it is reasonable to use a two-layer GNN, but the result is very bad when doing so (see Soft-DHN 2 layers result in Sec. A.4.1). Also, the result of the proposed model is still good when not applying a hypergraph GNN. So I'm confused about where the performance gain comes from.\\n\\nAnother weakness is the paper writing. Below are a few comments:\\n[Page 3, Sec. 3.1] By saying \\\"t\\\\neq\\\\Phi\\\", do you mean \\\"t\\\\neq\\\\emptyset\\\"?\\n[Page 3, Sec. 3.2] Please introduce the notation (M, C) right after it first appears in the third line of Sec. 3.2.\\n[Page 3, Sec. 3.2] The notation n,m are confusing. Do you mean m=|E|?\\n[Page 3, Sec. 3.2] I cannot see how Z=h(\\\\mathcal{H},X_V,X_E) maps each vertex to a probability distribution. Do you mean each row of Z is a probability distribution for each vertex?\\n[Page 4, Sec. 3.2] Please define E_D. Is E_D the same as E_d?\\n[Page 4, Sec.
3.3] Please be explicit: by saying \\\"approximate the hypergraph by a suitable graph\\\", what do you mean by \\\"suitable graph\\\"? Does it mean using cliques in place of hyperedges?\\n[Page 4, Sec. 3.3] t=0,...,\\\\tau-1\\n\\nSince the writing significantly affects the paper's readability, and the core contribution of the paper seems incremental, I will vote to reject this paper.\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This work explores hypergraph-based SSL of histograms. DHN enables soft SSL based on existing tools from optimal transportation. The idea of treating hyperedges as vertices of another graph is novel and the theoretical analysis is sound.\\n\\nHowever, the paper has the following issues:\\n\\n1) The writing is poor, especially the references and the description of how and why DHN works (section 3.3).\\n\\n2) The novelty is restricted. It seems that the only innovation is introducing the information from the hyperedges into $H_E^{(0)}$.\\n\\n3) Though the experiments on Cora and DBLP have revealed the superiority of DHN, the authors still need a more thorough empirical evaluation on some challenging benchmarks to draw the conclusion.\\n\\nI'm willing to increase my score if the concerns are addressed.\"}
BylsKkHYvH
Why Not to Use Zero Imputation? Correcting Sparsity Bias in Training Neural Networks
[ "Joonyoung Yi", "Juhyuk Lee", "Kwang Joon Kim", "Sung Ju Hwang", "Eunho Yang" ]
Handling missing data is one of the most fundamental problems in machine learning. Among many approaches, the simplest and most intuitive way is zero imputation, which treats the value of a missing entry simply as zero. However, many studies have experimentally confirmed that zero imputation results in suboptimal performances in training neural networks. Yet, none of the existing work has explained what brings such performance degradations. In this paper, we introduce the variable sparsity problem (VSP), which describes a phenomenon where the output of a predictive model largely varies with respect to the rate of missingness in the given input, and show that it adversarially affects the model performance. We first theoretically analyze this phenomenon and propose a simple yet effective technique to handle missingness, which we refer to as Sparsity Normalization (SN), that directly targets and resolves the VSP. We further experimentally validate SN on diverse benchmark datasets, to show that debiasing the effect of input-level sparsity improves the performance and stabilizes the training of neural networks.
[ "Missing Data", "Collaborative Filtering", "Health Care", "Tabular Data", "High Dimensional Data", "Deep Learning", "Neural Networks" ]
Accept (Poster)
https://openreview.net/pdf?id=BylsKkHYvH
https://openreview.net/forum?id=BylsKkHYvH
ICLR.cc/2020/Conference
2020
{ "note_id": [ "CQaoL3CA_j", "SkxqwIb3or", "rkl98aE9or", "BJgXpqjYsS", "HJxNcFitiH", "rylbIKoFsH", "Hkg_4YiFiH", "Hkx5MFotjH", "SJeshdoFoH", "Bkl49OsYoS", "Bkl5QuoYoH", "SkeTlustsB", "Bkei_wiFoB", "S1g-T-zX5B", "r1eWppVRFr", "rkgNqJ7iKS", "r1lmuyH9tr" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798734138, 1573815906405, 1573698898207, 1573661371188, 1573661068287, 1573661000582, 1573660975734, 1573660945803, 1573660851035, 1573660812455, 1573660705952, 1573660660651, 1573660530713, 1572180409049, 1571863993166, 1571659660264, 1571602282740 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1853/Authors" ], [ "ICLR.cc/2020/Conference/Paper1853/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper1853/Authors" ], [ "ICLR.cc/2020/Conference/Paper1853/Authors" ], [ "ICLR.cc/2020/Conference/Paper1853/Authors" ], [ "ICLR.cc/2020/Conference/Paper1853/Authors" ], [ "ICLR.cc/2020/Conference/Paper1853/Authors" ], [ "ICLR.cc/2020/Conference/Paper1853/Authors" ], [ "ICLR.cc/2020/Conference/Paper1853/Authors" ], [ "ICLR.cc/2020/Conference/Paper1853/Authors" ], [ "ICLR.cc/2020/Conference/Paper1853/Authors" ], [ "ICLR.cc/2020/Conference/Paper1853/Authors" ], [ "~Jaeyoon_Yoo1" ], [ "ICLR.cc/2020/Conference/Paper1853/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper1853/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper1853/AnonReviewer1" ] ], "structured_content_str": [ "{\"decision\": \"Accept (Poster)\", \"comment\": \"This paper investigates the problem of using zero imputation when input features are missing. The authors study this problem, propose a solution, and evaluate on several benchmark datasets. 
The reviewers were generally positive about the paper, but had some questions and concerns about the experimental results. The authors addressed these concerns in the rebuttal. The reviewers are generally satisfied and believe that the paper should be accepted.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Additional Comment of Response to Reviewer #1 [Part 3/4]\", \"comment\": \"We additionally investigate how the model outputs change with the number of known entries of the input when each normalization is applied (as we show in Figure 1). Since BN and LN do not specifically target VSP in the first place, they do not completely solve the bias caused by input sparsity levels. In particular, this phenomenon becomes more pronounced when L2 weight decay is not applied (or weight decay parameter lambda is small), hence we have large absolute values of weights. The details can be found in Appendix H.1.2.\"}", "{\"title\": \"Official Blind Review #3\", \"comment\": \"Thank you for the responses. Adding the comparison with multiple recent imputation methods will surely significantly improve the impact of the paper.\"}", "{\"title\": \"Response for Jaeyoon Yoo\", \"comment\": \"Thank you for your comments. Please refer to the answer to R1 or R3.\\n\\nThere is a minor note about the dropout method in the GMMC [2] paper you mentioned. The dropout method uses a single drop (missing) probability uniformly across all instances of the dataset. On the other hand, our algorithm normalizes each data instance with its own missing rate. The global drop probability does not solve the variable sparsity problem, so it shows poor performance as seen in Appendix H. For example, on Movielens 100K dataset (AutoRec with item vector encoding), the RMSE of dropout and SN is as follows: 0.9268 \\u00b1 0.0261 (dropout) vs 0.8809 \\u00b1 0.0011 (zero imputation w/ SN). 
For other various datasets and models, SN also shows similar or better performance compared to the dropout.\\n\\n[2] Marek \\u0301Smieja, \\u0141ukasz Struski, Jacek Tabor, Bartosz Zieli \\u0301nski, and Przemys\\u0142aw Spurek. Processing of missing data by neural networks. In Advances in Neural Information Processing Systems, pp.2719\\u20132729, 2018.\"}", "{\"title\": \"Response to Reviewer #3 [Part 1/4]\", \"comment\": \"We thank the reviewer for thoughtful and constructive feedback.\\n\\n[1. Powerful backbone architecture on collaborative filtering datasets]\\n- We use AutoRec (Sedhain et al., 2015) and its variant CF-NADE (Zheng et al., 2016) without any intention simply because many modern nn-based models are in fact variants of AutoRec. But, following the reviewers\\u2019 valuable suggestion, we consider CF-UIcA [3], one of the current state-of-the-arts, as a new backbone on collaborative filtering datasets, and consistently achieve even stronger performances (in terms of RMSE):\\n - Movielens 100K: 0.8945 \\u00b1 0.0024 (w/o SN) vs. 0.8793 \\u00b1 0.0017 (w/ SN)\\n - Movielens 1M: 0.8223 \\u00b1 0.0016 (w/o SN) vs. 0.8178 \\u00b1 0.0007 (w/ SN)\\n\\nNote that we do not test for Movielens 10M because the authors of CF-UIcA did not provide the results for it due to the complexity of the model. \\n\\n[3] Du, C., Li, C., Zheng, Y., Zhu, J., & Zhang, B. (2018, April). Collaborative filtering with user-item co-autoregressive models. In Thirty-Second AAAI Conference on Artificial Intelligence.\"}", "{\"title\": \"Response to Reviewer #3 [Part 2/4]\", \"comment\": \"[2. Comparison with other missing handling techniques]\\n- First of all, the main contribution of our paper is to provide a deeper understanding and the corresponding solution about the issue that the zero imputation, the simplest and most intuitive way of handling missing data, degrades the performance in training neural networks. 
Hence, we only considered the vanilla zero imputation as our baseline in the submission since we do not claim that our corrected zero imputation (with SN) is the best for all tasks. \\n\\nHowever, some reviewers wanted to see direct comparisons against other state-of-the-art imputation techniques such as GAIN [1] and GMMC [2]. Hence, we performed additional comparisons against them on tasks considered in our paper as well as available tasks in [1] and [2]. Interestingly (and thanks to the reviewers who raised this issue), our corrected zero imputation (with SN), even with its simplicity, shows at least comparable or significantly better performances over all baselines, on all considered tasks. Here only two cases (Movielens 100K for high dimensional/missing rate case; NHIS dataset for low dimensional/missing rate case) are shown as examples and the rest are described in Appendix H:\\n\\n(Movielens 100K using item vector encoding) \\n----------------------------------------------------------------\\n Model | Test RMSE\\n----------------------------------------------------------------\\n Zero Imputation w/o SN | 0.8835 \\u00b1 0.0003\\n Zero Imputation w/ SN | 0.8809 \\u00b1 0.0011\\n----------------------------------------------------------------\\n Zero Imputation w/ BN | 0.9205 \\u00b1 0.0081\\n Zero Imputation w/ LN | 0.9396 \\u00b1 0.0141\\n Dropout | 0.9268 \\u00b1 0.0261\\n Mean Imputation | 0.9206 \\u00b1 0.0012\\n Median Imputation | 0.9196 \\u00b1 0.0017\\n kNN | 0.9133 \\u00b1 0.0011\\n MiCE | 0.9209 \\u00b1 0.0022\\n SoftImpute | 0.8867 \\u00b1 0.0007\\n GMMC | 0.9109 \\u00b1 0.0166\\n GAIN | 1.0354 \\u00b1 0.0101\\n----------------------------------------------------------------\\n(NHIS dataset diabetes identification task) \\n----------------------------------------------------------------\\n Model | Test AUROC\\n----------------------------------------------------------------\\n Zero Imputation w/o SN | 0.9121 \\u00b1 0.0097\\n Zero Imputation w/ SN | 
0.9283 \\u00b1 0.0011\\n----------------------------------------------------------------\\n Zero Imputation w/ BN | 0.9026 \\u00b1 0.0105\\n Zero Imputation w/ LN | 0.9127 \\u00b1 0.0056\\n Dropout | 0.9101 \\u00b1 0.0054\\n Mean Imputation | 0.9117 \\u00b1 0.0075\\n Median Imputation | 0.8975 \\u00b1 0.0060\\n kNN | 0.9107 \\u00b1 0.0075\\n MiCE | 0.9224 \\u00b1 0.0021\\n SoftImpute | 0.9224 \\u00b1 0.0019\\n GMMC | 0.9109 \\u00b1 0.0045\\n GAIN | 0.9091 \\u00b1 0.0067\\n----------------------------------------------------------------\\nNote that the evaluation metrics are different for above two cases (RMSE for Movielens and AUROC for NHIS). In the most low dimensional and low missing rate cases, the problem is relatively easy, so all imputation methods work comparably well. \\n\\nWe still believe that each imputation technique has its own advantages and disadvantages, and we do not claim that our corrected zero imputation (with SN) is always the best. However, we do believe that these new experiments show SN is a sufficiently competitive technique.\\n\\n[1] Jinsung Yoon, James Jordon, and Mihaela Van Der Schaar. Gain: Missing data imputation using generative adversarial nets. In Proceedings of the 35th International Conference on Machine Learning-Volume 71, 2018\\n[2] Marek \\u0301Smieja, \\u0141ukasz Struski, Jacek Tabor, Bartosz Zieli \\u0301nski, and Przemys\\u0142aw Spurek. Processing of missing data by neural networks. In Advances in Neural Information Processing Systems, pp.2719\\u20132729, 2018.\"}", "{\"title\": \"Response to Reviewer #3 [Part 3/4]\", \"comment\": \"[3. MCAR assumption]\\n- We also fully agree that MCAR assumption is the one that are generally not well established in real cases. But, this assumption drastically simplifies our statements (as we know, theoretical analysis always requires some simplified assumptions and hence there's some gap with reality). 
Without this assumption, we can't get such neat statements since we have to worry about some worst cases in our analysis, but this does not mean SN is ineffective even in theory; we can still see that SN can reduce the dependency on sparsity level to some extent, although in a much more complex form. In order to make up for having such a simplified assumption, we experimentally show that variable sparsity problems actually exist in various real-world datasets even where the MCAR assumption does not hold, and that SN can relieve this problem.\"}", "{\"title\": \"Response to Reviewer #3 [Part 4/4]\", \"comment\": \"[4. Does your model assume all input values are numerical but not categorical?]\\n- There is no restriction about the type of inputs in our analysis and the construction of our algorithm. In fact, CF-NADE and CF-UIcA for collaborative filtering datasets in our experiments, only allow categorical values for their inputs where SN successfully achieves the performance improvement. Another example of using SN for categorical input is density estimation tasks (binarized MNIST) in Section 4.5.\"}", "{\"title\": \"Response to Reviewer #1 [Part 1/4]\", \"comment\": \"We thank the reviewer for thoughtful and constructive feedback.\\n\\n[1. Comparison with other missing handling techniques]\\n- First of all, the main contribution of our paper is to provide a deeper understanding and the corresponding solution about the issue that the zero imputation, the simplest and most intuitive way of handling missing data, degrades the performance in training neural networks. Hence, we only considered the vanilla zero imputation as our baseline in the submission since we do not claim that our corrected zero imputation (with SN) is the best for all tasks. \\n\\nHowever, some reviewers wanted to see direct comparisons against other state-of-the-art imputation techniques such as GAIN [1] and GMMC [2]. 
Hence, we performed additional comparisons against them on tasks considered in our paper as well as available tasks in [1] and [2]. Interestingly (and thanks to the reviewers who raised this issue), our corrected zero imputation (with SN), even with its simplicity, shows at least comparable or significantly better performances over all baselines, on all considered tasks. Here only two cases (Movielens 100K for high dimensional/missing rate case; NHIS dataset for low dimensional/missing rate case) are shown as examples and the rest are described in Appendix H:\\n\\n(Movielens 100K using item vector encoding) \\n----------------------------------------------------------------\\n Model | Test RMSE\\n----------------------------------------------------------------\\n Zero Imputation w/o SN | 0.8835 \\u00b1 0.0003\\n Zero Imputation w/ SN | 0.8809 \\u00b1 0.0011\\n----------------------------------------------------------------\\n Zero Imputation w/ BN | 0.9205 \\u00b1 0.0081\\n Zero Imputation w/ LN | 0.9396 \\u00b1 0.0141\\n Dropout | 0.9268 \\u00b1 0.0261\\n Mean Imputation | 0.9206 \\u00b1 0.0012\\n Median Imputation | 0.9196 \\u00b1 0.0017\\n kNN | 0.9133 \\u00b1 0.0011\\n MiCE | 0.9209 \\u00b1 0.0022\\n SoftImpute | 0.8867 \\u00b1 0.0007\\n GMMC | 0.9109 \\u00b1 0.0166\\n GAIN | 1.0354 \\u00b1 0.0101\\n----------------------------------------------------------------\\n(NHIS dataset diabetes identification task) \\n----------------------------------------------------------------\\n Model | Test AUROC\\n----------------------------------------------------------------\\n Zero Imputation w/o SN | 0.9121 \\u00b1 0.0097\\n Zero Imputation w/ SN | 0.9283 \\u00b1 0.0011\\n----------------------------------------------------------------\\n Zero Imputation w/ BN | 0.9026 \\u00b1 0.0105\\n Zero Imputation w/ LN | 0.9127 \\u00b1 0.0056\\n Dropout | 0.9101 \\u00b1 0.0054\\n Mean Imputation | 0.9117 \\u00b1 0.0075\\n Median Imputation | 0.8975 \\u00b1 0.0060\\n kNN | 0.9107 
\\u00b1 0.0075\\n MiCE | 0.9224 \\u00b1 0.0021\\n SoftImpute | 0.9224 \\u00b1 0.0019\\n GMMC | 0.9109 \\u00b1 0.0045\\n GAIN | 0.9091 \\u00b1 0.0067\\n----------------------------------------------------------------\\nNote that the evaluation metrics are different for above two cases (RMSE for Movielens and AUROC for NHIS). In the most low dimensional and low missing rate cases, the problem is relatively easy, so all imputation methods work comparably well. \\n\\nWe still believe that each imputation technique has its own advantages and disadvantages, and we do not claim that our corrected zero imputation (with SN) is always the best. However, we do believe that these new experiments show SN is a sufficiently competitive technique.\\n\\n[1] Jinsung Yoon, James Jordon, and Mihaela Van Der Schaar. Gain: Missing data imputation using generative adversarial nets. In Proceedings of the 35th International Conference on Machine Learning-Volume 71, 2018\\n[2] Marek \\u0301Smieja, \\u0141ukasz Struski, Jacek Tabor, Bartosz Zieli \\u0301nski, and Przemys\\u0142aw Spurek. Processing of missing data by neural networks. In Advances in Neural Information Processing Systems, pp.2719\\u20132729, 2018.\"}", "{\"title\": \"Response to Reviewer #1 [Part 2/4]\", \"comment\": \"[2. Your algorithm is only explained with neural net framework, how can we extend it to the other machine learning models?]\\n\\n- In this paper, we analyze the variable sparsity problem with the focus of neural networks. Our theoretical analysis can be seamlessly applied to certain non-neural network models (e.g. shallow linear regression is a special case of our analysis without hidden layers). However, we need further research to confirm whether VSP occurs for all machine learning models in general. \\n\\nWe believe that VSP for each model needs to be studied separately. 
Even within the neural network framework we are focusing on, there are different results depending on the type of activation functions and other details, as you can see in Theorems 1-3.\"}", "{\"title\": \"Response to Reviewer #1 [Part 3/4]\", \"comment\": \"[3. Does Batch Normalization make similar effect to SN?]\\n\\n- We thank the reviewer for the interesting suggestion. We performed an experimental comparison against both BN and Layer Normalization (LN) (since BN could stabilize the statistics of hidden layers but does not consider instance-wise characteristics). Please see Appendix H for the results and discussions.\\n\\nIn our new experiments, SN significantly outperforms both LN and BN in most cases (or yields at least comparable performance in all cases). Note that, in certain settings, LN and BN perform even worse than vanilla zero imputation by a large margin. For instance, on Movielens 100K (AutoRec, item vector encoding), the RMSE of vanilla zero imputation, LN, and BN is as follows: 0.8835 \\u00b1 0.0003 (zero imputation w/o SN) vs. 0.9396 \\u00b1 0.0141 (LN) vs. 0.9205 \\u00b1 0.0081 (BN). Thus neither BN nor LN seems as effective as SN in solving the VSP. \\n\\nIn all our previous experiments, we inadvertently did not consider Batch Normalization (BN) simply because BN is not widely used in dealing with tabular datasets despite its universality on vision tasks.\"}", "{\"title\": \"Response to Reviewer #1 [Part 4/4]\", \"comment\": \"[4. Please provide labels for the x-axes in the figures.]\\n- We apologize for the confusion. We fixed this issue in our revised paper.\"}", "{\"title\": \"Response to Reviewer #2\", \"comment\": \"We thank the reviewer for thoughtful and constructive feedback.\\n\\n[Powerful backbone architecture on collaborative filtering datasets]\\n- We use AutoRec (Sedhain et al., 2015) and its variant CF-NADE (Zheng et al., 2016) without any intention simply because many modern nn-based models are in fact variants of AutoRec. 
But, following the reviewers\\u2019 valuable suggestion, we consider CF-UIcA [3], one of the current state-of-the-art models, as a new backbone on collaborative filtering datasets, and consistently achieve even stronger performance (in terms of RMSE):\\n - Movielens 100K: 0.8945 \\u00b1 0.0024 (w/o SN) vs. 0.8793 \\u00b1 0.0017 (w/ SN)\\n - Movielens 1M: 0.8223 \\u00b1 0.0016 (w/o SN) vs. 0.8178 \\u00b1 0.0007 (w/ SN)\\n\\nNote that we do not test on Movielens 10M because the authors of CF-UIcA did not provide results for it due to the complexity of the model. \\n\\n[3] Du, C., Li, C., Zheng, Y., Zhu, J., & Zhang, B. (2018, April). Collaborative filtering with user-item co-autoregressive models. In Thirty-Second AAAI Conference on Artificial Intelligence.\"}", "{\"title\": \"need to compare with other imputation method\", \"comment\": \"Hi, it's an interesting paper on handling missing data.\\n\\nBut it needs a more thorough comparison.\\n\\nAs far as I understand correctly, w/o SN is zero imputation and w/ SN is your variant of zero imputation.\\n\\nThen, there should be a comparison between yours and other imputation methods.\\n\\nOne example is Processing of missing data by neural networks, 2018 NIPS.\\n\\n\\\"dropout\\\" in the paper seems to correspond to SN, but it showed almost always worse performance, even than k-NN imputation.\\n\\nAlthough I mentioned only one, there are many other imputation methods for handling missing data, so they should also be covered.\\n\\nThanks\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper studies a very interesting phenomenon in machine learning called VSP: the output of the model is highly affected by the level of missing values in its input. 
The authors demonstrate the existence of such a phenomenon empirically, analyze its root cause theoretically, and propose a simple yet effective normalization method to tackle the problem. Several experiments demonstrate the effectiveness of this method.\\n\\nIn general I think the paper is decent and elegant. It is motivated by a real-world pain point, gives a rigorous study of the root cause, and the proposed method is very effective. To the best of my knowledge there is no prior work looking deeply into this area, and this paper does bring new insights to the community. As a result I would vote for its acceptance.\\n\\nOne issue is that I find the backbone methods in the experiments somewhat out-of-date, for example, AutoRec (2015) and CF-NADE (2016). I admit that I\\u2019m not an expert in the field of recommendation but still think that more recent and powerful baseline algorithms should be applied to further demonstrate the true effectiveness of Sparsity Normalization.\"}", "{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper provides a novel solution to the variable sparsity problem, where the output of neural networks is biased with respect to the number of missing inputs. The authors proposed a sparsity normalization algorithm to process the input vectors to counter the bias. In experiments, the authors evaluated the proposed sparsity normalization model on multiple datasets: collaborative filtering datasets, electronic medical records datasets, single-cell RNA sequence datasets and UCI datasets. 
Results show that the proposed normalization method improves the prediction performance and the predicted values of the neural network are more uniformly distributed with respect to the number of missing entries.\\n\\nThe paper describes a clear and specific machine learning problem. The authors then demonstrate that a simple normalization strategy is capable of fixing the issue of biased prediction. The paper has a well-organized structure to convey the motivation. Therefore, my opinion on this paper leans toward acceptance. My questions are mainly on the experiment section:\\n\\n1) As shown in Table 2, there are various new collaborative filtering methods proposed after 2015; why did the authors choose to extend AutoRec (Sedhain et al., 2015) and not other newer methods?\\n\\n2) In the experiments, you compare your model with zero imputation (please correct me if w/o SN is not zero imputation). However, I think it is a common practice in machine learning to perform imputation with mean or median values. I'm interested in knowing whether filling with mean/median values works with these datasets.\\n\\n3) In section 4.5, you mentioned that \\\"SN is effective even when MCAR assumption is not established\\\". However, I'm still not clear about the reason. I believe many machine learning datasets have NMAR (not missing at random) type of missing data, but not MCAR. So this is an important issue for me.\\n\\n4) Does your model assume all input values are numerical but not categorical?\"}
This normalization scales the input to the neural network so that the output would not be affected much. While such simple yet helpful algorithms are plausible, there are a number of remaining issues:\\n1-\\tZero imputation, as the authors mentioned, is not an acceptable algorithm for imputation, and improving on it via the normalization proposed in the paper cannot be counted as an exciting move in this area unless an extensive comparison shows its benefits over the many other existing techniques. I am interested to see how the results would be if you compared this simple algorithm with more complicated ones like GAIN or MisGAN. It is argued in the paper that with high dimensional data, your algorithm is more acceptable, but how would it be in other cases?\\n2-\\tYour algorithm is only explained with neural net framework, how can we extend it to the other machine learning models?\\n3-\\tIs batch normalization used in your experiments? Scaling the activation in one layer to reduce its impact on the next layer is somehow similar to what happens in batch normalization, and I am wondering if BN makes any similar effect?\\n4-\\tPlease provide labels for the x-axes in the figures.\\n\\n------------------------------------------\\n\", \"after_rebuttal\": \"Thanks for adding the extra experiments.\\nLooking at Table 9 in the appendix, I am a bit surprised to see that sometimes mean imputation works better than MICE (GAIN usually works well with large data). Maybe it is attributable to the missing features. How did you choose to apply 20% missingness? Randomly?\"}" ] }
Byg5KyHYwr
Self-Imitation Learning via Trajectory-Conditioned Policy for Hard-Exploration Tasks
[ "Yijie Guo", "Jongwook Choi", "Marcin Moczulski", "Samy Bengio", "Mohammad Norouzi", "Honglak Lee" ]
Imitation learning from human-expert demonstrations has been shown to be greatly helpful for challenging reinforcement learning problems with sparse environment rewards. However, it is very difficult to achieve similar success without relying on expert demonstrations. Recent works on self-imitation learning showed that imitating the agent's own past good experience could indirectly drive exploration in some environments, but these methods often lead to sub-optimal and myopic behavior. To address this issue, we argue that exploration in diverse directions by imitating diverse trajectories, instead of focusing on limited good trajectories, is more desirable for hard-exploration tasks. We propose a new method of learning a trajectory-conditioned policy to imitate diverse trajectories from the agent's own past experiences and show that such self-imitation helps avoid myopic behavior and increases the chance of finding a globally optimal solution for hard-exploration tasks, especially when there are misleading rewards. Our method significantly outperforms existing self-imitation learning and count-based exploration methods on various hard-exploration tasks with local optima. In particular, we report a state-of-the-art score of more than 20,000 points on Montezuma's Revenge without using expert demonstrations or resetting to arbitrary states.
[ "imitation learning", "hard-exploration tasks", "exploration and exploitation" ]
Reject
https://openreview.net/pdf?id=Byg5KyHYwr
https://openreview.net/forum?id=Byg5KyHYwr
ICLR.cc/2020/Conference
2020
{ "note_id": [ "Iur9nqT9Nl", "HyeWXOo3iH", "HkxBme2ijr", "HJxc0JhjsH", "rJxv9CsijH", "rylzEpoojB", "SyxtVdWg9S", "rJgVvrnCKS", "ryexs85aYS" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798734107, 1573857304920, 1573793820981, 1573793746342, 1573793423013, 1573793066393, 1571981361167, 1571894619745, 1571821207865 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1851/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper1851/Authors" ], [ "ICLR.cc/2020/Conference/Paper1851/Authors" ], [ "ICLR.cc/2020/Conference/Paper1851/Authors" ], [ "ICLR.cc/2020/Conference/Paper1851/Authors" ], [ "ICLR.cc/2020/Conference/Paper1851/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper1851/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper1851/AnonReviewer2" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"This paper addresses the problem of exploration in challenging RL environments using self-imitation learning. The idea behind the proposed approach is for the agent to imitate a diverse set of its own past trajectories. To achieve this, the authors introduce a policy conditioned on trajectories. The proposed approach is evaluated on various domains including Atari Montezuma's Revenge and MuJoCo.\\n\\nGiven that the evaluation is purely empirical, the major concern is in the design of experiments. The amount of stochasticity induced by the random initial state alone does not lead to convincing results regarding the performance of the proposed approach compared with baselines (e.g. Go-Explore). With such simple stochasticity, it is not clear why one could not use a model to recover from it and then rely on an existing technique like Go-Explore. 
Although this paper tackles an important problem (hard-exploration RL tasks), all reviewers agreed that this limitation is crucial and I therefore recommend to reject this paper.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Significance needs to be demonstrated rather than suggested\", \"comment\": \"This reviewer agrees with the authors on the significance of the challenge of hard-exploration problems in stochastic where neither state-reset functionality nor human demonstrations are available. Please do keep working in this area and continue to recruit others to work on this challenge.\\n\\nWe need this field to move beyond task formulations that admit unsatisfactory-feeling solutions (Go-Explore). They way to get there is not to ignore or avoid using the exploits they used, but to shift our attention to tasks where those exploits no longer work. After Go-Explore, work that attempts to address the exploration challenge needs to somehow get in contact with a challenge that this algorithm can't address. Not all experiments need to be run in the extra-challenging domain, but at least some should to ensure we are addressing the real problem rather than just what remains after the initial exploit pathways have been removed.\"}", "{\"title\": \"Response to Review #2 (Part 1/2)\", \"comment\": \"Dear Reviewer #2:\\n\\nThank you for the comments. \\n\\n>>> Why trajectory-conditioned policy over just goal-conditioned policy? The note in the related work section doesn't paint a clear enough picture.\\n\\nThe trajectory-conditioned policy can be thought of as an instance of the goal-conditioned policy, though our \\u201cgoal\\u201d is the trajectory with rich intermediate information about how to achieve the final goal state, thus making it easier for the agent to reach the goal. \\n\\nLet\\u2019s consider other formulations of goal-conditioned policy. 
If the goal is only the single final state (Kulkarni et al., 2016), it may be difficult to visit the goal state far away from the initial state, especially when the goal is only visited by the agent just a few times. A good example is a game like Montezuma\u2019s Revenge or Pitfall where there could be many dangers and obstructions along the way to the goal state (e.g., thousands of steps away from the initial state). If the goal includes intermediate information (e.g., a sequence of a small number of sub-goals) aiding the agent towards the final goal state (Liu et al., 2018), the problem becomes easier but it may still be nontrivial to reach the individual sub-goals for long-horizon problems. In addition, learning such a goal-conditioned policy may still require substantial amounts of samples. Our trajectory-conditioned policy is one instance of including the intermediate but more dense information in the goal, making it easier to imitate the previous trajectory with dense imitation reward, and we empirically show that it works well on various domains. We agree there could be alternative (potentially simpler) design choices for our trajectory-conditioned policy, which we will explore in future work.\\n\\n>>> This reviewer moves to reject the paper primarily for not balancing the high complexity of the solution to the lower difficulty of the problem. Pure-exploration algorithms (Go-Explore), not burdened by interleaving policy learning, achieve far superior scores.\\n\\nWe respectfully disagree that the problems we investigated in this paper, especially Montezuma\u2019s Revenge and Pitfall, are of \u201clower difficulty\u201d due to the existence of Go-Explore.\\n\\nFirst, as we mentioned in Related Work, it is worth noting that the success of Go-Explore heavily relies on the assumption that the environment can be reset to an arbitrary state and the environment is completely deterministic in the exploration phase. 
We argue that this assumption is infeasible in real-life environments where a high-fidelity simulator may not be available (such as complex robotic tasks) and that it gives an unfair advantage over the \u201creset-free\u201d methods. When there is no direct state-reset function and there is stochasticity in the environment, memorizing the past action sequence will not lead the agent to the state of interest (we added these experiments in Appendix K). Therefore in this setting, a more sophisticated \u201cpolicy learning\u201d is necessary to enable revisiting states of interest. One contribution of our paper is to remove the reliance on this assumption by learning a trajectory-conditioned policy for visiting diverse regions. Therefore, our method could work well in environments without a simulator.\"}
For this problem, no previous work (including Go-Explore, as discussed in Appendix H) could perform better than our method. Therefore, we believe the existence of Go-Explore should not be grounds for rejecting or undervaluing our work. \\n\\nPlease note that our paper does not aim solely at solving Montezuma\u2019s Revenge. We would like to study hard-exploration tasks with sparse and misleading rewards. We selected Montezuma\u2019s Revenge, which is a notoriously difficult hard-exploration environment in the literature, as one instance. To fully support our conclusion, we also conducted experiments on other interesting domains, including Apple-Gold, Deep Sea and MuJoCo maze. We expect that our method could work for real-world tasks such as robotic manipulation, but we leave this as future work.\\n\\nWe would like to ask the reviewer to reconsider the significance and difficulty of the problems we are studying. We strongly believe that the hard-exploration problem (such as Montezuma\u2019s Revenge) without a state-reset function, without human expert demonstration and with stochasticity (both in terms of initial state and consequences of taken actions) is a very difficult problem. We are not aware of any publications approaching it with comparable success. We appreciate your time and reconsideration.\", \"references\": \"Kulkarni, T. D., Narasimhan, K., Saeedi, A., & Tenenbaum, J. (2016). Hierarchical deep reinforcement learning: Integrating temporal abstraction and intrinsic motivation. In Advances in neural information processing systems (pp. 3675-3683).\\nLiu, E. Z., Keramati, R., Seshadri, S., Guu, K., Pasupat, P., Brunskill, E., & Liang, P. (2018). Learning Abstract Models for Long-Horizon Exploration.\\nOstrovski, G., Bellemare, M. G., van den Oord, A., & Munos, R. (2017, August). Count-based exploration with neural density models. In Proceedings of the 34th International Conference on Machine Learning-Volume 70 (pp. 2721-2730). JMLR.org.\\nTang, H., Houthooft, R., Foote, D., Stooke, A., Chen, O. X., Duan, Y., ... & Abbeel, P. (2017). # Exploration: A study of count-based exploration for deep reinforcement learning. In Advances in neural information processing systems (pp. 2753-2762).\\nBurda, Y., Edwards, H., Storkey, A., & Klimov, O. (2018). Exploration by random network distillation. arXiv preprint arXiv:1810.12894.\\nPohlen, T., Piot, B., Hester, T., Azar, M. G., Horgan, D., Budden, D., ... & Hessel, M. (2018). Observe and look further: Achieving consistent performance on Atari. arXiv preprint arXiv:1805.11593.\"}
We believe that it is because our method can better leverage a few samples of high-reward trajectories by learning to explore the variants of those past trajectories.\\n\\nFor question 2, we added Appendix J for the ablation study of the hyper-parameter $\\\\Delta t$. The only constraint is that $\\\\Delta t$ should be less than m (length of demonstration segment to imitate as input into the policy; we set m=10 for all our experiments by considering computational constraints). In general, we found that allowing the agent some flexibility of imitation by setting $\\\\Delta t$ close to m works well. For easy domains, $\\\\Delta t=2,4,8$ does not show much difference in policy performance. For more difficult domains, $\\\\Delta t=8$ works better because we provide imitation rewards more leniently to the agent to encourage imitation of the demonstration. In summary, $\\\\Delta t=8$ performs well for all of our primary experiment environments.\\n\\nWe hope that our response above will address your concerns and thank you again for the suggestions.\", \"references\": \"Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare, M. G., ... & Petersen, S. (2015). Human-level control through deep reinforcement learning. Nature, 518(7540), 529.\\nSchulman, J., Wolski, F., Dhariwal, P., Radford, A., & Klimov, O. (2017). Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347.\\nStrehl, A. L., & Littman, M. L. (2005, August). A theoretical analysis of model-based interval estimation. In Proceedings of the 22nd international conference on Machine learning (pp. 856-863). ACM.\\nKolter, J. Z., & Ng, A. Y. (2009, June). Near-Bayesian exploration in polynomial time. In Proceedings of the 26th Annual International Conference on Machine Learning (pp. 513-520). ACM.\"}", "{\"title\": \"Response to Review #1\", \"comment\": \"Dear Reviewer #1:\\n\\nThanks for your detailed and helpful feedback. 
We have simply moved the implementation details in Section 4 to the Appendix so that the entire paper can fit into 8 pages without much tweaking of the style format.\\n\\nAbout the environment setting, as explained at the start of Sections 4.2 and 4.3 and listed in Table 3, Appendix E, we considered stochastic environments for all the primary experiments, including Apple-Gold, Montezuma\u2019s Revenge, Pitfall, and MuJoCo. The initial state of the agent in the experiment is randomized. The mechanism of initial random no-ops is one of the standard ways to introduce stochasticity in the Atari environment (Machado et al., 2018). The mechanism of a random initial location drawn from a Gaussian distribution in the MuJoCo maze is the same as in standard MuJoCo tasks (Brockman et al., 2016).\\n\\nIn Appendix C.1, we showed DTSIL outperforms the baselines and achieves near-optimal episode reward with different forms of stochasticity in the Apple-Gold domain (i.e. random initial location of the agent, sticky actions, and random initial location of the treasure). In the Apple-Gold domain, as we introduced in Figure 1, the agent achieves reward +1 when collecting an apple, gets reward +10 when collecting the treasure, but gets reward -0.05 when taking a step in the rocky region. Therefore, with the time limit of 45 steps, the optimal trajectory is to go through the rocky region for 30 steps and reach the treasure to get the total episode reward of 8.5. Different from the baselines, DTSIL successfully finds the optimal path and converges to good behavior.\\n\\nIn order to show the difficulty of policy learning in these stochastic environments, we added an additional baseline for DTSIL in Appendix K. Specifically, we stored and repeated the action sequence from the demonstration trajectory. When the environment is deterministic, repeating the action sequence should perfectly lead the agent to imitate the demonstration and reach the final state of interest. 
However, as shown in Figures 21 and 22 in Appendix K, on environments with random initial states (which is a standard type of moderate-degree stochasticity in the literature), memorizing the action sequence is not sufficient. The success ratio in imitation is much lower than DTSIL's. Thus the agent could not revisit the novel regions as efficiently as DTSIL to discover better trajectories and converge to a better total episode reward.\\n\\nWe hope this information will address your concerns about the deterministic environments and thank you again for your comments.\", \"references\": \"Machado, M. C., Bellemare, M. G., Talvitie, E., Veness, J., Hausknecht, M., & Bowling, M. (2018). Revisiting the arcade learning environment: Evaluation protocols and open problems for general agents. Journal of Artificial Intelligence Research, 61, 523-562.\\nBrockman, G., Cheung, V., Pettersson, L., Schneider, J., Schulman, J., Tang, J., & Zaremba, W. (2016). OpenAI Gym. arXiv preprint arXiv:1606.01540.\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"Note: the style-formatting of this paper has been heavily tweaked, and so the evaluation should be calibrated for a 9-page paper.\\n\\nThis paper proposes an approach for diverse self-imitation for hard exploration problems. The idea is to leverage recently proposed self-imitation approaches for learning to imitate good trajectories generated by the policy itself. By encouraging diversity in the pool of trajectories for self-imitation, the idea is to encourage faster learning -- this basic concept is also used in approaches like prioritized experience replay, albeit at the entire trajectory level rather than the individual state/action level. 
\\n\\nThe authors view this approach as a generalization of Go-Explore, since it does not rely on having a reset mechanism. However, I think this discussion has a lot of subtle nuances pertaining to the stochasticity of the environment (which the authors acknowledge). For instance, if the environment is deterministic, then why not just do something like Go-Explore, since state-reset is just memorizing a deterministic action sequence? \\n\\nThe empirical results are very strong, achieving state-of-the-art results for any approach not reliant on a reset mechanism. All the primary experiments appear to be for deterministic environments. The results on stochastic environments (in the Appendix) seem pretty weak (but please correct me if I'm mistaken here). So one major question is whether Go-Explore is a scientifically appropriate benchmark to compare with for this setting. \\n\\nIn summary, I'm willing to be convinced that this is an interesting and scientifically novel result. I have some concerns as expressed above.\\n\\n\\n**** After Author Response ****\\nThanks for the response. I'm willing to raise my score to weak accept. \\n\\nI think the authors did a reasonable job addressing my specific questions. Some further reflection revealed to me that there is a huge opportunity to scientifically investigate how stochasticity impacts the proposed algorithm. For instance, one could conduct a systematic study (say of the Apple domain) where one varies the degree of stochasticity and measures how the performance of the proposed algorithm changes, perhaps relative to Go-Explore on the purely deterministic version of the environment. 
It seems a bit of a cop-out to say that Go-Explore is not applicable, and misses out on a huge opportunity for real scientific understanding.\"}", "{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #3\", \"review\": \"The authors identify and address the problem of sub-optimal and myopic behaviors of self-imitation learning in environments with sparse rewards. The authors propose DTSIL to learn a trajectory-conditioned policy to imitate diverse trajectories from the agent\u2019s own past experience. Unlike other self-imitation learning methods, the proposed method not only leverages sub-trajectories with high rewards, but also lower-reward trajectories to encourage agent exploration diversity. The authors claim the proposed method is more likely to find a globally optimal solution.\\n\\nOverall, this paper is well-written with comprehensive experimental results. The proposed trajectory-conditioned policy is sound, since a rewarded trajectory carries significant information about the goal in the exploration problem. Extensive experimental results demonstrated the effectiveness of the proposed DTSIL. However, I have a few concerns below that prevent me from giving a direct acceptance. \\n\\n1. The proposed DTSIL changes the original MDP with sparse reward to an MDP with denser reward, which allows the training process to explore more in the \u201cspace\u201d closer to the collected high-reward trajectories. Such \u201cexploration\u201d sounds promising. However, it would be nice to compare it with traditional reinforcement learning (e.g., with an \\epsilon-greedy policy for random exploration).\\n\\n2. In appendix D, the authors discussed what the parameter \\delta_t controls; however, it is unclear how \\delta_t should be chosen in implementation. 
The authors did not explain how \\delta_t was selected in their experiments. Choosing the right \\delta_t may be hard, but it would be nice to introduce what \u201cheuristics\u201d the authors used and suggest them to readers.\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper addresses the challenge of hard exploration tasks. The approach taken is to apply self-imitation to a diverse selection of trajectories from past experience -- practice re-doing the strangest things you've ever done. This is claimed to drive more efficient exploration in sparse-reward problems, leading to SOTA results for Montezuma's Revenge without certain common aids.\\n\\nThe approach is incompletely motivated. Why a trajectory-conditioned policy over just a goal-conditioned policy? The note in the related work section doesn't paint a clear enough picture. The trajectory buffer management strategy feels complex. Why use this strategy specifically? Could a simpler design be ruled out? In 2019 (post Go-Explore), it's not clear Montezuma's Revenge poses a significant exploration challenge -- exploration doesn't even need to be interleaved with learning. Why are these three the right domains to show off these techniques?\\n\\nThis reviewer moves to reject the paper primarily for not balancing the high complexity of the solution against the lower difficulty of the problem. Pure-exploration algorithms (Go-Explore), not burdened by interleaving policy learning, achieve far superior scores. If the authors want to escape the shadow of this kind of technique, which cheats under some framings of RL, more appropriate demonstration environments must be selected.\"}
SJecKyrKPH
ICNN: INPUT-CONDITIONED FEATURE REPRESENTATION LEARNING FOR TRANSFORMATION-INVARIANT NEURAL NETWORK
[ "Suraj Tripathi", "Chirag Singh", "Abhay Kumar" ]
We propose a novel framework, ICNN, which combines an input-conditioned filter generation module and a decoder-based network to incorporate contextual information present in images into Convolutional Neural Networks (CNNs). In contrast to traditional CNNs, we do not employ the same set of learned convolution filters for all input image instances, and our proposed decoder network serves the purpose of reducing the transformation present in the input image by learning to construct a representative image of the input image class. Our proposed jointly supervised input-aware framework, when combined with techniques inspired by multi-instance learning and max-pooling, results in a transformation-invariant neural network. We investigated the performance of our proposed framework on three MNIST variations, which cover both rotation and scaling variance, and achieved 0.98% error on MNIST-rot-12k, 1.12% error on Half-rotated MNIST and 0.68% error on Scaling MNIST, which is significantly better than the state-of-the-art results. Our proposed model also showcased consistent improvement on the CIFAR dataset. We make use of visualization to further prove the effectiveness of our input-aware convolution filters. Our proposed convolution filter generation framework can also serve as a plugin for any CNN-based architecture and enhance its modeling capacity.
[ "Transformation-invariance", "Reconstruction", "Run-time Convolution Filter generation" ]
Reject
https://openreview.net/pdf?id=SJecKyrKPH
https://openreview.net/forum?id=SJecKyrKPH
ICLR.cc/2020/Conference
2020
{ "note_id": [ "SOOwYIQXMF", "rkgVT2jyir", "Hygt0c_Jir", "B1g0BlpjKB", "rylH3T2dKB" ], "note_type": [ "decision", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798734078, 1573006523748, 1572993744616, 1571700805795, 1571503533276 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1850/Authors" ], [ "ICLR.cc/2020/Conference/Paper1850/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper1850/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper1850/AnonReviewer3" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"This paper proposes a CNN that is invariant to input transformations, by making two modifications on top of the TI-pooling architecture: input-dependent convolutional filters, and a decoder network to ensure a fully transformation-invariant representation. Reviewer #1 is concerned about the limited novelty and unconvincing experimental results. Reviewer #2 praises the paper for being well written, but is not convinced of the significance of the contributions. The authors responded to Reviewer #2, but the rating did not change. Reviewer #3 is especially concerned that the paper is not well positioned with respect to related prior work. Given these concerns and the overall negative ratings (two weak rejects and one reject), the AC recommends rejection.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Justification\", \"comment\": \"Why is a decoder needed?\\n-> The dimension of the extracted max-pool features differs from that of the representative image. Therefore, a reconstruction decoder is employed. Another solution would have been to map the representative image to a lower-dimensional space and use it directly to calculate the L-2 distance with the max-pool features.\\n-> Another reason for having a decoder is that we can visualize the reconstructed image and check the decoder's validity. 
\\n\\nRepresentative image?\\n->There is no limitation that the representative image should only be a single image. Based on the complexity of the class, multiple representative images can be easily utilized in the network during training time. \\n->The representative image helps the network to extract abstract features from the input image which are necessary to reconstruct a transformation-free corresponding image. This idea can be easily extended to text-based models where the representative text can be decided based on the problem statement. (For instance, if we consider the problem of sentiment analysis which takes a text utterance as input and predicts its polarity to be angry, happy, etc., then, in this case, a representative text can be a vector which is close to the input text's corresponding sentiment in the embedding space.)\\n\\n If so, why not just rotate (rescale) the filters? \\n->Given an input image, we want our filter generator module to generate the best filters based on the input image rather than explicitly changing the filters or features, because during test time we will not know which transformations should be applied to the extracted features or filters. Therefore, a framework that automatically extracts the best input-conditioned features is needed. \\n->Since the image can contain rotation, translation, or a combination of these, it is not feasible to explicitly handle these transformations by employing corresponding transformations in the feature or filter space. 
Therefore, a framework that automatically extracts the best input-conditioned features is needed.\"}", "{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The proposed method in this paper tries to make the CNN robust to input image transformations by learning to generate convolutional filters.\\nThe proposed architecture has two main parts. \\n1) Filter generation: Given an input image, a set of predefined transformations is applied to the image. After extracting features from the transformed input images with the Siamese network, a set of convolutional filters is estimated. The idea is that these input-dependent filters can compensate for all of the transformations in the image.\\n2) Classification and reconstruction part: The generated convolutional filters are applied to the image, and after extracting deeper features a representation vector is computed. This representation vector will be used for classification and reconstruction of the input image to make sure that it has all of the necessary information.\", \"positive_points\": \"1) The writing is clear.\", \"negative_points\": \"1) The proposed method is not novel. The proposed method will be robust to the transformations that are used during training but it cannot generalize to other unseen transformations.\\n2) The experimental results are weak; the authors should evaluate their method on more difficult datasets like the ImageNet dataset. \\n3) The authors should compare their proposed method with \\\"Spatial transformer networks, NIPS 2015\\\" in detail. 
\\n\\nIn conclusion, my recommendation for this paper is \\\"weak reject\\\".\"}", "{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposed an Input-conditioned Convolutional Neural Network (ICNN) to automatically impose transformation-invariance. The contribution of the manuscript is two-fold:\\n(a) After transforming the input using a pre-determined set of transformations, a set of input-conditioned filter generators is used (and trained) to cater to different input contents.\\n(b) A decoder is used after the max-pooling layer (of the Siamese network), and an L-2 reconstruction loss (with respect to a chosen class representative) is added to the cross-entropy loss for classification.\\n\\nOverall the paper is well written, and it is fairly easy to read. However, I am not totally convinced that the two contributions of the paper are significant for transformation-invariant representations, and my reasoning is as follows.\\n\\n1. Why is a decoder needed in the architecture? If the objective is to achieve transformation-invariance, one can easily compare the L-2 distance between the max-pooled feature maps of a given input and those of the class representative. Why bother using a decoding architecture?\\n2. Choosing a \\\"class representative\\\" in the CNN seems very restrictive. What if the underlying task is not image classification? Besides, I am very curious about the experiment on the CIFAR-10 dataset: do the constructed images of all test samples look like the single chosen class representative in the training data? (i.e., compared to figure 3)\\n3. The input-conditioned filter generation seems a little confusing. Is this what you want to achieve? 
Say if the pre-determined transformations are rotations (scalings), then the input-conditioned filters should be generated as rotated (scaled) versions of the same filters? If so, why not just rotate (rescale) the filters? There are lots of group-equivariant CNNs that have been proposed before for such an effect. Besides, I am confused as to why fractionally-strided convolutions are used for filter generation.\", \"other_comments\": \"1. The reference for fractionally-strided convolutions should be fixed.\\n2. Why is there no bias term in the convolutional modules (page 4, second paragraph)?\\n3. What does ICNN stand for? The first appearance of the abbreviation in the abstract needs more explanation.\"}
This is evident both in the lack of referenced works (see below for a list) and in the lack of sufficient baselines against which they compare. For instance, if the authors had considered \u201cLearning Steerable Filters for Rotation Equivariant CNNs\u201d by Weiler et al. (2018) they would have known that their MNIST-rot-12k results are not state of the art as they state. In Weiler et al., the authors report 0.714 test set error on MNIST-rot-12k compared to ICNN\u2019s 0.98.\\n\\nThis all said, I think the paper is well-written and very clear. The structure is straightforward and the experiments seem repeatable from the descriptions made. The stated aims of the paper are also clear: to learn input transformation invariant CNNs using input-conditioned filters.\\nUnfortunately a lot of supporting material and prior work has been missed. I list many of them here. \\n\\nWorks on input-conditioned filters and invariance. These are the most important:\\n\\n-Dynamic Steerable Frame Networks, Jacobsen et al., 2017\\n-Dynamic Steerable Blocks in Deep Residual Networks, Jacobsen et al., 2017\", \"works_on_input_conditioned_filters\": \"-HyperNetworks, Ha et al., 2016\\n-Dynamic Filter Networks, de Brabandere et al., 2016\", \"works_on_invariance\": \"-Invariance and neural nets, Barnard and Casasent, 1991\\n-Group Equivariant Convolutional Networks, Cohen and Welling (2015)\\n-Harmonic Networks: Deep Translation and Rotation Equivariance, Worrall et al. (2017)\\n-Steerable CNNs, Cohen and Welling (2017)\\n-Spherical CNNs, Cohen et al. (2018)\\n-CubeNet: Equivariance to 3D Rotation and Translation, Worrall and Brostow (2018)\\n-Learning steerable filters for rotation equivariant CNNs, Weiler et al. (2018)\\n-Gauge Equivariant Convolutional Networks and the Icosahedral CNN, Cohen et al. 
(2019)\\n\\n*Questions/notes for the authors*\\n\\n- Please address the missing references\\n- Are the input-conditioned filters conditional on position in the activations, or are they shared across all spatial locations of the image? This is not clear from the text.\\n- The image reconstruction reminds me of Transforming Auto-encoders (Hinton et al., 2016) and Interpretable Transformations with Encoder-Decoder Networks (Worrall et al., 2017). How is your setup different?\"}" ] }
SkeKtyHYPS
Data Augmentation in Training CNNs: Injecting Noise to Images
[ "Murtaza Eren Akbiyik" ]
Noise injection is a fundamental tool for data augmentation, and yet there is no widely accepted procedure to incorporate it with learning frameworks. This study analyzes the effects of adding or applying different noise models of varying magnitudes to Convolutional Neural Network (CNN) architectures. Noise models that are distributed with different density functions are given common magnitude levels via the Structural Similarity (SSIM) metric in order to create an appropriate ground for comparison. The basic results conform with most of the common notions in machine learning, and also introduce some novel heuristics and recommendations on noise injection. The new approaches will provide a better understanding of optimal learning procedures for image classification.
[ "deep learning", "data augmentation", "convolutional neural networks", "noise", "image processing", "SSIM" ]
Reject
https://openreview.net/pdf?id=SkeKtyHYPS
https://openreview.net/forum?id=SkeKtyHYPS
ICLR.cc/2020/Conference
2020
{ "note_id": [ "NcnZ6gab-Y", "rJxOAIYJcr", "BylWZ0s6YH", "ryxfxYLKYr", "HJe6hPSv_r", "rylesyBjPr" ], "note_type": [ "decision", "official_review", "official_review", "official_review", "official_comment", "official_comment" ], "note_created": [ 1576798734051, 1571948240434, 1571827192760, 1571543274464, 1570359221470, 1569570712130 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1849/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper1849/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper1849/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper1849/Authors" ], [ "ICLR.cc/2020/Conference/Paper1849/Authors" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"This paper studies the effect of various data augmentation methods on image classification tasks. The authors propose the structural similarity as a measure of the magnitude of the various types of data augmentation noise they consider and argue that it is outperforms PSNR as a measure of the intensity of the noise. The authors performed an empirical analysis showing that speckle noise leads to improved CNN models on two subsets of ImageNet. While there is merit in thoroughly analysing data augmentation schemes for training CNNs, the reviewers argued that the main claims of the work were not substantiated and the raised issues were not addressed in the rebuttal. I will hence recommend rejection of this paper.\", \"title\": \"Paper Decision\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper aims at analyzing the effect of injecting noise to images as data augmentation in training CNN for the image classification task. 
Based on the SSIM metric (which is shown to be a better metric than PSNR), different noise levels on a set of different kinds of noise are explored. Experimental results on two sub-datasets of ImageNet suggest that Speckle noise would lead to better CNN models.\n\nEven though the simulations appear seemingly convincing, and the conclusion is somewhat interesting to me (speckle noise is recommended, which contradicts the general usage of Gaussian noise), the results are too specific to both the chosen model (ResNet18v2) and the chosen dataset. Besides, my bigger concern is that the contribution of this work is highly limited, since there are a bunch of data augmentation techniques: cropping, flipping, color space transformation, rotation, noise injection, etc. Given this broad selection of data augmentation, as far as I know, noise injection is not the most effective nor the most popular one. In fact, random cropping is the most widely used one, and it helped establish the past few benchmark CNN models in the ImageNet classification task, e.g., ResNet, DenseNet, etc. As such, it would be more convincing if it could be shown that proper noise injection can boost the recognition performance on the ImageNet task.\", \"minors\": \"Abstract line 8, and also introduces -> and also introduce\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper studies the different types of noise that could be added to the training image dataset while training a CNN model for classification. They study 5 different types of noise functions: Gaussian, Speckle, Salt and Pepper, Poisson, Occlusion.\", \"pros\": \"1. Rigorously studying how to augment training data for CNNs is important.\", \"cons\": \"1. The primary question is novelty. 
- what is the research contribution of this dataset? They are running experiments with 5 different known noise functions, on two image datasets, on a single deep learning model. There are no fundamental research questions or hypotheses. This is a mere running of a few experiments - known methods and known approaches. \n\n2. Are the results generalizable? The answer is no! The results are shown on two subsamples of ImageNet datasets for only the ResNet-18 model. Maybe for this combination speckle noise (and not Gaussian, as pointed out in the comments by the authors) is better. How can they assure that for a different dataset, model, task combination the same speckle noise would perform better? \n\n3. Improvement suggestion: What I would ideally look for in this topic is a method to automatically study the properties of the training data images (study the distribution) and, conditional on this distribution, recommend the best noise type and noise intensity. Thus, the whole story of noise injection could be made dynamic for a dataset, model, task combination.\n\n4. Writing of the paper could be improved: 1. The need for noise-based augmentation is well known (Section 1). 2. The different kinds of noise functions are mostly textbook knowledge (Section 2). 3. The different image quality metrics written here - MSE, PSNR, SSIM - are also textbook knowledge (Section 3). Overall, the first 4 pages of the paper are redundant and could be compressed into 1 page. I would like to read more on the experiments, analysis, and maybe automation of noise selection techniques in different kinds of tasks - segmentation, text classification, seq2seq, etc.\n\n5. Very naive noise (or augmentation) functions were chosen: In general, the approach of augmenting training data has evolved so much in the literature that adding noise is hardly used in practice. \n1. \\"The Effectiveness of Data Augmentation in Image Classification using Deep Learning\\" - Style Transformation\n2. 
\\\"Improving Deep Learning using Generic Data Augmentation\\\" - Geometric and Affine Transformation\\nThus, the study on data augmentation should be performed across all these different transformation functions on training data and using only noise function is naive and is incomplete.\"}", "{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper studies the effect of various data augmentation methods on image classification tasks. The Authors propose the Structural Similarity (SSIM) as a measure of the magnitude of the various types of data augmentation noise they consider. The Authors argue that SSIM is superior to PSNR as a measure of the intensity of the noise, across various noise types.\\n\\nThe idea of using SSIM as a unified measure for noise in the context data augmentation in images is novel AFAIK and is neat IMO, because as the authors point, SSIM provides a more perceptually-driven distance measure between images than RMS or PSNR. One of the results of the paper is that a SSIM value of 0.8 is a good rule of thumb for choosing the magnitude of the noise irrespectively to the type of noise. This is a useful and interesting result.\\n\\nNevertheless, at this stage I am inclined to reject the paper, because I feel that the main claim is not sufficiently substantiated. The argument that SSIM provides a more universal (less noise-type-dependent) measure of strength than, say, PSNR in the context of data augmentation is not substantiated in the experiments. While intuitively the claim makes sense to me, it is hard to draw this conclusion when SSIM is not compared to any other metric (RMS, PSNR) as measure of strength for data augmentation. 
\n\nWhy is a larger kurtosis detrimental for measuring the strength of data augmentation? Is there any evidence that links low kurtosis to a better measure of data augmentation strength? This is another question that could be addressed if SSIM were compared to other data augmentation strength metrics.\n\nIMHO the paper could have been made much stronger if it had the analog of Figure 5 for other measures of the noise (e.g., PSNR, RMS). If the results showed that SSIM is superior to them, I would learn a useful and insightful lesson from the paper. In its current form, I feel that the Authors made the first step in a very interesting direction but did not go far enough to substantiate their claims.\"}", "{\"comment\": \"Dear reviewers and readers,\n\nAs the edit option for our paper still appears to be disabled (any recommendation regarding this issue is highly appreciated), we are publishing an important correction and a small additional study via the comment option.\n\nThe correction is due to a typo in the fourth paragraph of the 5th section (Discussion), which starts with the words \\"For the rest of the noise types...\\": Our final recommendation for the type of noise to be injected among the Gaussian, speckle and Poisson noise models is NOT Gaussian, but speckle. The reasoning can be seen visually, especially in Figure 8(b), where speckle noise provides considerably better robustness than Gaussian noise. Furthermore, in Figures 4 and 5 neural models trained with speckle noise perform better than their Gaussian counterparts on six occasions, while the contrary is true only on three occasions (one case is nearly equal). This can be explained by speckle noise being feature-selective, targeting the high-intensity regions more than the low-intensity ones and thus allowing the neural network to better generalize for minor features. 
Although these observations had been made before the completion of the study, we are sorry for the crucial typo, which will also be corrected in the original paper as soon as the editing is enabled. If the reviewers approve, it is also possible to add this short explanation.\n\nA small additional study was made upon the recommendation of a colleague, regarding a conclusion reached by Koziarski & Cyganek (2017) that \\"noise as a form of regularization on top of other regularization techniques, namely weight decay and dropout, does not improve the classification accuracy\\". In the mentioned study, the properties of the noise and the other regularization techniques applied to reach this conclusion are not disclosed; therefore we made a series of experiments to determine the robustness and accuracy of dropout-regularized CNN models with and without the noise injection procedure as advised in our study. The initial reasoning behind not using such techniques was to comply with the ResNetV2 architecture of He et al. (2016).\n\nWe chose an aggressive dropout level of 0.5, and applied one of three noise models, namely speckle, s&p and occlusion noise, at 0.8 MSSIM (see Table 2 from the study). For both datasets, we ran the experiments for four different cases (w/o and w/ noise, w/o and w/ dropout), and trained each model five times. The resulting loss metrics for each epoch are aggregated by the minimum value among the five trials. The test set is composed of noise-free images of the original datasets and their slightly noisy (0.9 MSSIM) counterparts for each noise type, in order to test for model robustness and accuracy at the same time. The plots for each dataset can be seen at the following anonymous link: https://imgur.com/a/REothM6\n\nThe results conform with our discussions in the study, therefore there is no need for any change in the main structure, and this additional work can be seen as a confirmation of our findings. 
Noise injection, when appropriately constructed, can also be a good regularization technique, especially with relatively difficult datasets (observe that the effect of noise injection is much greater on the Imagewoof dataset, which is substantially more challenging than Imagenette). The code for this additional work can be reached via: https://gofile.io/?c=2nXLAV\n\nAs we understand the policy of ICLR regarding follow-up studies, these results will be added to the paper depending on the decision of the reviewers. \n\nReferences\n\nKaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. In Bastian Leibe, Jiri Matas, Nicu Sebe, and Max Welling (eds.), Computer Vision \u2013 ECCV 2016, pp. 630\u2013645, Cham, 2016. Springer International Publishing. ISBN 978-3-319-46493-0.\n\nMichal Koziarski and Boguslaw Cyganek. Image recognition with deep neural networks in presence of noise: dealing with and taking advantage of distortions. Integrated Computer-Aided Engineering, 24:1\u201313, 08 2017. doi: 10.3233/ICA-170551.\", \"title\": \"Correction of a crucial typo and a small additional study\"}", "{\"comment\": \"Due to doubts about anonymity with regard to Google Drive shares, the link in the submission has been deactivated. The code can be accessed via this new anonymous link: https://gofile.io/?c=GeXNVQ\n\nSorry for the inconvenience.\", \"title\": \"Change of code address\"}" ] }
S1xKYJSYwS
VAENAS: Sampling Matters in Neural Architecture Search
[ "Shizheng Qin", "Yichen Zhu", "Pengfei Hou", "Xiangyu Zhang", "Wenqiang Zhang", "Jian Sun" ]
Neural Architecture Search (NAS) aims at automatically finding neural network architectures within an enormous designed search space. The search space usually contains billions of network architectures, which incurs extremely expensive computing costs in searching for the best-performing architecture. One-shot and gradient-based NAS approaches have recently been shown to achieve superior results on various computer vision tasks such as image recognition. With the weight sharing mechanism, these methods lead to efficient model search. Despite their success, however, current sampling methods are either fixed or hand-crafted and thus ineffective. In this paper, we propose a learnable sampling module based on a variational auto-encoder (VAE) for neural architecture search (NAS), named VAENAS, which can be easily embedded into existing weight sharing NAS frameworks, e.g., the one-shot and gradient-based approaches, and significantly improve the performance of searching results. VAENAS generates a series of competitive results on CIFAR-10 and ImageNet in a NasNet-like search space. Moreover, combined with the one-shot approach, our method achieves a new state-of-the-art result for ImageNet classification models under 400M FLOPs with 77.4% in a ShuffleNet-like search space. Finally, we conduct a thorough analysis of VAENAS on the NAS-Bench-101 dataset, which demonstrates the effectiveness of our proposed methods.
[ "vaenas", "search space", "nas", "weight", "methods", "matters", "sampling matters", "neural network architectures" ]
Reject
https://openreview.net/pdf?id=S1xKYJSYwS
https://openreview.net/forum?id=S1xKYJSYwS
ICLR.cc/2020/Conference
2020
{ "note_id": [ "hn1-7_MRN", "BklVk9d2oB", "ByxyBt_hsH", "Hkl-_u_3ir", "BJliF3yE9B", "Bkeu4_xZqB", "SylMS-x2tB" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798734023, 1573845468306, 1573845303081, 1573845097444, 1572236419312, 1572042799758, 1571713338468 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1848/Authors" ], [ "ICLR.cc/2020/Conference/Paper1848/Authors" ], [ "ICLR.cc/2020/Conference/Paper1848/Authors" ], [ "ICLR.cc/2020/Conference/Paper1848/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper1848/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper1848/AnonReviewer1" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"This paper proposes to represent the distribution w.r.t. which neural architecture search (NAS) samples architectures through a variational autoencoder, rather than through a fully factorized distribution (as previous work did).\\n\\nIn the discussion, a few things improved (causing one reviewer to increase his/her score from 1 to 3), but it became clear that the empirical evaluation has issues, with a different search space being used for the method than for the baselines. There was unanimous agreement for rejection. I agree with this judgement and thus recommend rejection.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Author Response to Official Blind Review #1\", \"comment\": \"Thank you very much for your constructive comments.\", \"q1\": \"For CIFAR-10 experiments, the authors mentioned in the appendix... Note the searched densely connected cells in Figures 4 & 5 in Appendix A.4 are clearly not part of the search space for many of the baselines.'\", \"a1\": \"We admit that the search space in our experiments is different from DARTS, as we mentioned in Appendix A.4. 
However, it is necessary to note the difference between the search space that we used in our paper and the DARTS search space. The difference is that in our search space, the i-th node of one cell has i-1 predecessors, while in the DARTS search space, the i-th node of one cell has 2 predecessors. However, in our search space, 'None' is an available operation. Choosing 'None' means deleting the corresponding edge in the cell. Thus, the DARTS search space is a ***subset*** of ours. In other words, our search space is much larger than the search space used in the DARTS paper. The reason we adopt this search space is that it is easier to verify the effectiveness of a sampling module on a larger search space.\n\nHowever, we understand your concern about the unfair comparison. Due to the time limit, we only finished some experiments with the DARTS search space on CIFAR-10 and we report the following results:\n97.46% with 2.92M parameters, searched by one-shot based VAENAS.\nThe preliminary experiments show that our VAE sampling is still effective in the DARTS search space. We are doing more experiments and we will update the results in a future revision.\", \"q2\": \"Unfair comparison of using ShuffleNet search space for ImageNet experiments.\", \"a2\": \"For the ShuffleNet-like search space, we claim that our contribution is to find an architecture that achieves SOTA results on ImageNet under 400M FLOPs. Regarding the fairness, we include the baseline (results searched by one-shot) in the new revision. The baseline is 77.1% with 360M FLOPs, compared to 77.4% with 365M FLOPs with our method.\", \"q3\": \"Do the gains come from the LSTMs or the VAEs?\", \"a3\": \"Regarding the concern on the performance gain from LSTM, we did experiments on a VAE using MLPs (Multi-Layer Perceptrons) as encoder and decoder. In our search space, we got 97.60% with 4.6M parameters on the CIFAR-10 dataset. 
The preliminary experiments indicate that the performance gain from LSTM is limited.\nSecondly, we do not claim that our generative sampler based on VAEs is superior to other samplers. Our main contribution is that we design a sampling module based on a VAE that is complementary to the existing weight sharing frameworks (one-shot and gradient-based). We use a VAE because it is lightweight and easy to train.\"}", "{\"title\": \"Author Response to Official Blind Review #2\", \"comment\": \"Thank you very much for your constructive comments.\", \"q1\": \"Would the 0.1% absolute improvement over the second best in Table 2 be considered significant enough to justify the effectiveness of the proposed approach?\", \"a1\": \"The 0.1% improvement is not significant. However, it is a comparison between VAENAS-OS and AmoebaNet-C, which was searched with a computation cost of 3150 GPU days. Since the search cost of recent NAS methods, including ours, is less than 1 GPU day, it is unfair to make a comparison between our method and AmoebaNet-C.\", \"q2\": \"Would the 1.1% absolute improvement over the second best in Table 3 be considered significant enough to justify the effectiveness of the proposed approach?\", \"a2\": \"The 1.1% accuracy improvement in Table 3 is significant. Improving the accuracy of ImageNet classification is hard, especially when it comes to efficient models (< 600M FLOPs).\n\nNotably, we present a controlled experiment to demonstrate how effective the VAE sampling module is, and the results are shown in Table 5. The results on both CIFAR-10 and ImageNet with gradient-based/one-shot are significant. 
We think this can justify that our sampling module is effective.\", \"q3\": \"Training budget and fairness regarding random sampling?\n\nRegarding the fairness of the comparison experiments, we decouple the training time (computation overhead) into the VAE training stage and the sub-network evaluation stage.\n\nWe argue that the computation overhead of the VAE sampling module itself is lightweight. Our VAE module is composed of one LSTM as an encoder, one LSTM as a decoder, and several FC layers, so the parameter size of the VAE module is less than 50 KB. To be specific, we train the super-network for 100 epochs, and we train the VAE for 50 epochs every 10 super-network epochs. According to our test, training the VAE for 50 epochs takes 0.3% of the time needed to train a one-shot super-network for one epoch. It is also equivalent to evaluating 3-5 architectures on a super-network.\n\nAdmittedly, we need to evaluate a number of sub-networks to train our VAE module. In our experiment setting, we sample 1,500 sub-networks from the search space and evaluate them on the super-network accordingly. Since we train the VAE every 10 epochs, a total of 15,000 sub-networks are evaluated during searching.\n\nHowever, using the same number of evaluated architectures, random sampling has no way to be more effective than VAE sampling. Our motivation for this paper, and neural architecture search in general, is that random sampling is extremely inefficient in a large search space. \n\nOur experiment in Section 5.3 also shows that to obtain architectures of the same performance, random search has to sample much more than the VAE. For example, as shown in Table 4, to get one 93.86% accuracy architecture on the NAS-Bench-101 benchmark dataset, random search sampled 500 architectures while the VAE module sampled fewer than 50.\", \"q4\": \"Are the numbers in Table 2 and Table 3 swapped?\", \"a4\": \"We checked Table 2 and Table 3 and the numbers in Table 2 and Table 3 are correct. 
We have also uploaded a new version.\"}", "{\"title\": \"Author Response to Official Blind Review #3\", \"comment\": \"Thank you very much for your constructive comments.\n\n' I wonder whether the VAE based approach will consistently converge to a good local minimum\u2026\u2026at least the variations of testing errors.\u2019\", \"q1\": \"Will the VAE based approach consistently converge to a good local minimum?\", \"a1\": \"We performed some experiments to verify whether the VAE sampling based approach can consistently find good-performing architectures. Due to time constraints, we ran four independent searches with VAENAS-OS on CIFAR-10. The average accuracy is 97.56% and the standard deviation is 0.122%. This result is robust in our view.\", \"q2\": \"Why can VAENAS-G increase the diversity of architectures in gradient-based NAS methods?\", \"a2\": \"We observed that gradient-based methods have a premature convergence problem, which was also observed by concurrent papers [1, 2]. We conjecture that this premature convergence happens because gradient-based methods tend to converge to the architecture that has better performance at the early search phase, and then sample this architecture repeatedly during the search phase. As a result, only a small number of architectures are actually trained by gradient-based NAS. Therefore, we claim that our method can increase the diversity of architectures trained by gradient-based methods, such that these NAS methods can make better decisions and search for architectures with higher performance. By \\"increase the diversity\\", we mean that the VAE sampling module helps the gradient-based method train with more diversified architectures.\", \"q3\": \"Why does VAENAS-G help to search for larger models in gradient-based NAS methods?\", \"a3\": \"We observed that the gradient-based method can get stuck on the identity operation [3], resulting in finding small architectures (with many identity operations). 
With the help of the sampling module, the gradient-based method can potentially ignore the identity operation because other operations like 3x3 convolution generally have better performance than identity. However, the original gradient-based method fails to find large networks because it does not have the opportunity to train them in the first place.\nWe presented the experimental results in Table 5 to show that, with the help of the VAE sampling module, the gradient-based methods are able to find larger architectures (more parameters or larger FLOPs).\", \"q4\": \"Detailed comments on notations and tables.\", \"a4\": \"Thank you for pointing out our mistakes. We fixed these errors in the new revision. We will provide the error variation in a future revision.\n\n\n\n[1] BETANAS: Balanced Training and Selective Drop for Neural Architecture Search. https://openreview.net/forum?id=HyeEIyBtvr\n[2] Stabilizing DARTS with Amended Gradient Estimation on Architecture Parameters. https://openreview.net/forum?id=BJlgt2EYwr \n[3] Chen Xin, et al. Progressive Differentiable Architecture Search: Bridging the Depth Gap between Search and Evaluation. In ICCV 2019\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": [\"This paper proposes to use a variational auto-encoder (VAE) to sample the network architectures. The VAE is applied to both one-shot and gradient-based scenarios, and shows consistent improvement on different NAS tasks. The proposed method is reasonable, but I have two major concerns:\", \"I wonder whether the VAE based approach will consistently converge to a good local minimum. 
It will be very helpful if the authors could provide a robustness analysis or at least the variations of testing errors.\", \"I understand the motivation of VAE + one-shot, but I am not very convinced by VAE + gradient-based. In the last paragraph of Section 4, the paper claims (1) VAENAS-G can increase the diversity of architectures, which can also be achieved by sampling the data set. Also, it claims (2) VAENAS-G helps to search for large models, for which I do not see experimental support.\"], \"detailed_comments\": [\"Algorithm 1: it is confusing to use S_K and S_k without explanation\", \"Table 1: most of the other methods provide test error with standard variations. To be fair, I'd like to see VAENAS's test error variation.\", \"Tables 2 and 3: Why is VAENASNet (table 3) different from VAENAS-G and VAENAS-OS?\"]}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposes to use a VAE to learn a sampling strategy in neural architecture search. The main idea is to use the currently high-performing networks to train a VAE from which the sampled architectures for the next iteration will likely supply both high-performing networks and better diversity coverage. The experiments are extensive, including results under various settings.\n\nThe idea is straightforward and reasonable. I do not work on neural architecture search myself, so I'm not sure how significant the experimental results are. Would the 0.1% (1.1%) absolute improvement over the second best in Table 2 (Table 3) be considered significant enough to justify the effectiveness of the proposed approach? \n\nI'm a little concerned about the fairness of the comparison experiments. A fairly heavy computation overhead is required to train the VAE models in the proposed method. 
Instead of taking this overhead, wouldn't it be easier to randomly sample more architectures? Intuitively, if we spend the cost of training a VAE model instead on sampling more architectures, the end effects could be the same. \\n\\nAre the numbers in Table 2 and Table 3 swapped?\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"============ comments after rebuttal\\nI would like to thank the authors for addressing some of my concerns. I believe the new results under Q2 and Q3 are useful additions to strengthen the paper.\\n\\nAs for the authors' comments for Q1, I'd like to point out that a \\\"larger\\\" search space is not necessarily more difficult (a more meaningful metric would be the average accuracy of random architectures). It is still possible that the current search space is putting the proposed method at advantage, especially if having dense connections is a useful prior.\\n\\nGiven the above, I would like to increase my score from 1 to 3 (weak reject).\\n\\n============ previous comments\\nNeural architecture search can be formulated as learning a distribution of promising architectures (the sampling policy). Such a distribution is usually represented in a fully factorized fashion (e.g., as a set of multinational distributions as in DARTS). This paper proposes to model the architecture distribution using a VAE instead, where the encoder and decoder are implemented using LSTMs. 
The authors argue that the increased flexibility of the sampling policy leads to improved performance on CIFAR-10, NASBench and ImageNet.\\n\\nThe idea of representing the architecture distribution using VAEs is very natural, which in principle could offer better coverage over interesting regions in the search space as compared to traditional factorized distribution representation (which has a single mode only).\\n\\nWhile the method itself is interesting, I do not think it has been properly backed up by controlled experiments. This is largely due to the fact that the authors are comparing their method against baselines in fundamentally different search spaces. For instance:\\n* For CIFAR-10 experiments, the authors mentioned in the appendix: \\\"Different from DARTS, in our search space, one node could have more than two predecessors in one cell\\\". This makes the search space very different from the existing ones as used by NASNet/AmoebaNet/DARTS/SNAS, and it hence remains unclear to what degree the resulting architecture has benefited from the increased in-degrees per node. Note the searched densely connected cells in Figure 4 & 5 in Appendix A.4 are clearly not part of the search space for many of the baselines.\\n* For ImageNet experiments, the authors are using a ShuffleNet-like search space which has fundamentally different building blocks than other architecture search baselines (commonly built on top of inverted bottleneck layers). It is unclear to what degree the 77.4 top-1 accuracy @ 365 MFlops results have benefited from this different search space.\\n\\nWithout fair comparisons in a controlled setup, it is impossible for readers to draw any solid conclusion about the true empirical advantages of the method. 
I'm therefore unable to recommend acceptance for the paper at the moment, but am willing to raise my score if the authors can properly address those issues in the rebuttal.\", \"additional_question\": \"How can we isolate it to tell whether the gains come from the LSTMs or the VAEs? Is there any intuition why incorporating a generative sampler based on VAEs is potentially superior to methods like ENAS (which involve LSTM decoders only)?\"}" ] }
S1g_t1StDB
Self-Educated Language Agent with Hindsight Experience Replay for Instruction Following
[ "Geoffrey Cideron", "Mathieu Seurin", "Florian Strub", "Olivier Pietquin" ]
Language creates a compact representation of the world and allows the description of unlimited situations and objectives through compositionality. These properties make it a natural fit to guide the training of interactive agents as it could ease recurrent challenges in Reinforcement Learning such as sample complexity, generalization, or multi-tasking. Yet, it remains an open problem to relate language and RL in even simple instruction following scenarios. Current methods rely on expert demonstrations, auxiliary losses, or inductive biases in neural architectures. In this paper, we propose an orthogonal approach called Textual Hindsight Experience Replay (THER) that extends the Hindsight Experience Replay approach to the language setting. Whenever the agent does not fulfill its instruction, THER learns to output a new directive that matches the agent's trajectory, and it relabels the episode with a positive reward. To do so, THER learns to map a state into an instruction by using past successful trajectories, which removes the need to have external expert interventions to relabel episodes as in vanilla HER. We observe that this simple idea also initiates a learning synergy between language acquisition and policy learning on instruction following tasks in the BabyAI environment.
[ "Language", "reinforcement learning", "instruction following", "Hindsight Experience Replay" ]
Reject
https://openreview.net/pdf?id=S1g_t1StDB
https://openreview.net/forum?id=S1g_t1StDB
ICLR.cc/2020/Conference
2020
{ "note_id": [ "HeE6xIIgJA", "rJgNUxNioS", "S1g3tu7sor", "rklMdumoor", "SkxjbO7ijr", "BkgdyumisB", "HkgvFvmior", "rkl5CIQsiH", "r1ld3LQsor", "ByxCLQLRFS", "BJlpIE-6tr", "Hyx1kN-K_r" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798733994, 1573761099907, 1573759108022, 1573759081965, 1573758978847, 1573758943899, 1573758847496, 1573758673785, 1573758639996, 1571869526458, 1571783765280, 1570472919491 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1847/Authors" ], [ "ICLR.cc/2020/Conference/Paper1847/Authors" ], [ "ICLR.cc/2020/Conference/Paper1847/Authors" ], [ "ICLR.cc/2020/Conference/Paper1847/Authors" ], [ "ICLR.cc/2020/Conference/Paper1847/Authors" ], [ "ICLR.cc/2020/Conference/Paper1847/Authors" ], [ "ICLR.cc/2020/Conference/Paper1847/Authors" ], [ "ICLR.cc/2020/Conference/Paper1847/Authors" ], [ "ICLR.cc/2020/Conference/Paper1847/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper1847/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper1847/AnonReviewer2" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"Two reviewers are borderline and one recommends rejection. The main criticism is the simplicity of language, scalability to a more complex problem, and questions about experiments. Due to the lack of stronger support, the paper cannot be accepted at this point. 
The authors are encouraged to address the reviewers' comments and resubmit to a future conference.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Paper Updates\", \"comment\": [\"We want to thank the reviewers again for their comments and questions.\", \"Following the reviewer feedback, we made the following updates to the paper:\", \"Minor changes in the introduction following Reviewer 2 comments\", \"Make explicit a validation procedure in Section 3 and Appendix to avoid an ill-trained goal instructor\", \"Make explicit that the accuracy metric can be replaced with a language score, e.g., BLEU, RED, etc. in Section 4.2 and Appendix\", \"Add a new baseline (reward shaping) in Figure 4.\", \"Describe the baseline in section 4.3 - Baseline / Results.\", \"Update Section 4.3 - Limitations:\", \"Make explicit HER vs. THER in the sparse reward setting\", \"Mention potential issues with ill-trained generators\", \"Add Ablation study: when should we activate the instruction generator vs. the number of collected positive trajectories. Figure 9 in the Appendix. Note that other plots are running (3000 positive trajectories)\", \"Add IRL for instruction following in the Related Work section\", \"Add Conditioned Language Policy in the Related Work section\", \"Mention that THER can be extended to other modalities in the conclusion\", \"We hope these responses satisfactorily address the raised concerns.\"]}
\\\"Speaker-follower models for vision-and-language navigation.\\\" Advances in Neural Information Processing Systems. 2018.\\n[5] Schaul, Tom, et al. \\\"Universal value function approximators.\\\" International Conference on Machine Learning. 2015.\\n[6] Andrychowicz, Marcin, et al. \\\"Hindsight experience replay.\\\" Advances in Neural Information Processing Systems. 2017.\\n[7] Chen, Howard, et al. \\\"Touchdown: Natural language navigation and spatial reasoning in visual street environments.\\\" Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2019.\\n[8] Das, Abhishek, et al. \\\"Neural modular control for embodied question answering.\\\" arXiv preprint arXiv:1810.11181 (2018).\\n[9] Jiannan Xiang et al. \\\"Not All Actions Are Equal: Learning to Stop in Language-Grounded Urban Navigation.\\\" Visually Grounded Interaction and Language Workshop. 2019\"}", "{\"title\": \"Official Comment\", \"comment\": \"First of all, thank you for reading the paper thoroughly and taking the time to review it.\\n\\n\\n1) As you mentioned, there is a potential bootstrapping problem when using incorrect mapping (Reviewer #3 speaks about a negative feedback loop). If the mapping is not functional, we can fear that the policy cannot learn the correct behavior, or start degenerating. One of the paper contributions was to observe that this situation does not occur in our experiments (even in a highly noisy setting). Following the rebuttal, we also run THER by triggering the mapper from the beginning to invalid trajectories in our buffer. Yet, we did not observe a substantial loose of performance (cf Appendix).\", \"one_potential_intuition_would_be_the_following\": \"if the instructions are invalid, the agent would follow an \\\"average\\\" policy, e.g., pick random objects; the goal-space would either be ignored or turned into a random representation. 
As those are non-degenerate cases, the agent may quickly recover as soon as the mapper becomes partially correct. Therefore, THER may only slow down the training in the worst-case scenario.\\nWe also want to emphasize that THER can be activated any time during the training, and a simple solution consists of waiting until the mapper has reached an acceptable performance. In this spirit, we updated the paper to explicitly assess the mapper quality by using a validation accuracy, and triggering it once a threshold is reached (using only collected samples, no external data is necessary to validate the generator, cf. Algorithm in the Appendix). Besides, we can use any language metric (BLEU, RED, SPiCE, METEOR, etc.) to replace the parser accuracy (We updated Section 4.2, second paragraph). Therefore, we could automatically assess a minimum level of language generation before triggering the instruction generator. In simple cases, the generator is learnt quickly and can be used right from the start, and in harder setups, the generator kicks in when the validation accuracy is high enough, avoiding potential mislabeling at early stages.\\n\\n\\nIf the language were more complex, the goal generator might not be able to learn the language from scratch. However, we can ease language learning by using pretrained word embeddings, e.g. GLoVe [1] or Fasttext [2], or using annotated trajectories. For instance, [3] and [4] successfully train an instruction generator in the Room2Room dataset with only 21k instructions. \\n\\n\\n\\\"HER does not have this problem as it has an oracle goal mapping function,\\\"\\n-> This is a fair point that we may not have emphasized enough in the paper: HER works in the absence of signals while THER only alleviates the sparse reward problem. We thus make it explicit in the limitation paragraph.\\n\\n\\n2) In the UVFA MDP setting, $f(s,g)$ is given by the environment [5,6]. 
More precisely, the terminal state is an absorbing state whose transition entails the final reward. In practice, we can use the reward function as a stopping criterion, and in this paper, we implement it as follows: $f(g,s) = r$. In a more realistic scenario, f is often hard-coded, e.g. the agent is close enough to the objective point. There has also been an emerging literature in vision-and-language navigation tasks to tackle this issue [7,8]. For instance, the agent has to learn when to stop to answer questions about its environment in embodied question answering [9]. In those scenarios, an additional stopping module is learned from data.\\n\\n\\nAgain, thank you very much for your comments; we hope to have answered some of your concerns such as the UVFA setting, invalid bootstrapping, and a practical solution to handle natural language. We are open to discussion or additional changes if you think they can improve the paper further.\"}", "{\"title\": \"References\", \"comment\": \"[1] Jaderberg, Max, et al. \\\"Reinforcement learning with unsupervised auxiliary tasks.\\\" arXiv preprint arXiv:1611.05397 (2016).\\n[2] Chaplot, Devendra Singh, et al. \\\"Embodied Multimodal Multitask Learning.\\\" arXiv preprint arXiv:1902.01385 (2019).\\n[3] Wang, Xin, et al. \\\"Reinforced cross-modal matching and self-supervised imitation learning for vision-language navigation.\\\" Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2019.\\n[4] Fried, Daniel, et al. \\\"Speaker-follower models for vision-and-language navigation.\\\" Advances in Neural Information Processing Systems. 2018.\\n[5] Bellemare, Marc, et al. \\\"Unifying count-based exploration and intrinsic motivation.\\\" Advances in Neural Information Processing Systems. 2016.\\n[6] Haber, Nick, et al. \\\"Learning to play with intrinsically-motivated, self-aware agents.\\\" Advances in Neural Information Processing Systems. 2018.\\n[7] Sahni, Himanshu, et al. 
\\\"Visual Hindsight Experience Replay.\\\" arXiv preprint arXiv:1901.11529 (2019).\\n[8] Jiang, Yiding, et al. \\\"Language as an Abstraction for Hierarchical Deep Reinforcement Learning.\\\" arXiv preprint arXiv:1906.07343 (2019).\\n[9] Ng, Andrew Y., Daishi Harada, and Stuart Russell. \\\"Policy invariance under reward transformations: Theory and application to reward shaping.\\\" ICML. Vol. 99. 1999.\\n[10] Anderson, Peter, et al. \\\"Vision-and-language navigation: Interpreting visually-grounded navigation instructions in real environments.\\\" Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018.\\n[11] Chen, Howard, et al. \\\"Touchdown: Natural language navigation and spatial reasoning in visual street environments.\\\" Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2019.\"}", "{\"title\": \"Official Comment (1/2)\", \"comment\": \"First of all, thank you for reading the paper thoroughly and taking the time to review it.\\n\\nIntroduction\\n----------------------\\n\\nFirst of all, thank you for your recommendations to improve the introduction readability, and we updated the paper accordingly.\", \"we_kept_the_questions\": \"\\\"are we eventually [...] and language acquisition?\\\". At the beginning of the introduction, we wanted to emphasize that interleaving RL and language may either conflict or help the learning process in the general case. 
Although instruction following has its specific motivations (\\\"you can communicate novel objectives to your agent\\\"), it is also a natural test-bed to couple language and RL, and study this interconnection.\\n\\nAs suggested, we motivated the use of language goal descriptors over state-space goal descriptors (as used with regular HER) by the following example: \\\"More generally, language adds a level of semantics, which allows generating textual objectives that could not be encoded with spatial observations as in regular HER, e.g., \\\"fetch a ball that is not blue\\\" or \\\"pick any red object\\\" \\\".\\n\\nWe agree that the line \\\"triggers [...] policy learning\\\" is misleading, and we removed it.\\n\\n\\nExperiments\\n----------------------\\n \\nWhy does noisy-HER work? -> An accuracy of 80 percent per attribute implies that an instruction is entirely correct only 41\\% of the time (0.8^4). In the remaining 59\\% of cases (1 - 0.8^4), at least one attribute is wrong, but usually at least one other attribute remains correct. As shown by the reward shaping experiment (described below), rewarding the agent when the picked object shares some attributes with the correct one triggers learning and allows the policy to outperform a random one. As few objects are present in the scene, all four attributes are not necessary to discriminate between objects (sometimes 1 or 2 are sufficient, see Fig.8 in the Appendix), allowing the policy to pick the correct object from time to time.\\n\\nThe synthetic noisiness distribution vs. real distribution -> As you mentioned, the distribution differs between the noisy-HER experiments and the final experiments. For instance, the final experiment can repeat words, discard a property, or contain different values of the same property. Yet, these cases do not occur in noisy-HER. 
It may be hard to compare both distributions rigorously, and we are not sure how informative it could be: we mostly designed noisy-HER as a proof of concept before trying our approach.\\n\\nNegative feedback loop (+Rev#2) -> Thank you for your remark, and we added a new paragraph and experiments to assess this point. Besides, we also described more rigorously how to train the THER mapper to limit this potential risk in the paper. \\nAs you suggested, we may fear that activating a poor-quality mapper could hurt the agent's training, and even create a negative feedback loop that would favor degenerate policies. We thus first launched our experiments by applying the mapper from the beginning. We observe that the agent always manages to learn a valid policy, and it was pretty robust to invalid instructions in our setting. \\nAs it may differ in other environments, we also updated the paper to explicitly evaluate the mapper quality by using a validation accuracy and triggering it once a threshold is reached (cf. Algorithm in the Appendix). In practice, we can save a percentage of the positive trajectories, and iteratively create a validation dataset. Besides, we can use any language metric (BLEU, RED, SPiCE, METEOR, etc.) to replace the parser accuracy when dealing with natural language (We updated Section 4.2, second paragraph).\\n\\n\\nApplying external expert HER -> Some environments, e.g., Room2Room [10] or Touchdown [11], come with a static dataset, and there are no instruction generators (or oracles). Therefore, it is not possible to apply HER, and a mapping function must be learned, as advocated in this paper.\"}", "{\"title\": \"Official Comment (2/2)\", \"comment\": \"Baselines -> Thank you for raising this point. Although we designed the experimental protocol to assess THER, we acknowledge that other baselines can give additional intuition, and we thus added another baseline (cf. Figure 4). As first suggested, we cannot benchmark Jiang et al. 
[8] models as they are tackling a slightly different problem: they perform *internal* instruction following operations inside the network, which is different from the UVFA setting that we are considering. Besides, the language models use pre-computed instructions, which cannot be generalized to unseen goals. The authors even mention that they leave the instruction generator to future work, and \\\"could not leverage the structure of language\\\" (page 16). \\nWe also consider auxiliary losses or inductive neural biases; although complementary, we believe that they are too orthogonal to our approach to be relevant in the current paper.\\nAs a result, we designed a reward shaping baseline where the agent has a reward of 0.25 for every matching property in the instruction when it picks an object. This baseline enforces a hand-crafted curriculum and dramatically reduces the reward sparsity, which is the key property we want to evaluate. In the experiments, this strong baseline only slightly outperforms DQN+HER, and DQN+THER is only a few percentage points away. Yet, such reward shaping requires human expertise and may alter the optimal policy [9], while THER has a close score without these drawbacks. In total, we have three baselines: DQN, DQN+reward\\\\_shaping, and DQN+HER.\\n\\n\\nVicious Circle -> As mentioned earlier, we recommend using a validation accuracy to train the mapper, and avoid over-fitting or reduced efficiency. As the mapper complements the Q-learner, it can be triggered anytime to kick-start the learning. It is thus more flexible than auxiliary losses, or inductive network biases that could also negatively impact the training (e.g., pixel prediction [1], hard-coded neural hierarchy - PACMAN [2]). 
We updated the Algorithm and pointed out this pipeline in the paper in the limitation paragraph.\\n\\n\\nTemporally extended tasks -> As discussed with reviewer #1, we believe that THER can be applied to temporally extended tasks, e.g., Room2Room [10], Touchdown [11], etc. Yet, we also believe that such environments would require a full paper on their own to be correctly evaluated. Nonetheless, there have been some hints that THER could be successfully applied in practice by warm starting the instruction generator with human demonstrations: [3] and [4]. THER can also be coupled with intrinsic motivation methods [5,6] to deal with longer trajectories if we want to avoid human annotation. \\n\\n\\nLanguage vs. state-based generalization. -> We believe that language has natural generalization properties, which are not present in state-space goal descriptors. For instance, \\n - Language goal-descriptors are more compact, easily interpretable, and have compositionality properties which may be absent from state-space goal descriptors, as described in the introduction.\\n - Language goal-descriptors remove potential distractors from the state/observation space: an observation may contain objects that are irrelevant to the goal at hand. \\n - Language goal-descriptors allow for more complex goals that cannot be represented in the state-space: e.g., negation (pick a ball that is not blue), basic reasoning (pick the biggest ball), missing properties.\\n - Language goal-descriptors are agnostic to the input space modality, while state-space goal descriptors depend on the input (which may impact learning quality, reduce potential transfer learning, require more engineering, etc.). In robotics, the input may be 2D coordinates, 3D coordinates, or RGB inputs, whereas the goal remains the same. 
Finally, [7] tried to generate visual goals along trajectories (a state-based analogue of our mapper), but the authors had to implement a complex GAN pipeline to obtain a state-based goal generator, and the mapper could not generalize well to unknown scenarios.\\n\\n\\n Related Work\\n----------------------\\n \\nAs suggested, we updated the related work section with:\\n - Fu, Justin, et al. \\\"From language to goals: Inverse reinforcement learning for vision-based instruction following.\\\" arXiv preprint arXiv:1902.07742 (2019). -> we added a few notes on IRL for instruction following\\n - Jiang, Yiding, et al. \\\"Language as an Abstraction for Hierarchical Deep Reinforcement Learning.\\\" arXiv preprint arXiv:1906.07343 (2019).\\n - Co-Reyes, John D., et al. \\\"Guiding policies with language via meta-learning.\\\" arXiv preprint arXiv:1811.07882 (2018). \\n\\nThank you very much for pointing out those relevant works.\\n\\n\\n Conclusion\\n----------------------\\nWe hope that we have correctly answered your questions, e.g., adding another baseline and preventing vicious circles. We remain open to discussion and to other changes toward improving paper quality. Again, we thank you for your extensive feedback.\"}", "{\"title\": \"References\", \"comment\": \"[1] Sahni, Himanshu, et al. \\\"Addressing Sample Complexity in Visual Tasks Using HER and Hallucinatory GANs.\\\" arXiv preprint arXiv:1901.11529 (2019).\\n[2] Chan, Harris, et al. \\\"ACTRCE: Augmenting Experience via Teacher's Advice For Multi-Goal Reinforcement Learning.\\\" arXiv preprint arXiv:1902.04546 (2019).\\n[3] Anderson, Peter, et al. \\\"Vision-and-language navigation: Interpreting visually-grounded navigation instructions in real environments.\\\" Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018.\\n[4] Chen, Howard, et al. \\\"Touchdown: Natural language navigation and spatial reasoning in visual street environments.\\\" Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 
2019.\\n[5] Wang, Xin, et al. \\\"Reinforced cross-modal matching and self-supervised imitation learning for vision-language navigation.\\\" Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2019.\\n[6] Fried, Daniel, et al. \\\"Speaker-follower models for vision-and-language navigation.\\\" Advances in Neural Information Processing Systems. 2018.\"}", "{\"title\": \"Official Comment\", \"comment\": \"First of all, thank you for reading the paper thoroughly and taking the time to review it!\\n\\nStrengths\\n-----------------\\n\\nModality Agnostic. -> As you mentioned, the method is agnostic to the goal modality. We did not highlight this point on purpose as two workshop papers already deal with a goal mapping function in vision [1] and language [2]. Thus, we did not want to claim our procedure as a novel contribution. However, we acknowledge that this paper formalizes the training pipeline independently of the modality. After reading your comment, we thus decided to mention this point at the end of Section 3 and the conclusion. Thank you for emphasizing this point. \\n\\n\\nWeaknesses\\n---------------------\\n\\nWe acknowledge the environment's apparent simplicity. However, we voluntarily set it up this way to evaluate the impact of our method. More precisely, we increase the environment size, the number of objects per room, and the number of attributes to break a heavily-tuned DQN. For instance, BabyAI only has two attributes (color and shape), which was too limited to assess language compositionality. However, we did not use walls as they mostly increase the exploration difficulty, which is not the core issue of the paper. In other words, we designed the environment to analyze THER properties while developing as much intuition as possible. Would you recommend making this point more explicit?\\n\\n\\nWe agree that THER requires a small number of positive samples to work. 
However, even the best RL algorithm would not work if it does not have reward signals, i.e., positive trajectories! It is a standard RL setting to be able to collect some positive samples with a random policy. In this paper, we show that THER reduces the number of positive trajectories needed to kickstart a DQN agent. \\n\\n\\nWe also share your view on assessing THER on more challenging tasks in the long run, e.g., Room2Room [3], Touchdown [4], etc. They are a natural extension to BabyAI, and they open interesting research problems such as complex language instructions, photo-realistic perceptions, or relying on static datasets. Thus, we believe that such environments would require a full paper on their own to be correctly evaluated. Nonetheless, there have been some hints that THER could be successfully applied in practice: [5] and [6] successfully train an instruction generator in the Room2Room dataset with 21k instructions. We could then warm-start the instruction generator (and the policy) before finetuning the agent. We updated the related section to reflect those research directions.\\nIn the end, this paper focuses on working on a synthetic environment to have a good understanding of the algorithm mechanism before scaling up to more complex settings. Therefore, we believe that it would still be impactful in its current form. \\n\\n\\nWe thank you for suggesting an additional ablation in Figure 4; we thus assessed different mapper accuracy levels in the Appendix (Some experiments are still running, and the paper will be progressively updated). In a few words, there is little impact from triggering the mapper early during the training, and THER is pretty robust to invalid early instructions in this setting. Thus, there is little incentive to wait for a perfect mapper, as it only delays the training. 
\\n\\nAs it may differ in different environments, we also updated the paper to explicitly evaluate the mapper quality by using a validation accuracy and triggering it once a threshold is reached (cf. Algorithm in the Appendix + footnote in Section 3). In practice, we can save a percentage of the positive trajectories, and iteratively create a validation dataset. Besides, we can use any language metric (BLEU, RED, SPiCE, METEOR, etc.) to replace the parser accuracy when dealing with natural language (We updated Section 4.2, second paragraph). We believe this procedure to be task-agnostic, rigorous, and easily scalable to complex scenarios.\\n\\n\\nConclusion\\n-----------------\\n\\nAgain, thank you very much for your comments; we hope to have answered some of your concerns, such as the scalability question and the additional ablation study. We are open to discussion or additional changes if you think they can improve the paper further.\"}", "{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"What is the specific question/problem tackled by the paper?\\nThis paper tackles the problem of learning language-conditioned policies from reinforcement learning. Unlike most language-conditioned navigation work which relies on human demonstrations (e.g. in the room2room environment), this work only learns from the agent\\u2019s experience using a generalization of hindsight experience replay.\", \"method_overview\": \"THER (textual HER) generalizes HER to cases where goals are not in the same space as states. To deal with this gap, THER learns a mapping from state space to goal space using successful trajectories. This mapping is then used to relabel unsuccessful trajectories with a guess of what goal was reached. 
This intuitive approach allows the text-conditioned agent to reach 40% at a 2D navigation task when conditioned on text such as \\u201cPick the large red circle\\u201d.\", \"strengths\": \"The method is well motivated and would be useful. The ablations showing how many successful trajectories are needed to learn the mapping (~1000-5000), how many time steps are needed to reach 1000 successes (~400k steps), and how accurate the mapping needs to be for HER to work (~80%) are thorough and easy to understand. This experimental completeness is itself a contribution. \\n\\nAdditionally, although the authors do not discuss this, this method is actually agnostic to the particular modality (e.g. text) of the goal space and could be used anytime the goal space differs from the state space.\", \"weaknesses\": \"The primary weakness of the paper is that the testbed environment and the textual goals are very simple. The \\u201clanguage\\u201d is just a list of up to 4 attributes describing the different objects and the control is simple navigation without any walls or visual variation. Additionally, the method requires accidentally getting successful trajectories early in training in order to train the mapping, and in this environment it is very easy to get successful trajectories. \\nI would be interested in seeing how this method would work in the room2room environment (or some other more complex task). While it is unlikely to outperform the prior methods that use the human demonstrations, it would be useful to see how close THER can get to that performance and with how many environment steps. The advantage of this environment is that it has real human knowledge, and the textual goals are limited in number, making the experiment much more realistic (as humans are unlikely to sit next to an agent and generate infinitely many diverse textual goals). \\n\\nA missing ablation in Figure 4 left is THER without waiting 400k steps before relabeling. 
In realistic scenarios, we would not be able to evaluate the mapper ahead of time to know when to start relabeling. How is performance affected if this knowledge is not available?\\n\\nOverall, I lean to reject the paper in its current form; I believe this paper would be more impactful with experiments involving more language complexity or more policy complexity.\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #3\", \"review\": \"This work attempts to learn instruction following agents without the requirement for a paired instruction-behavior corpus, and instead simply relying on environment interaction and using a form of hindsight relabeling to learn how to relate language and behavior.\", \"introduction\": \"\", \"it_thus_leads_to_the_following_questions\": \"are we eventually making the reinforcement learning problem harder, or can we generate learning synergies between policy learning and language acquisition? -> it really doesn\\u2019t seem like the point of instruction following is that. It seems like you want instruction following so that you can communicate novel objectives to your agent, with the promise of generalization. Maybe reword?\\n\\nThe motivation for using HER makes sense, but maybe a bit more would be useful to describe why we need text here and not just regular HER. \\n\\nI think this line \\u201ctriggers a learning synergy between language acquisition and policy learning\\u201d is pretty confusing and not really adding too much value. Would remove. \\n\\nOverall motivation makes sense, do language grounding in an interactive way much more effectively by leveraging hindsight relabeling but to get around the circular problem of hindsight generation leverage a model of successful behaviors seen thus far. 
This is a pretty neat thing to do!\", \"textual_her\": \"\\u201cWe emphasize that such signals are inherent to the environment, and an external expert does not provide them\\u201d -> I do not think this is true. Rewards do not magically show up in the environment, they have to be provided. I get what you\\u2019re trying to say but this statement is very often not true. Please revise.\", \"conceptual_question\": \"what happens if none of the random trajectories are successful coz the reward is so sparse? Wouldn\\u2019t this be prohibitive? Importantly, I would be curious to understand how the number of entries in D affects the relabeling function m_omega and how this can be good or bad depending on the schedule of training. For instance if the D is very small at the start, it\\u2019s not going to be very good at doing the relabeling and might be erroneous.\\n\\nExperiments\\n\\n\\u201cSurprisingly, even highly noisy mappers, with a 80% noise-ratio, still provides an improvement over vanilla DQN-agents\\u201d -> do you know why?\\n\\nThe synthetic noisiness that you introduce is from a different distribution than the type of noisiness you\\u2019d expect from just having very few successful trajectories to train m_omega right? How can we evaluate that?\\n\\nIf we consider the number of successful trajectories obtained just by accident, is it even 1000 or 5000 as required to train the m_omega? If this is a negative feedback loop couldn\\u2019t it just keep getting worse because we are using erroneous m? Should we use some notion of uncertainty or something to know when to relabel with m? Or does it happen always?\\n\\nI generally like Section 4.2 -> nicely motivated!\\n\\n\\u201cWe emphasize again that it is impossible to have an external expert to apply HER in the general case\\u201d -> why??\\n\\nCan the authors introduce other baselines? For instance the recent paper from Yiding Jiang, Chelsea Finn, Shane Gu and others might be a start. 
Corpus-based instruction following would be another. Maybe these can be compared in terms of the number of instructions that need to be provided to it? But I think for a successful ICLR paper, we would need 1-2 more meaningful baselines. \\n\\n\\u201cFinally, we observe a virtuous circle that arises.\\u201d -> is there some mechanism to ensure that this is a virtuous cycle and not a vicious one? Couldn\\u2019t we just have horrible label corruption and then everything goes bad?\\n\\nHow easily would this scale to more temporally extended tasks in minigrid which have larger grids and more challenging tasks which are harder to solve in the sparse reward case?\\n\\nCan we analyze whether the language goal space has some favorable generalization properties over a state based goal space as typical HER uses?\\n\\nThe language analysis in Section 4.4 is quite insightful and shows the good performance of the instruction generator over time. \\n\\nHow would this fare as language got more ambiguous and multimodal and the instruction generator had a harder time as well as HER might generalize more poorly?\\n\\nRelated Work\\nFu et al (From Language to Goals: Inverse Reinforcement Learning for Vision-Based Instruction Following) might be relevant for instruction following as well, and some of Karthik Narasimhan\\u2019s work. \\n\\nLearning interactively with language would also be related to Co-Reyes et al (Guiding Policies with Language via Meta-Learning)\\n\\nYiding Jiang\\u2019s recent work would also be relevant (https://arxiv.org/abs/1906.07343)\\n\\n\\nOverall I like the formulation, and it seems pretty useful for instruction following. But we need more comparisons, and a little more motivation on when this thing might become degenerate coz of the m labeling.
Perhaps even a discussion/experiment on the uncertainty measure might be helpful.\"}", "{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposes THER (textual hindsight experience replay), which extends the HER algorithm to the case when goals are represented as language. Whereas in HER the mapping from states to goals is the identity function, THER trains a separate modeling network which performs a mapping from states to goals represented by language. The policy (represented as a Q-function trained via DQN) takes in the goal (a command) as an additional argument as done in HER, which allows the agent to be commanded different tasks. The authors evaluate THER on the MiniGrid environment, where they demonstrate that THER greatly outperforms vanilla goal-conditioned DQN, even in the presence of significant label noise.\\n\\nOverall, combining HER with language-based goals is an interesting and novel problem, and potentially a promising approach to solving language-conditioned reinforcement learning where sparse rewards are common. The authors show fairly convincingly that THER heavily outperforms DQN, which fails to improve from the random initial policy. However, I have several conceptual concerns with the proposed algorithm:\\n\\n1) There seems to be a bootstrapping problem in the algorithm, with regards to the instruction generator and the policy. If the algorithm does not succeed in reaching goals, then the instruction generator m_w has little training data. However, if m_w is not good, then the algorithm will not be able to give good reward signal to the policy. HER does not have this problem as it has an oracle goal mapping function, so m_w is always good. 
Evidently, the algorithm worked on the domain that it was tested in, but do the authors have any intuition on when this bootstrapping behavior could be harmful, or some justification on why it would not happen? If the language was more complex (and not limited to a small set of template instructions), would the THER approach still be reasonable?\\n\\n2) How does the algorithm detect if a given goal state corresponds to successful execution of a command? Or in the notation of the paper, how is f(s,g) implemented? In general, this does not seem like a trivial question to answer if one were to implement this algorithm in a real-world scenario.\\n\\nMy overall decision is borderline (leaning towards accept), as the experiments were well done and serve as a good proof-of-concept, but I am unsure if this approach will scale well outside of the particular tested domain.\"}
HJg_tkBtwS
Model-Agnostic Feature Selection with Additional Mutual Information
[ "Mukund Sudarshan", "Aahlad Manas Puli", "Lakshmi Subramanian", "Sriram Sankararaman", "Rajesh Ranganath" ]
Answering questions about data can require understanding what parts of an input X influence the response Y. Such an understanding can be built by testing relationships between variables through a machine learning model. For example, conditional randomization tests help determine whether a variable relates to the response given the rest of the variables. However, randomization tests require users to specify test statistics. We formalize a class of proper test statistics that are guaranteed to select a feature when it provides information about the response even when the rest of the features are known. We show that f-divergences provide a broad class of proper test statistics. In the class of f-divergences, the KL-divergence yields an easy-to-compute proper test statistic that relates to the additional mutual information (AMI). Questions of feature importance can be asked at the level of an individual sample. We show that estimators from the same AMI test can also be used to find important features in a particular instance. We provide an example to show that perfect predictive models are insufficient for instance-wise feature selection. We evaluate our method on several simulation experiments, on a genomic dataset, a clinical dataset for hospital readmission, and on a subset of classes in ImageNet. Our method outperforms several baselines in various simulated datasets, is able to identify biologically significant genes, can select the most important predictors of a hospital readmission event, and is able to identify distinguishing features in an image-classification task.
[ "feature selection", "interpretability", "randomization", "fdr control", "p-values" ]
Reject
https://openreview.net/pdf?id=HJg_tkBtwS
https://openreview.net/forum?id=HJg_tkBtwS
ICLR.cc/2020/Conference
2020
{ "note_id": [ "5z8aGOxQ4", "BkxJ_TjqoB", "SyxvyrNtiH", "S1l9qV4YjH", "B1gDv4VYiB", "S1x8sQEKiB", "r1eV9y2k5r", "H1xZGoOhFS", "rkgGvZb3KS" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798733965, 1573727591367, 1573631199179, 1573631122075, 1573631070764, 1573630878169, 1571958667629, 1571748617085, 1571717466203 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1846/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper1846/Authors" ], [ "ICLR.cc/2020/Conference/Paper1846/Authors" ], [ "ICLR.cc/2020/Conference/Paper1846/Authors" ], [ "ICLR.cc/2020/Conference/Paper1846/Authors" ], [ "ICLR.cc/2020/Conference/Paper1846/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper1846/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper1846/AnonReviewer1" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"The paper presents an approach to feature selection. Reviews were mixed and questions whether the paper has enough substance, novelty, the correctness of the theoretical contributions, experimental details, as well as whether the paper compares to the relevant literature.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Thanks for the update\", \"comment\": \"Thanks for the updated version and your responses. The responses did clarify some questions I had. I still wonder about the motivation of the proposed approach (as opposed to, say, using some conditional dependence test statistic which does not require estimating conditional densities many times).\"}", "{\"title\": \"Updates and clarifications regarding the paper\", \"comment\": \"We thank the reviewer for their comments.\\n\\n*Computational expense*\\nThe reviewer points out correctly that fitting a model on each feature is computationally expensive in high-dimensional settings. 
While our algorithm is embarrassingly parallel, we added discussion about a faster version, the fast-ami-crt, which requires only 1 extra regression onto the response per feature. While the fast version may not enjoy the theoretical guarantees of ami-crt, it proves effective empirically. We also explore the computational vs statistical tradeoffs in choosing between the regular and fast versions.\\n\\nIn practice, knowledge about the data-generating process can be used. For example, in genetics, genes are independent of other genes located far away. Further, given groups of features, the inpainting method we use for images would apply to any data. We believe that automatically choosing such groups in a domain-agnostic manner is an interesting direction for future extensions building on top of our methods.\"}
Briefly, the simulated examples demonstrate that while other methods do perform well with respect to certain metrics, only ami-crt satisfies all desiderata: high power, uniformity of p-values for null features, and control of False-discovery rate (captured by AUROC). \\n\\n*# Minor but does affect the evaluation*\\n\\n* conditional distributions after lemma 1 vs. conditional distributions in Eq. 3*.\\n\\nAll f-divergences use the ratio of the conditional distributions in Eq. 3. This ratio simplifies to include the conditional distributions in the paragraph after lemma 1. We have noted that this might be hard to follow, and clarified this fact in the updated draft (final paragraph of section 2.1), rather than in the referenced appendix section.\\n\\n* section E.1 (appendix), Mixture of gaussians vs. gaussian density*\\n\\nThere seems to be a misunderstanding here. In the original draft, section E.1 (appendix), page 16, N(mu, sigma) is a gaussian density, not a gaussian random variable. The average of two gaussian densities is not a gaussian density, rather a mixture of gaussians. (In the new draft, the derivation is moved to section C.1)\\n\\n*Things that can be improved. Did not affect the score.*\\n\\n* Section 1.1: the sentence about permutation tests is vague.*\\n\\nWe have updated this sentence from \\u201cfail in the case of\\u201d to \\u201cfail to test conditional independence\\u201d for added clarity.\\n\\n* Page 2, our contributions: \\\"necessary\\\" should be \\\"sufficient\\\"?*\\n\\nWe believe that proper-test statistics are necessary for feature selection without making further assumptions about the relationship between outcome and response.\\n\\n* Section 2, conditional randomization tests.*\\n\\nWe have simplified the introduction to section 2 for clarity. The revised version better sets up the need for such tests in finite sample settings.\\n\\n* Eq 1: that (i) is unclear. 
Should state that for i=1,...,N.*\\n\\nThis equation states that all samples in the dataset have their jth feature replaced by a sample from q(x_j | x_{-j}). The equation has been updated to further clarify this.\\n\\n* Eq 2: rewrite the second line. The left hand side states that the p-value \\\"converges in distribution to\\\". The second line should be just 0.*\\n\\nEquation 2 describes the convergence in distribution of the p-values. If the jth feature was not important, then this distribution is Uniform(0,1). Otherwise, it converges to a distribution where observing 0 has probability 1. We have clarified this in the draft.\\n\\n* After eq.6, how to choose T (the number of bins) in practice?*\\n\\nThe choice of bins has been explored in the histogram approximation literature. We point the reviewer to the discussion in (Wasserman, 2006) and (Miscouridou, 2018).\\n\\n* Definition 3 is actually a proposition? It is unclear what is being defined there.*\\n\\nDefinition 3 has been updated to Proposition 1.\\n\\n* The word \\\"complete conditional knockoffs (CCKs)\\\" appears for the first time in Section 3.2 without any explanation.*\\n\\nWe have removed the terminology CCK and instead refer to the object q( x_ j | x_{ -j } ) itself as necessary.\\n\\n* Orange skin on page 8: what is \\\"~ exp(...)\\\"? An exponential distribution, or just exponential function?*\\n\\nThe discussion in the experiments about the data generation processes has been clarified. \\n\\nWe have updated the paper to clarify all definitions and added required information.\"}", "{\"title\": \"Updates and clarifications regarding the paper\", \"comment\": \"We thank the reviewer for their comments.\\n\\n*Overall*\\nWe have made edits in response to this review and reviewer 1. Your summary helped us identify the necessary edits. We clarify our contributions here:\\n\\nWe introduce the notion of proper test statistics. 
These are test statistics that, when used in a randomization test for feature selection, yield power that approaches 1 in the limit of data, and are uniformly distributed under the null hypothesis.\\nWe show estimators of expected divergences are proper test statistics. We develop AMI-CRT which uses the KL-divergence for a computational speedup over other divergences, and enables the reuse of model code for computing a null distribution. We also show that unlike the 0-1 loss, the log probability in the KL-divergence is smooth and therefore results in better calibrated p-values.\\nWe develop sufficient conditions to perform instance-wise feature selection, show how our estimates can be adapted to this setting, and compare our methods to several state-of-the-art baselines on simulated and real datasets.\\n\\n*Proper-tests and AMI-CRT*\\n\\nIn the original draft, we mention that the definition of a proper test statistic mirrors that of a proper scoring rule. So the advantages that proper test statistics offer are analogous to the advantages that proper scoring rules offer in supervised learning.\\nProper test statistics make minimal assumptions about the true data generating process and are asymptotically as powerful as any other test.\\nWe have since clarified the discussion to highlight the value of the KL divergence.\\nUsing the KL-based statistic does not require computing q(y | x_{-j}). This provides two important benefits: a) avoids learning from x and x_{-j} which have different dimensions and may require different model structures. For example, convolutional-networks require additional padding when learning from x_{-j}. This advantage also allows us to reuse our AMI-CRT framework to perform instance-wise feature selection. b) We compute one less conditional distribution per feature, thereby decreasing the amount of required computation.\\n\\n*Fitting regressions vs. conditional density models*\\nWe want to offer a clarification here.
As discussed in the paper, feature selection involves testing conditional independence properties of the true data generating distribution in general. In this setting, knowledge of the true conditional density is required for feature selection.\\nMoreover, the conditional density of interest is over a scalar response variable which we can solve via supervised learning. When supervised learning is hard, fitting a single model from the features to the response, which all but the most basic methods require, might be hard, thereby ruling out both prediction and selection.\\n\\n\\n*Confusion regarding model-agnostic*\\nWe used the phrase 'Model-agnostic' to mean that our testing procedure does not depend on a specific model. Instead, we use whichever model fits best for a particular task.\", \"we_have_retitled_our_paper\": \"\\u2018Black-box feature selection with Additional Mutual Information\\u2019 to resolve this confusion and clarify our contributions.\\n\\n*Refitting for each draw from the null*\\nIn the updated draft we include discussion about fast-ami-crt. Fast-ami-crt fits only a single null model for each feature (rather than one per draw from the null), and uses a mixture of the full model and the null model to compute the test-statistic. In our experiments, relative to baselines we show that the mixture model guards against errors from poor quality samples from the null.\\n\\n*Lemma 1 details*\\nThe reviewer points out correctly that the proof works for any divergence that can detect equality of distribution. However, we suggest f-divergences are simply an example of proper test statistics. Therefore, our proof need not rely on f-divergences. Regardless, we updated the writing to make this fact more apparent.\\n\\nWe note we only need the generalized inverse cumulative distribution function to guarantee uniformity of p-values. This exists when the cumulative distribution function of the test-statistic is continuous everywhere.
We have clarified the discussion regarding this in the paper and updated the proof.\\n\\nUnder the null hypothesis H0, for any sample size N, the distribution of the p-value will be uniform(0,1). The fact that the estimators converge to a single number does not have any implication on the convergence of the distribution of the p-value. The reason behind this is that the limit and comparison (indicator function) do not commute.\"}", "{\"title\": \"Updates and clarifications regarding the paper\", \"comment\": \"We thank the reviewer for their comments.\\n\\n*Proper tests and AMI-CRT*\\n\\nThank you for the comment about divergences. We have updated the discussion about divergences. It now includes the derivation of delta_j and highlights the advantages of the KL divergence. See section 2.2 for the full derivation.\\n\\nWe added explanations for each step in the derivation for delta_j. \\\\tilde{x}_j is sampled anew from x_j | x_{-j}. Any dependence between y and \\\\tilde{x}_j is therefore broken by conditioning on x_{-j}.\\n\\n*Experiments*\\n\\nIn response to the review, we have reorganized the paper to include the presentation of these important results in the main text.\\n\\n*FDR controlling methods*\\n\\nThe Benjamini-Hochberg correction (1995) requires p-values to control the FDR. This table shows only those methods that produce p-values so that FDR can be controlled. We have made this fact more explicit in the main text.\\n\\n*Hospital readmission *\\n\\nYes, we reference a citation (Strack et. al. 2014) that clinically validates features from the hospital readmissions dataset and selects a subset as important. These features are used to compute our ROC curves.\\nThe dataset is from Strack et al (2014). 
Here is the filtering criteria they apply:\\n(1)\\tIt is an inpatient encounter (a hospital admission).\\n(2)\\tIt is a diabetic encounter, that is, one during which any kind of diabetes was entered to the system as a diagnosis.\\n(3)\\tThe length of stay was at least 1 day and at most 14 days.\\n(4)\\tLaboratory tests were performed during the encounter.\\n(5)\\tMedications were administered during the encounter.\\nNo additional samples were dropped from this dataset in our experiments. These steps have been added to the appendix.\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper presents a method to provide some level of interpretation on the influence of input features on the response of a machine level model all the way down to the instance level. The proposed method is model agnostic. Quoting the authors, they advocate for methods that look at interpretability \\u201cas understanding the population distribution through the lens of the model\\u201d without restriction on the models fit. The problem is posed as a hypothesis testing problem. The paper proposes \\u201cproper test statistics\\u201d for model agnostic feature selection. It is argued that f-divergence tests are proper statistic tests, with the KL being particularly interesting as it provides computational advantages.\\n\\nI have found the paper interesting. The topic is relevant and the approach is interesting. However, I have two main reservations for this work. First, I have found the method difficult to follow and sometimes unclear. Important results are only explained in the appendix. For instance, the derivation of Equation 5 is important but only shown in the appendix. 
Furthermore, that derivation in the appendix needs to be clarified in my view. For instance, on page 15, for the derivation of $\\\\delta_I$, can you explain how you went from the second equality to the third equality where references to \\\\tilde{x}_j are removed from one line to another? It could be due to your definition for the term with a conditional independence\\twith the outcome assumed but I suggest that you clarify this as it is important for the paper and for the use of the KL. Also, in this equation, should it be $q(x_j|x_{-j}) instead of $q(x_j,x_{-j})$?\\n\\nThe second issue that I have is with the experiments. Any reason why the key results on the interpretability of the approach are mostly shown in the appendix (e.g., table 4,5,6)? Why does table 6 not show results for all the baselines? For the hospital readmission use case, were you able to also get percentages of important features and have it compared with the baselines and vetted for clinical significance? This is more minor but worth double checking in my opinion. For this experiment on re-admission, the paper claims to have data from 130 hospitals for 10 years. Yet the n numbers seems pretty small to me. Total number of events < 100 000 for 130 institutions over 10 years. That would mean that we are dealing with less than an average of 80 admissions per institutions per year. 
Please confirm or explain if any filtering was done beyond what is described in appendix I.\"}", "{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"# Paper summary\", \"this_paper_addresses_supervised_feature_selection\": \"given a D-dimensional input variable x = (x_1, ..., x_D), and a response variable y, the goal is to find a subset of \\\"useful\\\" features in x. Here, a feature x_j is useful if it is dependent on y even when conditioning on all other input variables (denoted by x_{-j}, which is a set). A generic procedure that can produce a p-value for each feature (allowing one to test whether each feature is useful) is the conditional randomization test (CRT) proposed in Candes et al., 2018. For the CRT to produce a valid p-value for each feature (input dimension) x_j, one needs to specify a test statistic that measures conditional dependence between x_j and y given the rest of the features.\", \"this_paper_contributes_the_following_results\": \"1. Propose using an estimate of the f-divergence for the conditional dependence measure and use it with the CRT (section 2.2). \\n\\n2. Measuring the conditional dependence with an f-divergence requires estimating a few conditional density functions. The paper considers the KL divergence as a special case of f-divergence. This particular choice turns out to reduce the number of conditional density functions that have to be estimated (section 2.3). The paper also shows that the resulting conditional measure coincides with what is known as the Additional Mutual Information (AMI) studied in Ranganath & Perotte, 2018. \\n\\n3.
The paper also studies instance-wise feature selection i.e., selecting a subset of input features which can explain the response specifically for one instance (example) x. Yoon et al., 2019 proposed a criterion to decide the importance of a feature for instance-wise feature selection (definition 2). Briefly, a feature x_j is deemed important if q(y | full x) > q(y | x without the jth feature), where q is the conditional density function of y given x. _Contribution_: The paper notes that this criterion may fail and derive sufficient conditions (Definition 3) under which this approach will always work. \\n\\nIn simulation on toy problems, the paper shows that the proposed method (KL divergence + CRT) has the highest mean under the ROC curve (Table 2), compared to competing methods. In real problems on images, the paper shows that the proposed instance-wise feature selection can be used to select relevant image patches (features) that explain the class of the input images. The paper also conducts experiments on hospital readmission data (Section 4.3), and genomics data (Section 4.2).\\n\\n\\n\\n# Review\\n\\nThe paper is overall well written with some parts that can be improved (details below). Introduction and related work in section 1 are easy to follow. The paper is also mostly self-contained and friendly to non-specialists who may not work on feature selection primarily. My concerns are\\n\\n1. I find that the amount of contribution is not sufficient. CRT is known from Candes et al., 2018. The present paper proposes using KL-divergence with it. This can be interesting if the combination gives some clear advantages. Unfortunately I do not find that this is the case. It turns out that one still needs to learn two conditional density functions (see lines 3-4 in Algorithm 1). Further and even more concerning, one has to refit another conditional density function *for each draw from the null distribution* (see \\\"Fit regression\\\" in the loop in Algorithm 1). 
As an intermediate step for solving the original feature selection problem, I find that learning conditional density functions is a much more difficult problem. All these limit the novelty of the idea. While the title of the paper contains \\\"model-agnostic\\\", the idea of fitting conditional density functions seems to contradict it. The paper could have considered some nonparametric conditional dependence measures but did not. For instance, see \\n\\nKernel-based Conditional Independence Test and Application in Causal Discovery\\nKun Zhang, Jonas Peters, Dominik Janzing, Bernhard Schoelkopf\\n2012\\n\\nand other papers that extend this paper.\\n\\nWhy was the approach of fitting conditional density functions chosen?\\n\\n2. Related to the previous point, refitting a conditional density model for each draw from the null distribution must be very costly computationally. This point is never addressed in the paper.\\n\\n3. Lemma 1 states that the expected f-divergence is a \\\"proper statistic\\\" (in the sense of Definition 1) i.e., p-value is uniformly distributed if the feature is not useful, and vanishes (asymptotically) if the feature is useful. This result unfortunately relies on a strong assumption that there is a consistent estimator for the f-divergence. In fact, the proof does not even rely on the fact that it is an f-divergence. It can be any divergence D(p,q) such that D(p,q) > 0 if p!=q and D(p,p) = 0. In the proof in section C.2 in the appendix, existence of the quantile function $(F^{-1}_N)$ is never discussed. I can see the first part of the proof (under the alternative H1). But I do not see the second part (under H0). Since $\\\\hat{f}$ is a consistent estimator by assumption, as N goes to infinity, the two quantities in the indicator function (in expectation) should both go to the same constant. Isn't this the case? \\n\\n4. 
As a contribution, the paper states a sufficient condition in Definition 3 under which instance-wise feature selection with the approach in Definition 2 is *always* possible. When does the condition hold in practice? How do we know if it holds or not? If it does not, what can go wrong?\\n\\n5. Toy experiments: What is D in Xor and Orange? Where is the \\\"selector\\\" problem in Table 2? In table 2, \\\"lime\\\" and \\\"shap\\\" also seem to perform well. The paper never explains why the proposed approach is better than other methods (only reporting higher mean area under the ROC curve). This should be possible for toy problems.\\n\\n\\n\\n# Minor but does affect the evaluation\\n\\n* Paragraph after Lemma 1: it is unclear why those conditional distributions are required instead of conditional distributions in Eq. 3.\\n\\n* Section E.1 (appendix), page 16: I think you should have $N( 0.5x_1 + 0.5x_2, \\\\sigma^2_\\\\epsilon)$ instead of \\n$0.5N(x_1, \\\\sigma^2_\\\\epsilon) + 0.5N(x_2, \\\\sigma^2_\\\\epsilon)$.\\n\\n\\n\\n# Things that can be improved. Did not affect the score.\\n\\n* Section 1.1: the sentence about permutation tests is vague.\\n\\n* Page 2, our contributions: \\\"necessary\\\" should be \\\"sufficient\\\"?\\n\\n* Section 2, conditional randomization tests: This paragraph is unfortunately not well written even though it is a very important prerequisite of this work. For instance, at \\\" ... replaced by samples of $\\\\tilde{x}_j^{(i)}$ that is conditionally independent of the outcome...\\\", at that point, it is unclear \\\"conditioning on what\\\". Following this sentence, one approach might be to replace $\\\\tilde{x}_j^{(i)}$ with a constant (which is independent of everything else). It is not until definition 1 that this becomes clearer. Also, the \\\"null hypothesis\\\" (which is in the first line of equation 2) is never stated throughout the paper.\\n\\n* Eq 1: that (i) is unclear.
Should state that for i=1,...,N.\\n\\n* Eq 2: rewrite the second line. The left hand side states that the p-value \\\"converges in distribution to\\\". The second line should be just 0.\\n\\n* After eq.6, how to choose T (the number of bins) in practice?\\n\\n* Definition 3 is actually a proposition? It is unclear what is being defined there.\\n\\n* The word \\\"complete conditional knockoffs (CCKs)\\\" appears for the first time in Section 3.2 without any explanation.\\n\\n* Orange skin on page 8: what is \\\"~ exp(...)\\\"? An exponential distribution, or just exponential function?\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper proposes a practical improvement of the conditional randomization test (CRT) of (Candes et al., 2018).\\nIn the study of (Candes et al., 2018), the choice of the test statistic as well as how one estimates conditional distributions were kept open.\\nThe authors proposed \\\"proper test statistic\\\" as a promising test statistic for CRT, and proved that f-divergence is one possible choice.\", \"they_further_shown_that_kl_divergence_has_a_nice_property_among_possible_f_divergences\": \"KL-divergence cancels out some of the conditional distributions, and thus the users need to estimate only two conditional distributions to compute the test statistic.\\nFor estimating those conditional distributions in the test statistic, the authors proposed fitting regression models.\\n\\nOverall, I think the paper is well-written and the idea is stated clearly.\\nThe use of KL-divergence for CRT seems to be reasonable.\\nThe proposed algorithms look simple and easy to implement.\\n\\nMy only concern is on the practical applicability of the proposed algorithms (which, however, may be not a unique 
problem for this paper, but for all the CRT methods).\\nThey require fitting regression models for each feature xj.\\nFor high-dimensional data with more than thousands of features, fitting regression models for all the features seems to be impractical.\\nFor the ImageNet data experiment, the authors successfully avoided this problem by using an inpainting model.\\nHowever, this approach is apparently limited to image data.\\nI am interested in seeing if there is any promising way to make the algorithms scalable to high-dimensional data.\\n\\n\\n### Updated after author response ###\\nThe authors have partially addressed my concern on the scalability of the proposed algorithm to high-dimensional data.\\nI therefore keep my score unchanged.\"}" ] }
ryevtyHtPr
Do Deep Neural Networks for Segmentation Understand Insideness?
[ "Kimberly M Villalobos", "Vilim Stih", "Amineh Ahmadinejad", "Jamell Dozier", "Andrew Francl", "Frederico Azevedo", "Tomotake Sasaki", "Xavier Boix" ]
Image segmentation aims at grouping pixels that belong to the same object or region. At the heart of image segmentation lies the problem of determining whether a pixel is inside or outside a region, which we denote as the "insideness" problem. Many Deep Neural Network (DNN) variants excel in segmentation benchmarks, but regarding insideness, they have not been well visualized or understood: What representations do DNNs use to address the long-range relationships of insideness? How do architectural choices affect the learning of these representations? In this paper, we take the reductionist approach by analyzing DNNs solving the insideness problem in isolation, i.e. determining the inside of closed (Jordan) curves. We demonstrate analytically that state-of-the-art feed-forward and recurrent architectures can implement solutions of the insideness problem for any given curve. Yet, only recurrent networks could learn these general solutions when the training enforced a specific "routine" capable of breaking down the long-range relationships. Our results highlight the need for new training strategies that decompose the learning into appropriate stages, and that lead to the general class of solutions necessary for DNNs to understand insideness.
[ "Image Segmentation", "Deep Networks for Spatial Relationships", "Visual Routines", "Recurrent Neural Networks" ]
Reject
https://openreview.net/pdf?id=ryevtyHtPr
https://openreview.net/forum?id=ryevtyHtPr
ICLR.cc/2020/Conference
2020
{ "note_id": [ "za6-9gX-SnT", "61KUonbIR0", "SJeRkVKDoB", "BJgVUXYPor", "BJxYNXtDsS", "r1lxfXKwsB", "rkggwfYwsr", "rJgEyzYwir", "SJgAmWFwoS", "r1ldz9K69r", "HkgS6FcY9H", "HyeXzhdjYS" ], "note_type": [ "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1586316858205, 1576798733936, 1573520357970, 1573520204425, 1573520177079, 1573520135870, 1573519959842, 1573519836281, 1573519653650, 1572866576440, 1572608445262, 1571683339045 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Paper1845/Authors" ], [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1845/Authors" ], [ "ICLR.cc/2020/Conference/Paper1845/Authors" ], [ "ICLR.cc/2020/Conference/Paper1845/Authors" ], [ "ICLR.cc/2020/Conference/Paper1845/Authors" ], [ "ICLR.cc/2020/Conference/Paper1845/Authors" ], [ "ICLR.cc/2020/Conference/Paper1845/Authors" ], [ "ICLR.cc/2020/Conference/Paper1845/Authors" ], [ "ICLR.cc/2020/Conference/Paper1845/AnonReviewer5" ], [ "ICLR.cc/2020/Conference/Paper1845/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper1845/AnonReviewer2" ] ], "structured_content_str": [ "{\"title\": \"Paper published at Neural Computation\", \"comment\": \"https://doi.org/10.1162/neco_a_01413\\n\\n*The paper in the ICLR website is not updated*\"}", "{\"decision\": \"Reject\", \"comment\": \"This paper investigates a notion of recognizing insideness (i.e., whether a pixel is inside a closed curve/shape in the image) with deep networks. It's an interesting problem, and the authors provide analysis on the limitations of existing architectures (e.g., feedforward and recurrent networks) and present a trick to handle the long-range relationships. 
While the topic is interesting, the constructed datasets are quite artificial and it's unclear how this study can lead to practically useful results (e.g., improvement in semantic segmentation, etc.).\", \"title\": \"Paper Decision\"}", "{\"title\": \"Rebuttal\", \"comment\": \"We appreciate the many positive aspects that R#2 highlighted about the paper. It is very encouraging. Thank you.\\n\\nRegarding the only concern, we agree with this reviewer that the experiment with off-the-shelf models is confusing as it is placed in the future work section and does not guarantee that our findings can improve segmentation in natural images. To avoid this confusion, we have moved the experiment to the experiments section, where it is useful to emphasize the lack of generalization of existing DNNs for segmentation. We have also added in the future work section that our findings leave several open questions that require future research, such as learning insideness to improve segmentation in natural images, in cartoons and sketches, and in other contexts, as well as improving other tasks that require spatial understanding.\"}", "{\"title\": \"Rebuttal\", \"comment\": \"We thank the reviewer for this very valuable and insightful review. In the following, we answer the reviewer\\u2019s questions.\"}
This simplification is because of the reductionist approach we have used in the paper, which isolates insideness from other factors and facilitates its analysis. Now that we have gained some understanding of the generalization capabilities of existing DNNs for insideness, we are ready to explore a more sophisticated version of insideness in future works. The Gestalt\\u2019s law of closure is a very interesting research direction. We have added this in the paper in section 2 and future work, jointly with the other factors we already commented (eg. the representation of the hierarchy of segments).\"}", "{\"title\": \"Rebuttal on (2) \\u201cConnections to current algorithms\\u201d\", \"comment\": \"2 ) \\u201cConnections to current algorithms\\u201d\\n(2.1) \\u201cWhat is the gain of using deep networks with regard to rather old techniques?\\u201d \\nNote that our analysis focuses on existing DNNs for segmentation that are state-of-the-art, ie. networks with dilated convolutions and with convolutional LSTMs. The use of \\u201cold techniques\\u201d, namely ray-intersection and the coloring algorithms, is solely for the purpose of mathematically demonstrating that the state-of-the-art DNN architectures can solve the insideness problem with a network\\u2019s size that is realizable in practice. Note that our proof is a proof of existence and we do not claim that the solutions we found are unique, ie. it is possible that there are even smaller networks that solve the insideness problem. \\n\\n(2.2) \\u201cconnections to recent algorithms of automatic fill-in of images of contours based on conditional GANs\\u201d\\nWe agree with R#1 that the paper [2.2] is related to our insideness work because it is a potential application of insideness in natural images. 
Also, the paper [2.2] helps motivate our work, as it is unclear if the DNN in [2.2] (which is a DNN for segmentation, FCN) uses insideness and captures the long-range dependencies in the image, or solely exploits biases in the training set that do not generalize in novel images. We have cited [2.2] in the Introduction. Thank you for pointing us to this interesting work.\"}", "{\"title\": \"Rebuttal\", \"comment\": \"We thank R#5 for all the comments and for pointing out what she/he finds unconvincing. This review has been valuable for improving the paper and in the following we address R#5\\u2019s concerns.\"}", "{\"title\": \"Rebuttal on 1. \\u201cUsefulness of learning insideness to improve segmentation\\u201d\", \"comment\": \"1. \\u201cUsefulness of learning insideness to improve segmentation\\u201d\\n \\nWe agree with R#5 that segmentation in natural images may involve different cues than insideness. This was commented in the introduction: \\\"[in semantic segmentation benchmarks], insideness is not necessary since a solution can rely only on object recognition.\\\" Also, we agree with R#5 that segmentation is not the same as insideness, eg. in the introduction we mention: \\u201c[In this paper,] we take the reductionist approach by isolating insideness from other components in image segmentation.\\u201d\\n \\nYet, note that the motivation of analysing insideness is to understand the generalization capabilities of existing segmentation architectures beyond current benchmarks in natural images. This motivation arises from the recent trend of tackling more sophisticated segmentation tasks, eg. segmentation in images that lack texture or color (as in cartoons and sketches) or with unfamiliar objects (such as objects with different textures from those seen during training), in new tasks that require more sophisticated visual spatial reasoning (such as containment or instance-aware segmentation), etc. 
Note that insideness is a key component for image segmentation in such general settings. Analysing insideness in isolation is a step towards solving these more challenging segmentation problems. Thus, the motivation of this work goes beyond improving DNNs in the current benchmarks (although improvements in these benchmarks with insideness can not be discarded, as pointed out in \\u201cfuture work\\u201d). We have reworded the Introduction in order to further clarify these points. \\n\\nWe think that R#5\\u2019s concern can be also resolved with R#2's comments, who has \\u201cread the paper thoroughly\\u201d (quoting R#2): \\\"This work is not like other segmentation publications that just propose a network and start training, but perform some deep analysis about the generalization capability of the existing network architectures.\\\", \\\"It helps other researchers to rethink the boundary problem by using the insideness concept. I think this work will have an impact in semantic segmentation field.\\\" and \\u201cThis paper is well written and well motivated.\\u201d\"}", "{\"title\": \"Rebuttal on 2. \\u201cMore analyses for experiment results\\u201d\", \"comment\": \"2. \\u201cMore analyses for experiment results\\u201d\\n\\nNote that the paper provides insights about why DNNs do not generalize in the subsection of 4.2 called \\\"visualization\\u201d of the initial submission. We show that the neurons of the feed-forward networks are tuned to the features of the curves in the dataset and there are no signs that they capture the long-range dependencies necessary for solving insideness in general. Also, we found that the recurrent networks expand the inside/outside regions starting from the curve, resulting in only local features being used to determine the direction of expansion. Thus, the DNNs that we evaluated do not generalize because they learned solutions that do not take into account the long-range dependencies in an effective way. 
These learned solutions are sufficient to achieve high accuracy in the family of curves seen during training, but they do not generalize to other curves. Then, in section 4.3 we show that the learning strategy can be constrained with stepwise learning in order to encourage that the learned solution captures the long-range dependencies and can generalize. We have reworded these sections to make clear the insights we provide. \\n\\nRegarding that R#5 does not find surprising that the stepwise learning improves the generalization capabilities, we would like to emphasize the massive gains of accuracy yielded by this strategy. Observe that the stepwise training leads to a cross-dataset accuracy of almost 100% while with the standard learning the cross-dataset accuracy is only ~20% in the best case. In the revised version of the paper, we have emphasized this massive improvement by splitting Fig.5b into two: one for the cross-dataset evaluation and the other for the within dataset evaluation (moved to the appendix). It can now be seen after a quick assessment that the improvement of the generalization capabilities with the step-wise learning is very large. We believe this is a non trivial observation, given that stepwise learning has not been used in any of the state-of-the-art learning strategies.\"}", "{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I made a quick assessment of this paper.\", \"title\": \"Official Blind Review #5\", \"review\": \"This paper investigates the problem of modeling insideness using neural networks. To this end, the authors carefully designed both feedforward and recurrent neural networks, which are, in principle, able to learn the insideness in its global optima. 
For evaluation, these methods are trained to predict the insideness in synthetically generated Jordan curves and tested under various settings such as generalization to the different configuration of curves or even different types of curves. The experiment results showed that the tested models are able to learn insideness, but it is not generalizable due to the severe overfitting. Authors also demonstrated that injecting step-wise supervision in coloring routine in recurrent networks can help the model to learn generalizable insideness.\\n\\nThis paper presents an interesting problem of learning to predict insideness using neural networks, and experiments are well-executed. However, I believe that the paper requires more justifications and analyses to convince some claims and observations presented in the paper. More detailed comments are described below.\\n\\n1. Regarding the usefulness of learning insideness to improve segmentation\\nThe authors motivated the importance of learning insideness in terms of improving segmentation (e.g., instance-wise segmentation). However, I believe that this claim is highly arguable and needs clear evidence to be convincing. Although I appreciate the experiments in the supplementary file showing that some off-the-shelf segmentation models fail to predict insideness, I believe that these two are very different tasks (one is filling the region inside the closed curve and the other is predicting the labels given the object texture and prior knowledge on shapes; please also note that segmentation masks also can be in very complex shapes, where the prior on insideness may not be helpful). It is still weak to support the claim that learning to predict insideness is useful to improve segmentation. \\n\\n2. More analyses for experiment results \\nIn the experiment, the authors concluded that both feedforward and recurrent neural networks are not generalized to predict insideness in fairly different types of curves. 
However, it is hard to find further insights in the experiments, such as what makes it hard to generalize this fairly simple task. Improving generalization using step-wise supervision in a recurrent neural network is interesting but not surprising since we simply force it to learn the procedure of predicting insideness using additional supervision. \\n\\nTo summarize, although the problem and some experiment results presented in the paper are interesting, I feel that the paper lacks justifications on the importance of the problem and insights/discussions of the results.\"}", "{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper shows that deep-nets can actually learn to solve the problem of \\\"what is inside a curve\\\" by using a sort of progressive filling of the space outside the curve. The paper succeeds in explaining that and in pointing out the limitations of standard learning in addressing this problem.\\n\\nHowever, \\n(1) the proposed demonstration is based on ideal (continuous, noiseless) curves. What would happen with actual (discontinuous, noisy) curves? What implications does this have for the requirements of the network?\\n(2) I think more connections to classical and current algorithms are required. For instance:\\n(2.1) The proposed demonstration (and the arguments of Ullman) reminds me of classical watershed algorithms [see 2.1]. What is the gain of using deep networks with regard to rather old techniques? Advantages are not clear in the text.\\n(2.2) What about connections to recent algorithms of automatic fill-in of images of contours based on conditional GANs [see 2.2]. 
It seems that these recent techniques already solved the \\\"insideness\\\" problem and even learnt how to fill the inside in sensible ways...\\nThen, what is the gain of the proposed approach?\", \"references\": \"[2.1] The Watershed Transform: Definitions, Algorithms and Parallelization Strategies. Jos B.T.M. Roerdink and Arnold Meijster. Fundamenta Informaticae 41 (2001) 187\\u2013228. http://www.cs.rug.nl/~roe/publications/parwshed.pdf\\n\\n[2.2] Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks. Jun-Yan Zhu, Taesung Park, Phillip Isola, Alexei A. Efros. ICCV 2017. https://arxiv.org/abs/1703.10593\"}
(3) The experiments are solid and thorough. Datasets are built appropriately for demonstration purposes. All the implementation details and results can be found in the appendix. (4) The results are interesting and useful. It helps other researchers to rethink the boundary problem by using the insideness concept. I think this work will have an impact in the semantic segmentation field. \\n\\nI have one concern though. The authors mention that people will raise the question of whether these findings can be translated to improvements of segmentation methods for natural images. However, their experiments do not answer this question. Fine-tuning DEXTR and Deeplabv3+ on the synthetic datasets can only show the models' weakness, but can't show your findings will help generalize the model to natural images. Adding an experiment on widely adopted benchmark datasets, such as Cityscapes, VOC or ADE20K, will make the submission much stronger.\"}" ] }
rygvFyrKwH
Adversarial Robustness as a Prior for Learned Representations
[ "Logan Engstrom", "Andrew Ilyas", "Shibani Santurkar", "Dimitris Tsipras", "Brandon Tran", "Aleksander Madry" ]
An important goal in deep learning is to learn versatile, high-level feature representations of input data. However, standard networks' representations seem to possess shortcomings that, as we illustrate, prevent them from fully realizing this goal. In this work, we show that robust optimization can be re-cast as a tool for enforcing priors on the features learned by deep neural networks. It turns out that representations learned by robust models address the aforementioned shortcomings and make significant progress towards learning a high-level encoding of inputs. In particular, these representations are approximately invertible, while allowing for direct visualization and manipulation of salient input features. More broadly, our results indicate adversarial robustness as a promising avenue for improving learned representations.
[ "adversarial robustness", "adversarial examples", "robust optimization", "representation learning", "feature visualization" ]
Reject
https://openreview.net/pdf?id=rygvFyrKwH
https://openreview.net/forum?id=rygvFyrKwH
ICLR.cc/2020/Conference
2020
{ "note_id": [ "BWCvP_iAZM", "HyxZF9njsS", "rke78YtiiS", "HyerEUFsoH", "HkeXjWKiiS", "SkefKjwjiB", "r1lFLcnDjH", "B1x0RYdPor", "r1xY0VPwjS", "Bke8kM9Ljr", "SJeuKWcIjr", "rJeVkb58sH", "ryerslqLsr", "Bklg_o8TFS", "rkljGH8aKB", "rylBUSKFYH" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798733907, 1573796473326, 1573783882701, 1573783084676, 1573781914554, 1573776249643, 1573534288984, 1573517781730, 1573512401033, 1573458398127, 1573458304488, 1573458139931, 1573458077479, 1571806056439, 1571804434954, 1571554637463 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1844/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper1844/Authors" ], [ "ICLR.cc/2020/Conference/Paper1844/Authors" ], [ "ICLR.cc/2020/Conference/Paper1844/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper1844/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper1844/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper1844/Authors" ], [ "ICLR.cc/2020/Conference/Paper1844/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper1844/Authors" ], [ "ICLR.cc/2020/Conference/Paper1844/Authors" ], [ "ICLR.cc/2020/Conference/Paper1844/Authors" ], [ "ICLR.cc/2020/Conference/Paper1844/Authors" ], [ "ICLR.cc/2020/Conference/Paper1844/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper1844/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper1844/AnonReviewer1" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"The paper proposes recasting robust optimization as regularizer for learning representations by neural networks, resulting e.g. 
in more semantically meaningful representations.\\n\\nThe reviewers found that the claimed contributions were well supported by the experimental evidence. The reviewers noted a few minor points regarding clarity that seem to have been addressed. The problems addressed are very relevant to the ICLR community (representation learning and adversarial robustness).\\n\\nHowever, the reviewers were not convinced by the novelty of the paper. A big part of the discussion focused on prior work by the authors that is to be published at NeurIPS. This paper was not referenced in the manuscript but does reduce the novelty of the present submission. In contrast to the current submission, that paper focuses on manipulating the learned representations to solve image generation tasks, whereas the current paper focuses on the underlying properties of the representation. Since the underlying phenomenon had been described in the earlier paper and the current submission does not introduce a new approach / algorithm, the paper was deemed to lack the novelty for acceptance to ICLR.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Re: Private response\", \"comment\": \"I see. I presumed that you set the response to private by mistake. I personally don't think there is anything wrong with revealing this information to the public, but I will discuss it with the area chair during the post-rebuttal discussion period and if he/she agrees that it's better not to make this information public, I will edit my comment.\"}", "{\"title\": \"Re: Updated rating\", \"comment\": \"Thanks for your comments. As we point out to R3 somewhat deep in the thread (and as you acknowledge in your comment), the papers do not in fact cover the same phenomenon: the NeurIPS submission shows that downstream applications are possible with robust models, while this paper is about representation learning.\\n\\nThere is also a key factual inaccuracy w.r.t. inversion vs. 
generation: we would refer the reviewer to our reply to R3, where we clarify this---generation and inversion are actually two *completely* different tasks (crucially, generation doesn't say anything about representation learning, whereas there have been many papers studying *solely* inversion in the context of representation learning).\\n\\nPlease let us know if we can make any further clarifications. The only real similarity between the two works is in the feature painting vs feature manipulation. We emphasize that the latter was the inspiration for the former as cited in the camera-ready paper distributed, which is why it is claimed novelly here. (For the purposes of the review process though, given the confusion, we are fine with the reviewers considering feature painting prior work---in our view, our work still provides novelty in this case both in thoroughly studying feature visualization through the lens of representation learning, and studying the inversion problem, which is completely orthogonal to the prior paper.)\"}", "{\"title\": \"Re: Review Updated\", \"comment\": \"Regarding inversion vs generation: We stress that these two tasks are in fact entirely different. To elaborate on this, we illustrate that neither implies the other.\\n\\n-> Inversion does not imply generation: The reviewer claims that we can generate images by inverting some representation R_0. However, without explicitly saying how to find R_0, this is an entirely vacuous claim, as one could just say that any computer vision task is \\\"just inverting some representation R_0\\\" (after all, one could just solve the task, find the corresponding representation, and invert it). 
In our experiments, we tried finding R_0 by learning distributions over representations, perturbing representations of natural images, finding the representations that maximize class scores within a ball, and various other methods---we are actually unable to find *any* synthetic representation that can be inverted successfully.\\n\\n-> Generation does not imply inversion: This direction, which is __more important (as the current submission introduces inversion, not generation)__, is entirely clear: just because class maximization introduces salient features does *not* mean that the features captured by the representation are sufficient to approximately invert an image.\\n\\nAs for the confusion around the prior work, we have stated numerous times that the lack of citation for the previous work in our submission to ICLR was an oversight and that the concurrency of the two works was handled in the most careful way we could.\\n\\nRegarding private response: We made some responses private because we did not think that discussing the reviewing (and decision) process of another conference would be appropriate for a public forum, and, as R2 notes, these would not be available to a reviewer in a double-blind review process.\"}", "{\"title\": \"Updated rating\", \"comment\": \"Thank you for your reply and clarifications. I don't want to take you on a roller coaster ride throughout this update, so I'll be upfront in saying that I have changed my rating to a weak reject.\\n\\nI have been following the conversation between the authors and reviewer #3 and I also read the anonymized version of the Neurips paper that anonymous reviewer #3 kindly provided. I've tried to ignore the discussion about the chain of dependencies between both papers and about which paper was uploaded to arXiv first because that information wouldn't be available to us if this was a fully anonymized process. 
However, the bottom line is how novel the idea introduced by this paper is given the existing literature. \\n\\nI agree with reviewer #3 about how both papers deal with the same phenomenon: adversarially robust networks learn features that have high correspondence with the high-level features in natural images, which are semantically meaningful to humans. However, I also agree with the authors' statement about the first paper (Neurips paper) being a downstream application of this phenomenon while the current paper is a more in-depth study of the phenomenon. In fact, the paper excels at this latter point since its presentation of the phenomenon is very well written and intuitive. Nevertheless, the applications presented in the paper are not as novel as the paper suggests. For example, the feature manipulations done in Section 4.2.1 of this paper are very similar to the feature paintings presented in Section 3.5 of the Neurips paper. It is true that the Neurips paper references this paper as a source of inspiration. However, since this paper would be published after the Neurips paper, it should not present the idea of manipulating features in adversarially robust networks as completely novel. Another example is the relationship between representation inversion and image generation that reviewer #3 highlighted in their updated response.\\n\\nI want to emphasize that I like the argument that the paper is trying to make and how it is trying to dig deeper into this phenomenon. Moreover, I think the quality of the presentation and writing is excellent. However, the claims about the novelty of the application of this phenomenon are not well justified given that the Neurips paper has already been accepted and will be published in the proceedings of that conference in December. \\n\\nThe question that stands is what could be changed to the paper to make it a stronger submission to future conferences? 
What I found most valuable about this paper was the study of the phenomenon itself and not necessarily the applications. Thus, one possibility would be, as suggested by reviewer #3, to study the representations of the networks trained using other adversarial optimization methods. Another very interesting alternative would be, as the authors suggested, to study the type of features that are predictive but not necessarily robust, which could guide the design of methods that could achieve both high accuracy and robustness to adversarial attacks.\"}", "{\"title\": \"Review Updated\", \"comment\": \"I've updated my review (See \\\"Update 2\\\").\\n\\n** Response edited\"}", "{\"title\": \"Further clarification\", \"comment\": \"It's not clear to me how the chain of dependency is this way. Was this paper submitted to NeurIPS concurrently to the Image Synthesis paper and didn't get in? Based on the dates they were first posted on arxiv, I presume that is the case.\"}", "{\"title\": \"Response\", \"comment\": \"Thank you for the clarification of your concerns.\\n\\nIn our opinion, the primary issue here is the delineation between performing downstream applications, and understanding the learned representation. Crucially, performing these applications alone is not sufficient to make any statements about the quality of representations of robust networks (and vice-versa)\\u2014the former simply establishes, as the reviewer notes, that it is possible to change salient features in the input via the classifier. In order to study representations in this work, we turn to established representation learning tests like inversion and feature visualization, which are not implied by the results in the NeurIPS papers. For example, inversion is not implied by any results in the NeurIPS paper, as that paper only shows that the image is manipulatable using gradient descent in input space, and not that the features of the image can be recovered from the representation. 
\\n\\nConcretely, to respond to each of your numbered points:\\n\\n1. Note that the NeurIPS paper cites the feature visualization observations from this submission as inspiration for feature painting.\\n\\n2. See response (1) above. The correct chain of dependencies is thus (feature visualization [this work]) => (feature manipulation), (feature painting). Note that part of our goal in doing feature manipulation is again to probe properties of the representation, showing that that features are introduced *gradually* into the image. (Conversely, the NeurIPS paper is just trying to exploit both of these properties to accomplish the downstream task of interactive image editing).\\n\\n3. We disagree that inversion parallels generation. For generation to work, all that is needed is that \\u201cmaximizing the dog class in the network introduces dog features.\\u201d On the other hand, for inversion to work one needs that \\u201cthe representation captures all the salient features of the given input in the representation.\\u201d In general, neither of these statements implies the other. (Also, in experiments for the NeurIPS paper we were unable to use inversion for generation, as we were unable to find representations corresponding to natural images).\\n\\nWe hope that the above points have alleviated some of the reviewer\\u2019s concerns, and would be happy to elaborate on any of them further.\"}", "{\"title\": \"Re: Response #3\", \"comment\": \"Thank you for your response.\\n\\nI do agree that the two papers focus on different things -- image synthesis is certainly not the same as representation inversion and feature manipulation; however, I don't agree they are fundamentally different. \\n\\nThe results in the two papers are a corollary of a single observation -- features learned by robust model correspond to salient aspects of the inputs. For example:\\n\\n1. 
Feature painting from the NeurIPS paper parallels feature manipulation in this submission (with an added mask).\n\n2. Feature visualization is also a corollary of feature painting -- in both cases, a feature is maximized with respect to the input image. The primary difference between the two is that in feature painting, the initial input image is a natural image (with a binary mask); whereas in feature visualization, the initial input image is a random image (or natural image without a mask). \n\n3. Image Generation from the NeurIPS paper parallels representation inversion in this submission -- maximizing a target label indirectly corresponds to inverting a representation that would maximize the target probability (Representations from in-domain images would already strongly correspond to a target label). I would acknowledge that feature inversion is slightly more general than image generation, and feature inversion of out-of-domain images is original to this paper. However, the qualitative results in B.1.2, Figure 14 are not very impressive -- I cannot tell what any of the inverted images are without looking at the original images. \n\nFor me, the primary appeal of both papers is that they are demonstrating an intuitive and interesting phenomenon on a range of examples. However, either of the two papers alone does a good job of demonstrating the phenomenon, and one doesn't add much to the discourse given the other. \n\nNonetheless, since the NeurIPS paper has been added in the discussion of this paper, my concern now is only about the lack of novelty. I've improved my score by one increment to reflect that.\"}", "{\"title\": \"Revision and responses uploaded\", \"comment\": \"We thank all the reviewers for their thoughtful comments and suggestions regarding our work. 
We have updated (a) the manuscript to fix the notational typos in Equations (1), (4), and (5) pointed out by the reviewers, as well as some minor wording/grammar/formatting edits; and (b) responded inline to each review.\"}", "{\"title\": \"Response #1\", \"comment\": \"We thank the reviewer for their comments and suggestions.\n\nWhy $\\\ell_2$ robustness:\nIn preliminary experiments, we found that both L-2 and L-infinity robust models (L2 and L-inf are the two most commonly studied threat models in adversarial robustness) yielded the properties explained in this paper. We thus decided to only study one of these settings for simplicity. We do agree that taking a detailed look at the effect of robustness metric (L2, L-inf, etc.) on representations is an interesting future direction.\", \"comparing_our_inversion_results_to_other_methods\": \"From a computational perspective, our method does not require training a separate network or access to a generative model. As such, it would only be comparable to Mahendran & Vedaldi (2015) who, like us, use gradient descent to minimize the L2 representation distance (Eq. 4). In comparison to theirs, our method does not require regularizers or hyper-parameter tuning and produces significantly better results qualitatively. Specifically, their inverted images often appear blurry or lack salient features of the original image (we invite the reviewers to inspect the results https://arxiv.org/abs/1412.0035). We attribute both the lack of need for regularization as well as the better qualitative results to the favorable properties of robust representations discussed in our work.\nFrom a performance perspective, given that the inversion quality for robust models outperforms that of standard models so clearly (c.f. 
Figure 3 middle vs bottom), we decided that quantitative experiments would not be necessary to support the presented thesis.\", \"with_respect_to_perceptual_meaningfulness\": \"It would indeed be interesting to see large-scale human studies comparing representation properties in future work. However, we believe that the difference between standard and robust networks (which is our main focus) is so apparent that a human-study would be unnecessary (see, for example, Figure 3 robust vs standard, or Figure 7 robust vs standard). \\n\\nWe have also fixed the notational issues brought to our attention, we thank the reviewer for pointing them out:\\n- Fixed (1) and (4) to be argmin instead of min (they are indeed the same equation but reproduced for clarity)\\n- Equation (5) was missing an \\u201cx_0 + \\u201c so that x always represents images.\"}", "{\"title\": \"Response #2\", \"comment\": \"Thank you for your comments and suggestions about our work.\\n\\nAs for the question about the robust models being less accurate, we agree that this is an interesting direction of study. Indeed, this question has been the focus of many recent papers in adversarial robustness (e.g., Su et al. 2019, Tsipras et al. 2019, referenced in our manuscript). Of these, Tsipras et al. (2019) provides a theoretical model very similar to what the reviewer is suggesting: that there are features that are predictive but not robust (and hence not human-meaningful). This view might also provide some insight into why robust models seem to have better feature representations as we observe in our work (i.e., robustness prevents the model from learning these brittle features). 
\\n\\nComments (fixed in the revision):\\n- Thank you for pointing out the typo in (1) and (4), we have corrected this.\\n- The error bars are over random draws of the source and target images, we have modified the figure caption to reflect this.\"}", "{\"title\": \"Response #3\", \"comment\": \"# As per the AC's instructions, we have written this review assuming that everyone has access to the relevant paper (NeurIPS 2019).\\n\\nBoth the NeurIPS 2019 paper and this submission manipulate inputs using a robust classifier. However, the papers are fundamentally different in almost every other facet: \\n\\nThe NeurIPS 2019 paper is focused entirely on the task of image synthesis. The main contribution of that paper has nothing to do with representation learning, but rather demonstrates that tasks traditionally performed by generative models (or task-specific methods) can be accomplished with a classifier alone.\\n\\nIn contrast, this work revolves around studying the features captured by the representations of robust classifiers, and showing that they are more aligned with human perception. Thus, our experiments are not on traditional computer vision tasks, but instead we use established methods for studying and understanding the representations learned by neural networks. Note that many of the tasks we consider have been the central studies of many representation learning papers:\\n\\n1. Inversion: Previous work has established inversion of deep representations as in Section 4.1 of our work as a tool for understanding the features captured by the representation, e.g., Understanding Deep Image Representations by Inverting Them, (Mahendran & Vedaldi), and other references in our paper. Our experiments indicate that robust networks may be learning a much more human-aligned set of features, as inverting them actually approximately recovers the image without the need for any regularization or post-processing. 
(For standard networks, Mahendran & Vedaldi find that regularization/post-processing techniques are needed for even moderately decipherable results.)\\n\\n2. Feature visualization & manipulation: Similarly, many prior works, e.g. Feature visualization (Olah et al) and others referenced in our paper, have established the feature visualization process as in Section 4.2 of our paper as a method for seeing what neurons are responsible for in classification. Our results show that in contrast to the negative result of Olah et al for standard networks (\\u201cNeurons [in the representation layer] do not seem to correspond to particularly meaningful semantic ideas\\u201d), neurons in the final layer of robust networks actually seem to learn clear, human-decipherable features.\\n\\nIn summary, the goal of this submission is to study and understand the feature representations of robust network, and not to accomplish any sort of downstream task using the networks (the latter was precisely the goal of the NeurIPS 2019 paper). \\n\\nThat said, since both papers do fall under the same broad umbrella of using a robust classifier to manipulate inputs, we should have referenced the NeurIPS 2019 paper from this one (failing to do so was a simple oversight on our part). We have updated the submission with a reference to the submission in the related work, and a shortened explanation of the difference between the two works.\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"===== Summary =====\\nThe paper presents a study about the representations learned by neural networks trained using robust optimization \\u2014 a type of optimization that requires the model to be robust to small perturbations in the data. 
Specifically, the paper presents results of ResNet-50 trained on ImageNet with standard optimization and robust optimization. The paper draws three main insights from studying the learned representations of the standard and robust networks. First, the representation of the robust network is approximately invertible. In other words, when recovering an image by matching the representation of a random image to the representation of a target image by adding noise, the recovered images are semantically similar to the target image; the recovered images look similar to a human. Moreover, this is also demonstrated with images from outside of the distribution of the training data. Second, the representation of the robust network, unlike the representation of standard network, shows semantically meaningful high level features without any preprocessing or regularization. This leads to the final insight, feature manipulation is easier in robust networks. This is demonstrated by adding noise to an initial image in order to maximize the activation of a specific higher level feature and stopping early to preserve most of the other features of the original image.\", \"contributions\": \"1. The paper demonstrates that robust optimization enforces a prior on the representation learned by neural networks that results in high correspondence between the high-level features of an image and its representation in the network, i.e., similar images share similar representations. \\n2. The paper shows that the features learned by networks trained using robust optimization are semantically meaningful to humans without having to use any form of preprocessing. \\n3. The paper demonstrates that robust networks facilitate feature manipulation by injecting noise that maximally activates one of the features in the representation. \\n\\n===== Decision =====\\nI consider that this paper should be accepted. 
The paper does not introduce any new algorithm or show any theoretical results, but it is a great source of insight and intuition about robust optimization and deep learning. Moreover, the paper excels at the presentation and careful study of each of the main findings and it is well framed within the robust optimization literature. \n\n===== Comments and Questions =====\n\nThere is still a major question that the paper does not directly address, but that is very relevant to robust optimization. Given that robust optimization seems to result in better-behaved and semantically meaningful representations, as evidenced by the findings in the paper, why is it that the performance of the resulting networks, in terms of classification accuracy, is lower than the performance of standard networks (trained with standard optimization)? It seems counter-intuitive that the robust network has worse accuracy than the standard network given that it is more robust to small perturbations. I am curious if we could obtain any insights about this issue based on what has already been done in the paper. For example, are there any salient features in the images that the standard network classifies correctly but the robust network does not? \n\n=== Minor Comments ===\n1. I think the operator in Equations (1) and (4) should be argmin since the noise is being added to x_1 in order to obtain x\u2019_1.\n\n2. What is the meaning of the error bars in Figure 4? I think this should be mentioned in the caption.\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #3\", \"review\": \"### Summary\n1. The paper proposes robustness to small adversarial perturbations as a prior when learning representations. \n2. 
It demonstrates that representations that satisfy such a prior have non-trivial properties -- they are easier to visualize, are invertible (i.e. optimizing an input that produces the desired activation leads to reasonable images), and allow for direct manipulation of input features (by changing a feature in the representation space and then optimizing the image to satisfy this new representation.)\n\n### Non-blind review \nTHIS IS NOT A BLIND REVIEW\nReviewing this paper reminded me of a recent NeurIPS paper I read. \n\nI went back to that NeurIPS paper (to better compare the similarities and differences) only to find out:\n\n1- The NeurIPS19 paper cites an earlier arxiv version of this paper as an inspiration for its approach. \n2- It is from the exact same authors. \n\nThis, unfortunately, means I know who the authors are (however, there is no conflict of interest). \n\nMore importantly, this paper is too similar to the NeurIPS paper and it's hard to review without taking into account the NeurIPS paper. In this review, I will treat the said NeurIPS19 paper as published work, and evaluate if this work adds more to the discourse. (I've refrained from naming the neurips paper so the anonymity is maintained for other reviewers; the authors, I presume, would immediately know which paper I'm referring to). \n\n### Decisions with reasons\n\nEven though I think the idea introduced in this paper is interesting, I would argue for rejecting this paper for the simple reason: It doesn't add much to the existing discourse. \n\nUsing the proposed framework (i.e. learning robust representations), it demonstrates two phenomena.\n\nFirst, it shows that robust models allow feature inversion. Second, it shows that it's easily possible to directly visualize and manipulate features for such a model. 
(Both of these are achieved using the same idea: treating input to the model as parameterized, and optimizing for a target activation)\n\nThese are interesting observations and show that robust models learn features that rely on salient parts of the input image. However, the NeurIPS19 paper shows this even more clearly. \n\nAs a result, I'm not convinced that demonstrating the same phenomena with different examples is sufficient for this to be a standalone paper. (Perhaps the two papers could have been one single paper). \n\n### Questions \n\nWhat is the rationale behind dividing examples showing robust models rely on salient parts of input into two papers? Is there a semantic meaning to the grouping i.e. showing feature inversion, feature manipulation, and visualization in one paper and Generation, inpainting, translation, etc in another?\n\nIf I understand correctly, all of these examples exist because the robustly learned representation relies on the salient parts of the input and not on the non-robust features. If that is the case, it makes more sense to show all of these examples in a single paper. \n\n### Update after Author's response\nSince the authors have added and discussed the pertinent NeurIPS paper in this submission, I'm updating my score. \nI still think that the two papers are more similar than they might seem (See Re: Response 3 for more details). \n\n\n### Update 2 \n\nI pointed out the similarities between the three contributions in this paper and the NeurIPS paper in \\\"Re: Response #3\\\" below. The authors replied to my concerns. I'm summarizing the author's position to my concerns followed by my response. \n\n#### Author's Position\nThe authors agreed that feature manipulation and feature visualization are similar, but pointed out that the chain of dependency is this paper -> NeurIPS paper and not the other way around. They mentioned that the NeurIPS paper cites this paper and acknowledges this. 
Moreover, they argued that even if we consider NeurIPS paper to be prior work, feature visualization is explored in much more detail in this paper. \\n\\n#### Response \\nI think the direction of the chain of dependency is not that important since neither paper clearly builds on top of the other. The NeurIPS paper is published work now, and it makes sense to consider it prior work (Especially since it is from the same authors). \\n\\nMoreover, during the NeurIPS review period, the authors did not cite this paper; they only added the citation in the camera-ready version. This means that during the NeurIPS review period, they did, in fact, take credit for the ideas used in feature painting. (The authors mention that they somehow did not, and just stated the method and showed the pictorial result in the NeurIPS paper. However, I don't see how it is possible to present a method and a pictorial result without citing other work and not take credit for the method and result.) \\n\\nI would agree with the authors that this paper does go into more detail for feature visualization. More specifically, this paper also looks at visualizing individual features in the representation (The NeurIPS feature painting restricts the visualization using a mask) and demonstrates that the same feature can be used to visualize similar semantic concepts (such as red limbs) on multiple images. This is definitely interesting, but still very related to the feature-painting result. It would have made more sense to include these feature visualization results in the NeurIPS paper instead of adding them in a separate paper.\\n\\n#### Author's Position 2\\nThey disagreed that feature inversion (this paper) is similar to image generation (NeurIPS paper). 
I did acknowledge in my initial response that feature inversion is slightly more general than image generation; however, the authors suggest that they are completely different.\n\n#### Response \nI think representation inversion is more similar to generation than it might seem. Representation for an in-distribution image would correspond to a class with high probability. Maximizing a class probability would indirectly optimize for a representation (Say R_0) that maximizes that class probability. Image generation, as presented in NeurIPS paper, can be seen as inverting R_0.\n \nMoreover, the qualitative results for feature inversion, as presented in this paper, are not extraordinary. In the majority of the inverted images, I cannot classify the inverted image correctly. That shows the model is still not paying attention to the correct aspects of the input to do classification. As a result, this paper certainly does not solve the feature inversion problem (Ideally, inverted features would highlight parts of the input necessary for making predictions and ignore other parts. Robust models, on the other hand, seem to be uniformly retaining all information of the image including the background and not highlighting the parts important for making predictions. As a result, many inverted images can not be classified by humans). \n\n#### My current position \nAt the end of the day, both this and the NeurIPS paper are demonstration papers (They are empirically demonstrating an unintuitive phenomenon). Both papers are demonstrating that robust models learn features that correspond to salient parts of the input. Even though both papers are nice, either one is sufficient to demonstrate the phenomenon. For this to be a stand-alone paper, the authors would have to do more in my opinion. 
One option would be to explore and compare different forms of adversarial robustness as priors (The paper is called \\\"Adversarial Robustness as a Prior for Learned Representations\\\" and not \\\"L2 Adversarial Robustness as a Prior for Learned Representations,\\\" after all). Another option would be to see if such representations are 'quantitatively' better in some settings (Such as for transfer learning). \n\nIn its current form, I feel that the two papers are too similar to recommend acceptance.\"}", "{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #1\", \"review\": \"Summary:\nThe paper shows that the learnt representations of robustly trained models align more closely with features that humans perceive as meaningful. They propose that robust optimization can be viewed as inducing a human prior over learnt features. Extensive experiments demonstrate that robust representations are approximately invertible, can be visualized yielding more human-interpretable features, and enable direct modes of input manipulations.\n\nThe paper indicates adversarial robustness as a promising avenue for improving learned representations from several aspects. It is well written and contains extensive experimental results. I'd suggest accepting the paper.\", \"questions_and_comments\": [\"Is there a particular reason that $L_{2}$ norm is used throughout the paper? 
How is the performance if using other norms?\", \"Compared to the other methods introducing priors or additional components into the inversion process, how is the quantitative inversion quality and computational complexity of the proposed method?\", \"The paper claims that the representations are more perceptually meaningful than the others, which may need to be evaluated with a broader set of human subjects.\", \"I think it should be argmin in Equation (1).\", \"Also Equation (4) is exactly the same as (1).\", \"Some symbols seem to be used somewhat interchangeably. E.g., x' in Equation (1) represents x+\\\delta, while x' in Equation (5) is \\\delta itself.\"]}
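The feature-manipulation procedure debated throughout the thread above (treat the input as the parameter, take a few gradient steps on one coordinate of the representation, and stop early) can be sketched in a toy linear setting. The map `W` and every name below are illustrative assumptions, not code from either paper:

```python
# Sketch of gradual feature amplification: gradient ascent on rep(x)[k]
# starting from a given input x0. For a linear representation R(x) = W x,
# the gradient of coordinate k with respect to x is simply row k of W, so
# each small step adds a fixed pattern to the input -- the chosen feature
# grows gradually while the rest of x is only mildly perturbed.

W = [[1.0, 0.5, -0.3],   # illustrative stand-in for a representation layer
     [0.2, -1.0, 0.8]]

def rep(x):
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def amplify_feature(x0, k, steps=5, lr=0.1):
    """Take `steps` ascent steps on rep(x)[k]; return the final x and the
    trajectory of the amplified coordinate (to show the gradual growth)."""
    x = list(x0)
    trace = [rep(x)[k]]
    for _ in range(steps):
        for j in range(len(x)):
            x[j] += lr * W[k][j]   # gradient of rep(x)[k] w.r.t. x[j]
        trace.append(rep(x)[k])
    return x, trace
```

Early stopping (a small `steps`) is what keeps the manipulation local, matching the reviews' description of maximizing a higher-level activation while preserving most other features of the image.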
HygDF1rYDB
Explaining Time Series by Counterfactuals
[ "Sana Tonekaboni", "Shalmali Joshi", "David Duvenaud", "Anna Goldenberg" ]
We propose a method to automatically compute the importance of features at every observation in time series, by simulating counterfactual trajectories given previous observations. We define the importance of each observation as the change in the model output caused by replacing the observation with a generated one. Our method can be applied to arbitrarily complex time series models. We compare the generated feature importance to existing methods like sensitivity analyses, feature occlusion, and other explanation baselines to show that our approach generates more precise explanations and is less sensitive to noise in the input signals.
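A minimal sketch of the procedure this abstract describes: score each observation's importance as the change in model output when that observation is replaced by a counterfactual generated from the past. The carry-forward generator and toy model below are illustrative stand-ins; the paper's method samples from a learned conditional distribution over counterfactual trajectories:

```python
# Counterfactual feature importance for a time series model, in toy form:
# importance[t][i] = |f(x_{1:t}) - f(x_{1:t} with feature i at time t
# replaced by a generated counterfactual value)|.

def carry_forward_generator(past, feature):
    # simplest counterfactual: repeat the feature's last observed value
    return past[-1][feature]

def feature_importance(model, series, generator):
    """series: list of observations over time, each a list of feature values.
    model: maps a prefix of the series to a scalar output."""
    importance = [[0.0] * len(series[0]) for _ in series]
    for t in range(1, len(series)):          # t = 0 has no past to condition on
        base = model(series[: t + 1])
        for i in range(len(series[t])):
            cf_obs = list(series[t])
            cf_obs[i] = generator(series[:t], i)   # counterfactual value
            importance[t][i] = abs(base - model(series[:t] + [cf_obs]))
    return importance
```

With a model that only reads the latest value of feature 0, only time steps where feature 0 jumps away from its previous value receive nonzero importance, which is the kind of behavior the evaluation is designed to check.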
[ "explainability", "counterfactual modeling", "time series" ]
Reject
https://openreview.net/pdf?id=HygDF1rYDB
https://openreview.net/forum?id=HygDF1rYDB
ICLR.cc/2020/Conference
2020
{ "note_id": [ "x1ZETd2h-j", "H1xZ7kxqsr", "HJgflT0FiB", "Bke9MQTKsr", "BkerW5tNqH", "HyxTgDBx9H", "BklV1x0J5H", "SJeAAjm19r", "ryejqDjqtS" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798733878, 1573678872631, 1573674217949, 1573667602216, 1572276733101, 1571997429336, 1571966940039, 1571924950183, 1571628947180 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1843/Authors" ], [ "ICLR.cc/2020/Conference/Paper1843/Authors" ], [ "ICLR.cc/2020/Conference/Paper1843/Authors" ], [ "ICLR.cc/2020/Conference/Paper1843/Authors" ], [ "~Dani_Kiyasseh1" ], [ "ICLR.cc/2020/Conference/Paper1843/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper1843/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper1843/AnonReviewer1" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"The paper proposes a definition of and an algorithm for computing the importance\\nof features in time series classification / regression. \\nThe importance is defined as a finite difference version of standard sensitivity \\nanalysis, where the distribution over finite perturbations is given by a \\nlearned time series model. \\nThe approach is tested on simulated and real-world data sets. \\n \\nThe reviewers note a lack of novelty in the paper and deem the contribution \\nsomewhat incremental, although exposition and experiments have improved compared \\nto previous versions of the manuscript. \\n \\nI recommend to reject this paper in its current form, taking into account on the reviews and my own \\nreading, mostly due t a lack of novelty. \\nFurthermore, the authors call their method a \\\"counterfactual\\\" approach. \\nI don't agree with this terminology. \\nNo attempt is made to justify is by linking it to the relevant causal literature \\non counterfactuals. 
\nThe authors do indeed motivate their algorithm by considering how the classifier \noutput would change \\\"had an observation been different\\\" (a counterfactual), but \nmathematically, in their model this is the same as asking \\\"what changes if the observation is \ndifferent\\\" (interventional query).\", \"title\": \"Paper Decision\"}", "{\"title\": \"Thank you for your feedback\", \"comment\": \"We address your concerns in detail below.\n\n1.Technical novelty: \nWe agree our method has little technical novelty but fortunately that was not the goal of this contribution. We purposely used standard models and approaches, because our main contributions are conceptual: We introduce a new approach to explaining model decisions. Our approach generalizes standard saliency map methods, which rely only on gradients. Gradient-based perturbations can be viewed as infinitesimally-different counterfactuals. We approximately integrate over the entire space of counterfactuals to find the data that would, in expectation, most change the decision if it were observed. This definition of counterfactual based explanations is more suitable for a time series domain, as it allows us to characterize underlying dynamics in the signal. To the best of our knowledge, this is a substantial contribution to an overlooked problem in modeling time series data and has the potential of being used in a lot of applications, including but not limited to healthcare.\n\n2. Detailed Analysis:\nWe agree that more analysis would shed light on the model. To improve this aspect of the paper, we\u2019ve added these extra analyses in the paper as described below.\n-- Non-stationary time series:\nYour question about nonstationarity is a good one. Like all model explanation methods, our method\u2019s explanations will depend on whether the model being explained successfully models non-stationarity. 
Following this thread, we updated our simulation experiment to include non-stationarity by making the transition probability in the HMM a function of time. The table below reports performance results for this data, and appear in the updated draft. We would like to clarify that since the model output is a probability, the scale of the output doesn\\u2019t change over time with non-stationarity and this will not be an issue here.\\n \\nMethod | AUROC | AUPRC \\n__________|___________________|_____________________ \\nFFC | 0.954 +/- 0.005 | 0.259 +/- 0.035 \\nAFO | 0.724 +/- 0.012 | 0.0374 +/- 0.002 \\nFO | 0.734 +/- 0.009 | 0.0376 +/- 0.003 \\nSens | 0.712 +/- 0.011 | 0.0428 +/- 0.001 \\nLIME | 0.4214 +/- 0.080| 0.0181 +/- 0.0008\\n\\nResults demonstrate our method works on models trained on non-stationary data. All other baselines deteriorate substantially in this regime compared to their performance on stationary data.\\n\\n-- Explanation quality as a function of generator quality:\\nWe agree that evaluating the choice of generators is an interesting question. Different generators can be obtained by varying the data size for training the generator or changing the overall model structure. In our experiments, varying training size does not impact the generator performance significantly. This is mainly because of the time-series nature of our signals. A few samples are enough for the generator to model the dynamics. As shown in Figure (\\u201cAUROC_percent.pdf\\u201d [1]), we can see that the change in explanation performance is also negligible.\\nIn terms of different generator models, the AFO method we introduce is another class of generators that samples counterfactuals from a marginal distribution. We also added a generator that only carries forward previous observations based on your feedback. As shown in the table below, using a simpler generator decreases performance. 
We will investigate other generators to further quantify the quality of explanations as a function of generator quality.\\n\\nMethod | AUROC | AUPRC \\n____________________________________________ \\nFFC | 0.954 | 0.259 \\nFFC-Carry Forward | 0.8692 | 0.1215 \\nAFO | 0.724 | 0.0374 \\nFO | 0.734 | 0.0376 \\n\\n-- Sanity check\\nWe have also added another section to our evaluation, called sanity checks (Sec 4.5) as recommended by Reviewer #1. The goal is to evaluate the robustness of explanations in accordance with tests proposed by Adebayo et al, Sanity Checks for Saliency Maps, NeurIPS, 2018. The results further support our claim that our explanations are expectedly sensitive to model parameters and the relationship between the input and labels.\\n\\n[1] https://www.dropbox.com/sh/tpfci7w1tqc3id5/AACwtkwqBUuGnZU8xqfldjyNa?dl=0\"}", "{\"title\": \"Thank you for your feedback\", \"comment\": \"1. Saliency test:\\nWe agree that adding \\u2018sanity checks for saliency maps\\u2019 is a great idea, and have done so. In addition to presenting the results here, we have also added a corresponding section in the paper. Here, we report the results of 3 randomization tests on our simulation data II to evaluate the robustness of our approach. The results further support our claim that our explanations are expectedly sensitive to model parameters and the relationship between the input and labels. Thus our FFC method passes the model and data randomization tests.\\n\\n-- Data randomization test:\\nFor this experiment, we train the model on data with shuffled labels (predictor model AUC is 0.62). The table reports the drop in explanation performance for all baselines. We see that FFC has the greatest drop compared to the original model. 
An example of explanations generated for the trained and randomized models is shown in the figure (\\u201crandomized_data.pdf\\u201d [1]).\\n-- Model randomization test:\\nWe randomly shuffle the parameters of the prediction model and inspect the effect on the explanations. The shuffled predictor model has an AUC of 0.52. Figure (\\u201crandomized_param.pdf\\u201d [1]) shows an example of importance assignment results for the randomized model as well as the trained models. We can see that the explanation results are different for the two models in this example. The overall performance drop for the randomized model is also reported in the table below.\\n\\n Data Randomization Test Model Randomization Test \\nMethod | $\\\\Delta$ AUROC | $\\\\Delta$ AUPRC | $\\\\Delta$ AUROC | $\\\\Delta$ AUPRC \\n____________________________________________________________________________|__________________\\n FFC | -0.2888 | -0.2129 | -0.2351 | -0.2202\\n AFO | -0.2060 | -0.0184 | -0.1662 | -0.0174 \\n FO | -0.2070 | -0.0176 | -0.1565 | -0.0172 \\n SA | -0.2252 | -0.0258 | -0.3501 | -0.0253 \\n\\n2. Runtime analysis:\\nThank you for bringing this up. In the table below we report inference runtime (in seconds) for all the baseline methods on a machine with a Quadro 400 GPU and an Intel(R) Xeon(R) CPU E5-1620 v4 @ 3.50GHz. The runtime for the counterfactual approaches (FFC, FO, and AFO) depends only on the length of the time series. This is clear for AFO and FO, but it is also the case for FFC since the conditional generator models the joint distribution of all features. This property is an advantage since, for approaches like LIME, the runtime depends on both the length of the signal and the number of features. Overall, FFC performs reasonably compared to ad-hoc counterfactual approaches, since inference on the RNN-based conditional generator is efficient. This is one of the reasons that the RNN generator model is used to approximate the conditional distribution. 
We have also added this analysis to the supplementary material of the paper. \\n\\n Method | Simulation data (t=100, d=3) | MIMIC data (t=48, d=27) \\n FFC | 0.99 | 0.36 \\n AFO | 1.64 | 0.62 \\n FO | 2.09 | 0.84 \\n LIME | 2.23 | 8.72 \\n SA | 0.212 | 0.055 \\n\\n3. Log-probabilities:\\nLog-probabilities represent the likelihood of a sample under the original data distribution. We have used them only as a measure to evaluate the quality of the generators. These values are not reported to compare explanation qualities across methods, and we will ensure we clarify this in the text.\\n\\n4. Adversarial attacks on explanations:\\nWe would appreciate it if you could clarify your concern in this regard. In our understanding, the robustness of explanations would depend on the robustness of the prediction model to adversarial attacks. However, investigating the explanations of non-robust models under adversarial attacks is an interesting extended analysis of our method. We will actively consider this as future work since it can be a valuable standalone contribution.\\n\\n5. Univariate nature of the counterfactuals:\\nAs stated in the future work section of the document, our next steps are to extend this method to find subsets of features with the highest importance. This involves sampling multivariate counterfactuals, as you suggest, based on feature correlations. Enumerating all possible subsets of features to estimate importance is inefficient. We are actively considering follow-up work that will allow doing this efficiently.\\n\\n[1] https://www.dropbox.com/sh/tpfci7w1tqc3id5/AACwtkwqBUuGnZU8xqfldjyNa?dl=0\"}", "{\"title\": \"Thank you for your feedback\", \"comment\": \"We would like to thank the reviewer for taking the time to provide thoughtful and constructive feedback on our paper. We addressed all your comments and believe it made our paper better in the process.\\n\\n1. 
Definition:\\nThank you very much for spotting the discrepancy in our two definitions of importance. There was indeed a typo in the algorithm. The difference primarily occurs when the counterfactual sample is very close to the actual observation. In this situation, averaging before evaluating the absolute value may underestimate the amount of possible risk change we have observed. Therefore, we think that Definition 1 should be preferred. We have fixed the algorithm box and code to match, and updated all results in the paper (along with making the simulations more realistic based on other reviews); this did not change the relative performance of our method.\\n\\n\\n2. Choice of conditioning:\\nWe agree, and have updated the paper to clarify the benefit of using the conditional distribution versus the marginal. \\nThe conditional distribution we use models the underlying characteristics of an individual sample, while the marginal is an average over the population. Counterfactuals under the marginal distribution may not necessarily be likely or realistic for a specific sample, as reflected in the log-probabilities. Unrealistic counterfactuals can result in inaccurate importance assignments since they can potentially overestimate the change in model outcome significantly, but only because they are unlikely under the individual sample\\u2019s distribution. \\n\\n3. Explanation of baseline failures: \\nSensitivity analysis characterizes the approximate change in the risk due to infinitesimally small perturbations to the observations. Such estimates are unreliable for the levels of perturbation observed when a patient changes state or deteriorates significantly. We see this unreliability in Simulation Experiment II (Figure 2), where the underlying generative model has latent states (HMM). The meaningful time and feature importances correspond to state transitions, while sensitivity analysis highlights within-state variations. 
FFC resolves this issue by evaluating risk changes on perturbations that are clearly indicative of past patient state and could be significantly large if the underlying state has changed. By virtue of the design of RNN-based methods, sensitivity analysis is also more likely to highlight more recent observations as important, as we see in the MIMIC-III experiments. By observing local estimates of risk changes, FFC and other methods avoid this issue. Finally, we do not believe it is entirely fair to LIME to compare it to methods like FFC and AFO that are specially designed for time series data. LIME locally approximates the model around the current sample to determine importance, and thus uses much less dynamic information than methods designed specifically for time series, leading it to perform much worse than all other methods.\\n\\n\\n4. Log-likelihood and better explanations:\\nWe have only used the log-probabilities as a measure to evaluate the quality of generated counterfactuals. These values are not reported to compare explanation qualities across methods, and we will ensure that this is clarified in the text. The relationship between the choice of modeled distribution and the explanations is elaborated in \\\"Choice of conditioning\\\" above.\\n\\n5. Clinical annotations:\\nWe will obtain additional clinical annotations based on your suggestions, as was also mentioned in our future work section. The difficulty with this task is that clinicians often don\\u2019t draw direct relationships between observations and interventions. However, we are hoping to find some accurate and generalizable annotations by interviewing a larger group of clinicians and aggregating results across them.\\n\\nWe agree that the frequency of measurements is related to which features are deemed more important. However, this is due to the way the model learns to predict a risk change. We are therefore limited by the data as well as the model in terms of the quality of the explanation. 
All explanation baselines we compare to will suffer from this limitation as well. Thank you for bringing this up, as we consider it an important follow-up evaluation to characterize how the frequency of observations affects importance estimates.\"}", "{\"title\": \"Inline response\", \"comment\": \"We thank you for your comment. Please see our responses below.\\n\\n1) Could the authors provide some more details about the RNN conditional generator. \\n\\nThe RNN conditional generator only models the joint distribution of all the features at every time step using a multivariate Gaussian distribution. The trained generator is eventually used to generate the counterfactual vector $\\\\hat{\\\\mathbf{x}}_{t}$. To derive the importance of feature $i$ at time $t$, we concatenate this marginal $\\\\hat{x}_{i,t}$ with the actual observations $\\\\mathbf{x}_{-i,t}$.\\n\\n2) Based on your definition of importance, it appears that it might be sensitive to models that lack robustness to input perturbations. \\n\\nOur method explains model behavior and, as mentioned in the draft, is definitely dependent on model performance. If a model lacks robustness, the explanations would highlight the feature importances the model has picked up on. This is not a limitation of the explanation method, but of the model itself.\\n\\n3) The word 'importance' is being used loosely. In Figure 5, for instance, it is interesting that the FFC and AFO approaches identify clinically valid important features with regard to the subsequent intervention. The task, however, was mortality prediction. Therefore, are these features important in the context of the overall mortality task or to the intermediate interventions? Perhaps the authors can identify locally and globally important features. (By global, I mean pertaining to the high-level task of mortality prediction.) \\n\\nThe definition of importance is only associated with the mortality task. 
Interventions, as we know, generally help stabilize a patient's condition after a deterioration. We use information about interventions to validate the explanations generated for the predictive task.\\n\\n4) Would the relative importance of the variables remain the same when the model is trained with fewer clinical parameters? Consistency in this context would be useful in the event certain clinical environments do not have access to/cannot collect all the included variables. \\n\\nThe process of generating explanations using the conditional generator is completely decoupled from the model itself. We do not recommend deploying models with different sets of available features across different clinical environments.\\n\\n6) Could the authors shed light on an example from the MIMIC dataset where their approach produces nonsensical results? This would provide future researchers with insight on how to improve upon your approach. \\n\\nOne limitation of our method is that we evaluate the importance of every observation separately. This can result in imperfect importance assignment in the case of highly correlated signals, because this correlation will be broken. Clinically, it is relevant to derive feature importance over subsets of features. We identify this as important future work in the draft.\"}", "{\"title\": \"Simple Yet Interesting Approach and Results - Several Questions\", \"comment\": \"Implementation Details -\\n\\n1) Could the authors provide some more details about the RNN conditional generator. This is how I understood it: the outputs of the RNN are a mean vector and covariance matrix which are used to model a multivariate Gaussian. A vector, z_t, is sampled from this Gaussian and concatenated to the input vector at time t (except for the feature of interest) to eventually generate a scalar counterfactual observation. 
Please correct my understanding if this is inaccurate.\\n\\nDefinition of Importance - \\n\\n2) Based on your definition of importance, it appears that it might be sensitive to models that lack robustness to input perturbations. Is there a way to show that this metric leads to consistent feature importance results regardless of the model sensitivity?\\n\\nMIMIC Results - \\n\\n3) The word 'importance' is being used loosely. In Figure 5, for instance, it is interesting that the FFC and AFO approaches identify clinically valid important features with regards to the subsequent intervention. The task, however, was mortality prediction. Therefore, are these features important in the context of the overall mortality task or to the intermediate interventions? Perhaps the authors can identify locally and globally-important features. (By global, I mean pertaining to the high-level task of mortality prediction). \\n\\n4) Would the relative importance of the variables remain the same when the model is trained with fewer clinical parameters? Consistency in this context would be useful in the event certain clinical environments do not have access to/cannot collect all the included variables. \\n\\n5) In Figure 5, FFC, AFO, and FO all appear to place close-to-zero importance on the features around the onset of a risk-score of 1. Intuitively, this makes sense and if not enforced in any way into the model, is a reassuring outcome of your method. \\n\\n6) Could the authors shed light on an example on the MIMIC dataset where their approach produces nonsensical results? This would provide future researchers with insight on how to improve upon your approach. 
\\n\\nThank you,\"}", "{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper presents a new method for computing the importance of features in time series, called Feed Forward Counterfactual (FFC).\\nIn previous work, the explainability problem in time series was tackled with feature occlusion (FO) and sensitivity analysis (SA) methods. However, previous counterfactual-based methods do not carefully consider an appropriate conditional distribution and generate out-of-distribution counterfactuals.\\nThe proposed FFC method addresses this issue by leveraging a generative model which learns the underlying dynamics and generates a realistic counterfactual given the past observations. FFC is evaluated on simulated and real datasets and shows that it is better at localizing important observations over time compared to the other baselines.\\nIn summary, this paper introduces a way of defining the feature importance at every time point. The main idea of this paper follows in line with [Chang et al. 2019], which addresses the problem of out-of-distribution counterfactuals. Although the experiments show successes of the proposed method on several datasets, the major weakness of this paper is the lack of technical novelty and detailed analysis of the proposed method. For example,\\nIf the time series is non-stationary, this could incur a different amount of change in the model output, and the proposed time importance might not work. 
How about this?\\nDid the authors consider trying out varying sizes of training data or different generator models?\\nMinor\\nOn page 2, p(\\\\mathbf{x_{t,i}|\\\\mathbf{X_{0:{t-1}}}}) -> p(x_{i,t}|\\\\mathbf{X_{0:{t-1}}}})\\nOn page 6, affected by feature and -> affected by feature 1 and\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"--- Overall ---\\n\\nThis paper proposes a method for evaluating the influence of individual observations on the output of a time series prediction model by replacing each (discrete time) observation with its conditional expectation given the other observations. They evaluate this method qualitatively on synthetic, healthcare, and climate datasets. I reviewed this paper for NeurIPS and was happy to see that the authors have made substantial improvements to the presentation and evaluation of the method. With that said, I think that the methodological contribution is incremental (sampling from a conditional rather than marginal distribution), there is at least one major correctness issue that needs to be addressed, and the analysis of the experiments fails to explain why the models perform differently.\\n\\n--- Major comments ---\\n\\n1. The Monte Carlo approximation in Algorithm 1 does not approximate Imp(i,t). Specifically, because the averaging is done before the absolute value, Algorithm 1 approximates |F(X_{0:t}) - E[F(X_{0:t-1}, x_{-i,t}, \\\\hat{x}_{i,t})]|. This is also a valid measure of feature importance and it is not clear from the paper why we should prefer one over the other.\\n\\n2. I think the paper needs to do a better job explaining why sampling from the conditional leads to better explanations than sampling from the marginal. 
The second paragraph makes an argument based on variance, but it is not clear that low variance translates to better explanations. In particular, using mean imputation has very low variance, but I would expect it to give poor explanations. I recommend using a toy example to make this point. For example, in a healthcare context, doctors are reacting to changes relative to a particular patient's baseline. A conditional model can capture this baseline but a marginal model cannot.\\n\\n3. In general, I thought that the experiments were well done, but the analysis stops short of explaining *why* the methods perform differently. Put differently, I think it is really important to clearly explain why certain methods fail while others succeed. For example, the authors demonstrate that sensitivity analysis fails on the synthetic data, but never explain why. I am looking for a statement of the form: \\\"Sensitivity analysis fails on this data because... FFC solves this weakness by doing... which is reflected in the experimental results.\\\"\\n\\n4. In 4.2.1, it is very unsurprising to me that a model that samples from an approximation of the conditional has higher likelihood under the conditional than samples from the marginal, but why should we expect this to lead to better explanations?\\n\\n5. I thought the idea of looking at feature importance just before clinician intervention was a very clever evaluation, but I wanted the qualitative evaluation to go one step further. That is, does bicarbonate being the most important feature just before administration of norepinephrine and fluids make clinical sense? Is this picking up on a specific condition and if so what condition? A clinician could tell you what they are typically reacting to when they administer fluids or vasopressors and you can compare what they say to what the model says. I was surprised to see the top features all being lab measurements as opposed to vital signs. 
In particular, in a vacuum, I would expect systolic blood pressure to be the most important feature in both of these cases. Is it possible that the frequency of measurements affects which features are selected as important?\\n\\n6. I thought the GHG experiment was *much* better and clearer in this version of the paper. Well done.\\n\\n7. I recommend moving the notation from the appendix to the main paper. I don't think a reader should have to reference another document to follow notation.\\n\\n--- Minor comments ---\\n\\n1. Pg. 3 \\\"The magnitude of our...\\\": I call the authors' definition of feature importance absolute, not relative. I would expect a \\\"relative importance\\\" to be a ratio of some sort (e.g. relative risk). \\n\\n2. Pg. 5: Figure 8 --> Figure 3\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper proposes an extension of feature occlusion (FO) [Suresh et al., 2017] in which they sample from a pre-trained generative model for replacing the observed variables. Technically, it is in the category of saliency maps, only with possibly larger perturbations defined by the generative model. Thus, it most likely should inherit the same properties as the saliency maps.\\n\\nThe authors experiment on both synthetic and real-world datasets. They use log-probabilities of the generated samples as a metric for the quality of the counterfactuals; however, because not all baselines are based on counterfactuals, this approach has limited usefulness. 
Also, it seems that the log-probabilities are too small, indicating that most likely the authors have reported the summed log-probabilities instead of the average log-probabilities.\\n\\nGiven the similarity to the saliency maps, the authors should have tested the proposed method on the sanity checks in [1]. Also, the authors should have examined the robustness of the proposed explanations given the adversarial vulnerability phenomenon.\\n\\nDespite sampling from a generative model, because of the univariate nature of the counterfactuals used in this paper, the process might create invalid data points. For example, increased blood sugar is usually correlated with increased blood pressure. However, this method does not account for the correlation among the features.\\n\\nFinally, this method should be very slow to run. The authors should have compared the run-time speed of the algorithms in the experiment section, too.\\n\\n[1] Adebayo, J., Gilmer, J., Muelly, M., Goodfellow, I., Hardt, M., & Kim, B. (2018). Sanity checks for saliency maps. In NeurIPS.\"}" ] }
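The two inequivalent importance estimators debated in the reviews above (Reviewer #2's comment 1 and the authors' "Definition" response) can be sketched with a toy example. This is an illustrative sketch only, not the paper's code: the risk model `F` and the counterfactual samples are hypothetical stand-ins.

```python
# Toy comparison of the two importance estimators discussed above.
# Definition 1:        Imp = E[ |F(X) - F(X with x_{i,t} replaced)| ]
# Algorithm-box form:  | F(X) - E[F(X with x_{i,t} replaced)] |  (averaging first)
# F and the counterfactuals below are hypothetical, not from the paper.
import random

random.seed(0)

def F(x_t):
    # Toy risk model: a clipped linear score of the current observation.
    return max(0.0, min(1.0, 0.5 + 0.3 * x_t))

def importance_def1(x_t, counterfactuals):
    # Absolute value inside the expectation (Definition 1).
    return sum(abs(F(x_t) - F(c)) for c in counterfactuals) / len(counterfactuals)

def importance_avg_first(x_t, counterfactuals):
    # Averaging before the absolute value (the original algorithm-box variant).
    mean_risk = sum(F(c) for c in counterfactuals) / len(counterfactuals)
    return abs(F(x_t) - mean_risk)

# Counterfactuals scattered symmetrically around the observation: risk changes
# in opposite directions cancel under averaging-first, but not under Definition 1.
x_t = 0.0
cf = [random.gauss(0.0, 1.0) for _ in range(1000)]
d1 = importance_def1(x_t, cf)
af = importance_avg_first(x_t, cf)
assert d1 >= af  # Jensen's inequality: averaging first can only underestimate
```

By Jensen's inequality the averaging-first variant never exceeds Definition 1, which matches the authors' observation that averaging before the absolute value can underestimate the possible risk change.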
rkg8FJBYDS
Variational Diffusion Autoencoders with Random Walk Sampling
[ "Henry Li", "Ofir Lindenbaum", "Xiuyuan Cheng", "Alexander Cloninger" ]
Variational inference (VI) methods and especially variational autoencoders (VAEs) specify scalable generative models that enjoy an intuitive connection to manifold learning --- with many default priors the posterior/likelihood pair $q(z|x)$/$p(x|z)$ can be viewed as an approximate homeomorphism (and its inverse) between the data manifold and a latent Euclidean space. However, these approximations are well-documented to become degenerate in training. Unless the subjective prior is carefully chosen, the topologies of the prior and data distributions often will not match. Conversely, diffusion maps (DM) automatically \textit{infer} the data topology and enjoy a rigorous connection to manifold learning, but do not scale easily or provide the inverse homeomorphism. In this paper, we propose \textbf{a)} a principled measure for recognizing the mismatch between data and latent distributions and \textbf{b)} a method that combines the advantages of variational inference and diffusion maps to learn a homeomorphic generative model. The measure, the \textit{locally bi-Lipschitz property}, is a sufficient condition for a homeomorphism and easy to compute and interpret. The method, the \textit{variational diffusion autoencoder} (VDAE), is a novel generative algorithm that first infers the topology of the data distribution, then models a diffusion random walk over the data. To achieve efficient computation in VDAEs, we use stochastic versions of both variational inference and manifold learning optimization. We prove approximation theoretic results for the dimension dependence of VDAEs, and that locally isotropic sampling in the latent space results in a random walk over the reconstructed manifold. Finally, we demonstrate the utility of our method on various real and synthetic datasets, and show that it exhibits performance superior to other generative models.
[ "generative models", "variational inference", "manifold learning", "diffusion maps" ]
Reject
https://openreview.net/pdf?id=rkg8FJBYDS
https://openreview.net/forum?id=rkg8FJBYDS
ICLR.cc/2020/Conference
2020
{ "note_id": [ "xJgB3NAy9D", "HJekY6E3iS", "HkefWPY7or", "r1eW2LtQsr", "Hyle6WtmoH", "ryefctiqqH", "HyeY1E_y9B", "BkgOaj6RKr" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798733850, 1573830007218, 1573259002184, 1573258920879, 1573257656247, 1572678025643, 1571943393379, 1571900352167 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1842/Authors" ], [ "ICLR.cc/2020/Conference/Paper1842/Authors" ], [ "ICLR.cc/2020/Conference/Paper1842/Authors" ], [ "ICLR.cc/2020/Conference/Paper1842/Authors" ], [ "ICLR.cc/2020/Conference/Paper1842/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper1842/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper1842/AnonReviewer3" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"This paper proposes to train latent-variable models (VAEs) based on diffusion maps on the data-manifold. While this is an interesting idea, there are substantial problems with the current draft regarding clarity, novelty and scalability. In its current form, it is unlikely that the proposed model will have a substantial impact on the community.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Updated manuscript\", \"comment\": \"Thank you again for your insightful reviews. We have made a few changes to the manuscript. Namely, four items:\\n\\n#1: We made changes to manuscript to reflect many of the items suggested by R1. We also proofread the manuscript and fixed several grammatical errors pointed out by R2. Thank you both for your thorough reading!\\n#2: We added an extra experimental section (6.4) that compared the local bi-Lipschitz constant between our method, WGAN-GP, VAE, and SVAE. 
As the local bi-Lipschitz constant is a sufficient condition for a homeomorphism, we are able to directly evaluate how well-behaved the mappings of each method are. We show that our method has state-of-the-art performance by this new measure.\\n#3: We re-arranged the document to more clearly describe the locally bi-Lipschitz property. We moved 5.1 (the property) to Section 2 in background, and added some sentences in the intro and in Section 5 to clarify our measure.\", \"edit\": \"Added some more changes\"}", "{\"title\": \"Response to Blind Review #3\", \"comment\": \"Thank you for this thoughtful review. We can briefly address your main concern that several of the elements are approximations to the original formulation. We agree that the SpectralNet embedding is an approximation of the kernel eigenvectors that form a diffusion map. However, given that we still minimize the Rayleigh quotient to the same degree as the eigenvectors, then the latent space created by SpectralNet will be the same as the subspace spanned by the eigenvectors. This would allow a carryover of similar guarantees from diffusion maps. The closest guarantee that can be attained for SpectralNet, or truly for any network trained on manifold data, is that a small network can learn such a subspace with dependence only on the intrinsic dimension of the manifold (Shaham, Cloninger, Coifman, 2017). So this justifies that the approximation should be close to the true original formulation. This is the same argument for the inverse (VAE decoder) and the results in our Theorem 1.\"}", "{\"title\": \"Response to Blind Review #1\", \"comment\": \"Thank you for the thoughtful review. We will first address your questions and concerns about the bi-Lipschitz property. You are correct that it is first brought up in Section 5, and that can be adjusted by moving it to a related works in Section 2 when discussing the diffusion map. 
To clarify the connection, the bi-Lipschitz property is a property of diffusion maps and Laplacian embeddings, as proved by Jones et al. Because of this, as long as we model the eigenfunctions of the kernel with SpectralNet, we don't have to regularize to maintain this property (and thus a stable homeomorphism). It comes up in Section 5 so that we can prove there exists a stable inverse to map from the latent space back to the original data.\\n\\nAs for verifying the condition, we will add to our manuscript a description of the exact condition. The quantity to measure is $\\\\|\\\\psi(x) - \\\\psi(y)\\\\| / \\\\|x - y\\\\|$ for any pair of points $x, y$ with $y \\\\in B(x, r)$ for some radius $r$. We can add an experimental verification of this measure in the appendix for the experiments run. The closer this statistic is to 1, the closer you are to creating an isometry in the latent space. \\n\\nAlso, we can address your questions about the sampling procedure. We are sampling $x' \\\\sim p(x' | x)$ when we input $x$ into the random walk VAE. This $x'$ will be in the neighborhood of $x$. The sampling procedure is to draw a collection of $x'$ from the seed $x$, and then use these $x'$ as seeds to draw new points $x''$. After several iterations of this procedure, we will have sampled points everywhere on the data manifold. This is demonstrated in Figure 3, where the bottom line is one iteration of the sampling procedure, and the top line is the collection of points after enough iterations to converge to the distribution $p(x)$ on the manifold.\\n\\nWe further address your concern about how we removed the dependence of the first term in (3) on x'. It is difficult to make theoretical guarantees about the tightness of the ELBO --- the lack of an approximation guarantee is a crucial property given up by variational inference methods (compared to MCMC techniques) in the interest of computational efficiency. However, we can make an intuitive argument. 
Note that $z' = \\\\psi(x')$. Therefore, assuming we approximate the diffusion map to a reasonable accuracy, we do not lose much information when the dependence on x' is dropped.\\n\\nFinally, you noted that we should compare our method to more recent iterations of GANs that attempt to treat mode collapse. Perhaps it can be made more clear, but we do in fact do this: note that we use the Wasserstein GAN rather than the original GAN in our experiments.\\n\\nThank you again for your comments. We are making the proposed changes to the manuscript, and will update you when they are available early next week.\"}", "{\"title\": \"Response to Blind Review #2\", \"comment\": \"Thank you for your thoughtful review. We will respond below to the points you raised below.\\n1) We will definitely proofread the text thoroughly before the next revision. We appreciate your thorough reading and proposed edits.\\n2) We acknowledge that our method bears resemblance to that proposed in Rey \\u201819. Both start from the same idea of sampling from manifold-supported latent spaces. However, the similarities end there. Crucial differences in the implementation of this idea result in drastically different algorithms. We mention this work briefly in Section 3. The key difference is that Rey \\u201819 requires explicit knowledge of the manifold, including the projection map, the scalar curvature, and the volume of the manifold. These must be exactly specified beforehand. Conversely, our method only requires the dataset and a suitable kernel to capture pairwise similarities in the data. Moreover, the latent prior is user-defined in Rey \\u201819. In other words, the user must have explicit knowledge of the topology of the data. Conversely, ours is learned automatically from the data by the diffusion map. For this reason we are able to intrinsically prevent prior mismatch.\\n3) Our posterior distribution is indeed Gaussian, but in the diffusion embedding space. 
Note that Euclidean distances in the diffusion embedding space approximate diffusion distances over the data manifold. Please see the original Diffusion Maps paper for more details, but in short, the diffusion distance is a measure of the similarity between two points based on the behavior of a diffusion process starting from either point. Therefore, our Gaussian-distributed posterior defines a random walk that intrinsically respects this measure. This is a crucial property of our method that is carried over by its close ties to manifold learning.\\n4) As stated in our response to R1, we plan to add an experimental verification of the bi-Lipschitz measure in the appendix for the experiments run. The closer this statistic is to 1, the closer we are to creating an isometry in the latent space. In general, it is currently difficult to objectively evaluate the quality of generative models. The Inception Score and Frechet Inception Distance (FID) both work only on ImageNet models. Moreover, note that we optimize a very different likelihood: the neighborhood likelihood p(x\\u2019|x). (We are learning the random walk neighborhood, rather than the entire generative model.) Thus there is no direct comparison to existing VAE models.\\n\\nWe are making the proposed changes to the manuscript, and will update you when they are available early next week.\"}", "{\"rating\": \"1: Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"Summary: In this paper, a diffusion-based VAE is proposed. The authors introduced a non-linear dimension reduction method, the diffusion map, into the standard VAE to encode neighborhood manifold information into the latent space before encoding. 
They proposed a lower-bound objective, similar to original VAE for content generation.\", \"pros\": \"1) A new VAE method is proposed to incorporate local manifold information into the latent space. 2) A sufficient condition to measure the consensus between latent and input data distributions. 3) Three empirical studies on the visualization of generated images.\", \"cons\": \"1) The writing could be significantly improved. There are a bunch of grammar errors and confusing notations. For example, \\u201cwith many default priors the posterior/likelihood pair q(z|x)/p(x|z) can be viewed as an approximate homeomorphism\\u201d -> \\u00a0\\u201cwith many default priors,\\u00a0 the posterior/likelihood pair q(z|x)/p(x|z) that can be viewed as an approximate homeomorphism\\u201d. \\u201cIn this paper address issues in variational inference and manifold learning\\u201d -> \\u201cIn this paper, we address issues in variational inference \\u2026.\\u201d. \\u201cfeedfoward pass\\u201d -> \\u201cfeedforward\\u201d \\u2026. In algorithm 1, \\u201cX is a random batch from X\\u201d. Conditional probability are mixed with joint probabilities, \\u201cp(y|x) = p(x,y)\\u201d. I suggest the authors do careful proofreading.\\n\\n2) The novelty in this paper is limited.\\u00a0 The diffusion VAE was proposed in Rey\\u201919 https://arxiv.org/abs/1901.08991 with a similar random walk procedure with transition kernels on the manifold of input. However, the authors neither did any comparison with it nor provided convincing advantages over them.\\u00a0\\n\\n3) The authors claimed the standard VAE has too many assumptions on the priors, likelihood, and posteriors. 
However, their framework also assumed Gaussian distribution on posteriors and likelihood, only eliminating the prior distribution, but at the expense of introducing an assumed kernel and eigendecomposition approximation.\\u00a0\\n\\n3) The experiments are very limited, containing only 3 visualization results of 3 image generation tasks. The Fig 2 is difficult to read and interpret. How about the log-likelihood estimates from your approach compared with others?\"}", "{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": [\"The paper proposes a new generative model for unsupervised learning, based on a diffusion random walk principle inspired by the manifold learning literature. The basic idea is to (probabilistically) map points to a latent space, perform a random walk in that space, and then map back to the original space again. Learning of the suitable maps is achieved by casting the problem in a variational inference framework.\", \"The paper is generally well-written, and clearly states out its goals and motivation. Sections 1 - 3 in particular give a nice overview of the broader context of the paper, and its aim of borrowing ideas from manifold learning and variational autoencoders. The particular aim on using concepts from manifold learning to avoid mode collapse - corresponding to the underlying homeomorphism losing its bijectivity - is in particular intriguing.\", \"The method itself is intuitive at a high level, although I did have some difficulty with Section 4:\", \"in 4.1, one begins by considering the local evidence. This requires drawing a point from U_x, which is defined to a be set. I presume this means one draws uniformly from this set?\", \"Eqn 3 does not apparently have the same structure as Eqn 2. 
In particular, the first term in (2) is a function of x, but for (3) it is not a function of x'. How does conditioning affect the ELBO?\", \"I was not sure how to interpret the statement that p\\u03b8(x'|x) \\u2248 \\u03c8^{-1}(q(z'|x)). Do you mean the distribution is strongly concentrated around this value? Note also an extra \\\"]\\\" in the latter.\", \"Eqn 5 should presumably be an equality? Also, it was not clear what the d in |.|_d^2 means, and why one does not use ||.||^2.\", \"At a higher level, given that x' ~ U_x originally, why do we now draw x' ~ p(x'|x)?\", \"Arriving at 4.2, it was not clear what \\\"The sampling procedure\\\" refers to, i.e., which of the steps in 4.1 it is seeking to specify or augment. It would be useful to clearly lay out the objective function that is being optimised, and how this section fits into that.\", \"In 4.3, it seemed as if the discussion of the neighbourhood reconstruction error would be better placed in 4.1 itself. It appears to be a justification of the already-derived Eqn 5.\", \"Algorithm 2, is there a need to introduce Z_t? It is a bit confusing that, e.g., Z_1 is first written to in iteration 1 by g(Z_0, \\u03b5), and then by \\u03c8(X_1) in the second iteration.\", \"The authors also claim a contribution to be the identification of a principled measure to identify mismatch between latent and data distributions. This \\\"bi-Lipschitz\\\" property is only introduced in Sec 5.1, and the discussion is not too approachable to someone unfamiliar with the area. In particular:\", \"it is not clear how precisely the discussion in this section relates to the VDAE algorithm described in the previous section.\", \"precisely what quantity we are to compute so as to verify this condition remains elusive. The abstract and introduction made me expect that the property is practically verifiable, but it was not clear from this section whether it is so.\", \"the conclusion or key takeaway of this subsection was unclear. 
I gather that Jones et al. established the existence of a neighbourhood wherein one can define a bi-Lipschitz mapping to R^d for suitable d. But how does this relate to latent and data space mismatch?\", \"The experiments show that the proposed method can generate meaningful samples for synthetic manifold data, as well as on the MNIST dataset. I would've preferred more discussion of the results in Sec 4.1. I also was hoping for a clearer illustration of mode-collapse problems on standard benchmarks for GANs, with comparison of results to, e.g., those of Wasserstein-GANs (beyond those in Sec 6.1) or other proposals that aim to mitigate mode collapse.\"], \"minor_comments\": [\"Fig 1, the text in the middle panel is hard to read in black and white.\", \"SpectralNet is mentioned a few times but never formally introduced.\", \"notationally, Section 4 is a little heavy. I would suggest considering to omit the subscript \\u03b8's in the function \\u03c8 and its inverse.\", \"when mentioning the \\\"reparametrisation trick\\\", please provide a citation.\", \"what is \\\\mathbb{X} in Theorem 2?\", \"\\\"completementary\\\" -> \\\"complementary\\\"\", \"\\\" rejection sampling Azadi et al. (2018); Turner et al. (2018).\\\" -> \\\" rejection sampling (Azadi et al. 2018; Turner et al. 2018).\\\"\", \"some form of Conclusion would be appropriate.\"]}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"** Summary\\nThe paper studies the problem of density estimation and learning accurate generative models. 
The authors start from the observation that this problem has been approached either using variational inference models, that scale very well but whose approximations may lead to degenerate results in practice, or diffusion maps, that scale poorly but are very effective in capturing the underlying data manifold. From here, the authors propose integrating the notion of random walk from diffusion maps into VAEs to avoid degenerate conditions. The proposed method is first defined in its generality, a practical implementation is presented, theoretical guarantees are provided, and empirical evidence of its effectiveness is reported.\\n\\n** Evaluation\", \"the_starting_point_of_the_paper_seems_very_solid\": \"diffusion maps are capturing the geometry of data very effectively and bringing some of those characteristics into the more scalable approach of VAE is an interesting approach. In the proposed method, this translates into introducing a learned diffusion map from manifold to an Euclidean space into the inference part. As a result, the lower bound optimized by the method now contains local information about the accuracy of the one-step random walk. How this can be translated into a practical implementation is also convincing. My main concern is that the overall method is now approximating many different elements in the original formulation, such as the diffusion map, its inverse, and the covariance of the random walk. Although theory seems to support that as these approximations become more accurate the overall result is reliable, in practice I wonder how they could combine and deteriorate the final result.\\n\\nThe empirical validation is relatively simple but in my opinion it provides enough insights about the advantages of the proposed method compared to VAEs and GANs. 
More solid and extensive evaluation is definitely needed in the future to have a more thorough comparison and a more careful assessment of the limits of the proposed method, but at this stage, I think the evaluation is sufficient.\"}" ] }
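The exchanges above repeatedly invoke the diffusion map and the fact that Euclidean distances in the diffusion embedding approximate diffusion distances on the data manifold. As a hedged illustration only (a minimal NumPy sketch with an invented function name and parameters, not the authors' implementation), the basic construction is:

```python
import numpy as np

def diffusion_map(X, eps=1.0, d=2, t=1):
    """Embed points X (n x D) into d diffusion coordinates.

    Sketch of the standard construction: Gaussian kernel, row
    normalization into a Markov matrix, then the top non-trivial
    right eigenvectors scaled by their eigenvalues.
    """
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-sq_dists / eps)            # pairwise similarity kernel
    P = K / K.sum(axis=1, keepdims=True)   # one-step random-walk matrix
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)         # eigenvalue 1 (constant vector) comes first
    vals, vecs = vals.real[order], vecs.real[:, order]
    # Diffusion coordinates: psi_k(x) = lambda_k^t * phi_k(x), for k = 1..d
    return vecs[:, 1:d + 1] * (vals[1:d + 1] ** t)
```

Euclidean distances between rows of the returned array then approximate t-step diffusion distances between the corresponding data points, which is the property the Gaussian posterior in the diffusion embedding space relies on. (Practical implementations usually prefer a symmetric normalization of the kernel for numerical stability; the plain row normalization above is kept for clarity.)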
S1g8K1BFwS
Probability Calibration for Knowledge Graph Embedding Models
[ "Pedro Tabacof", "Luca Costabello" ]
Knowledge graph embedding research has overlooked the problem of probability calibration. We show popular embedding models are indeed uncalibrated. That means probability estimates associated with predicted triples are unreliable. We present a novel method to calibrate a model when ground truth negatives are not available, which is the usual case in knowledge graphs. We propose to use Platt scaling and isotonic regression alongside our method. Experiments on three datasets with ground truth negatives show our contribution leads to well-calibrated models when compared to the gold standard of using negatives. We get significantly better results than the uncalibrated models from all calibration methods. We show isotonic regression offers the best performance overall, not without trade-offs. We also show that calibrated models reach state-of-the-art accuracy without the need to define relation-specific decision thresholds.
[ "knowledge graph embeddings", "probability calibration", "calibration", "graph representation learning", "knowledge graphs" ]
Accept (Poster)
https://openreview.net/pdf?id=S1g8K1BFwS
https://openreview.net/forum?id=S1g8K1BFwS
ICLR.cc/2020/Conference
2020
{ "note_id": [ "qEuv1CdS4S", "ryg1q3W3jr", "rJeJL3ZhsB", "HJxEVhZ3sB", "rygfk2bhoS", "HygS2oZnjr", "HkeLm0hpqH", "BkgKWAKc9S", "HJxwSZbY9r", "BJgfSVlTKB" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review", "official_review" ], "note_created": [ 1576798733821, 1573817479010, 1573817415213, 1573817388253, 1573817305581, 1573817261350, 1572879901890, 1572670977506, 1572569406885, 1571779641628 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1841/Authors" ], [ "ICLR.cc/2020/Conference/Paper1841/Authors" ], [ "ICLR.cc/2020/Conference/Paper1841/Authors" ], [ "ICLR.cc/2020/Conference/Paper1841/Authors" ], [ "ICLR.cc/2020/Conference/Paper1841/Authors" ], [ "ICLR.cc/2020/Conference/Paper1841/AnonReviewer6" ], [ "ICLR.cc/2020/Conference/Paper1841/AnonReviewer5" ], [ "ICLR.cc/2020/Conference/Paper1841/AnonReviewer4" ], [ "ICLR.cc/2020/Conference/Paper1841/AnonReviewer2" ] ], "structured_content_str": [ "{\"decision\": \"Accept (Poster)\", \"comment\": \"The paper proposes a novel method to calibrate a knowledge graph embedding method when ground truth negatives are not available. Essentially, the method relies on generating corrupted triples as negative examples to be used by known approaches (Platt scaling and isotonic regression).\\n\\nThis is claimed as the first approach of probability calibration for knowledge graph embedding models, which is considered to be very relevant for practitioners working on knowledge graph embedding (although this is a narrow audience). The paper does not propose a wholly novel method for probability calibration. 
Instead, the value is in the experimental insights provided.\\n\\nSome reviewers would have liked to see a more in-depth analysis, but reviewers appreciated the thoroughness of the results and the clear articulation of the findings and the fact that multiple datasets and models are studied. \\n\\nThere was an animated discussion about this paper, but the paper seems a useful contribution to the ICLR community and I would like to recommend acceptance.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Insightful reviews, thanks. Some general comments.\", \"comment\": \"Thank you for the comments. We have posted a personal reply to each reviewer. Let us also add a few general comments here:\\n\\n* [Usefulness and importance of calibration]: Calibration will not impact rank-order metrics for link prediction, such as MRR. The usefulness of calibration lies in being able to trust the output of knowledge graph embedding models and even quantify this trust.\", \"this_has_great_importance_in_practice_when_discovering_new_links_in_biological_networks\": \"better calibrated probabilities help human experts (biologists) validate discoveries and make a ML pipeline based on graph embeddings more trustworthy.\\nAs shown in the paper, an additional minor application is in the task of triple classification, where calibrated probabilities replace the need to learn arbitrary per-relation decision thresholds.\\n\\n* [Novelty] We believe the paper is novel because it proposes the first framework for calibrating knowledge graph embedding models, something that to the best of our knowledge has never systematically been done before. 
It is certainly true that we rely on well-established calibration techniques, but we would like to point out that i) this is the first paper that covers this topic as a first-class citizen, and ii) the proposal of calibrating without ground truth negatives is novel, as well as the use of sample weights to correct the distributional distortions created by the corruption generation procedure.\\n\\n\\n* We have made the following changes to the paper:\\n\\nA) We expanded Table 3 results (impact of loss functions) in Sec 5.1 to include the other two datasets (FB13 and YAGO39K): We thank the reviewers for suggesting this additional experiment. In fact, we notice no correlation between calibration results and MRR. In other words, configurations that lead to the best predictive power are not necessarily the best calibrated.\\n\\nB) We have clarified in 5.1 that better or worse calibration has no impact on ranking metrics such as MRR. We only evaluate the hypothesis of embedding quality being a common cause of both MRR and calibration quality.\\n\\nC) We added results and comments for two new experiments in appendix A.3, the impact of \\\\eta and the embedding size on calibration: results show that the embedding size has a higher impact than \\\\eta.\\n\\nD) We added the histograms that show the total count of instances for each bin used in the calibration plots. Figures and comments are in appendix A.2.\\n\\nE) We have edited parts of the preliminary and related work, expanding the related work and condensing the preliminaries, as suggested. For example, we now mention KG2E (Gaussian embeddings) and many more additional recent papers.\\n\\nF) We have fixed the typo on equation 6, thanks.\\n\\nG) In the appendix, we added a table with MRR, MR, Hits@10 for all the datasets used. 
As pointed out in (B) above, we do not claim any causal relation of calibration on such task metrics.\\n\\nH) Added per-relation decision thresholds in appendix A.5.\\n\\nI) Moved calibration-related preliminaries to appendix A.1.\\n\\nJ) All images are now black&white printout friendly.\"}", "{\"title\": \"Thanks for your comments\", \"comment\": \"Thank you for the review. We have improved the preliminary section as suggested.\"}", "{\"title\": \"Our paper novelty and other comments\", \"comment\": \"Major:\\n\\n1 [Novelty]. The paper is novel because it proposes the first framework for calibrating knowledge graph embedding models, something that to the best of our knowledge has never been done before, besides en passant comments in one workshop paper (Krompa\\u00df and Tresp, 2015). The proposal of calibrating without ground truth negatives is certainly a novel contribution as well, plus the use of sample weights to correct the distributional distortions created by the corruption generation procedure.\\n\\n2. [Shorten preliminaries]: We shortened the preliminaries as suggested, and moved calibration-related background to appendix A.1. Our main goal is first of all to raise awareness of the problem of calibration in the graph representation learning community, and we believe some preliminaries are useful for better engagement with the reader, given that our community has not addressed the issue until now.\\n\\n3. [insufficient literature review] As pointed out at the beginning of Section 2, a comprehensive survey is out of the scope of this work. Nevertheless, we added many extra references in Section 2.\", \"minor\": \"[Table2, analysis of results]: yes, experiments show that in general isotonic regression is better than Platt scaling. We are not sure we understand what reviewer #4 means by \\\"the results are again the conclusion. Is this because of the optimization issue?\\\". 
The message we want to send in Table2 is that our calibration works across different datasets and models. We always obtain better Brier score and log loss. We also show that our heuristic based on synthetic negatives always obtains better calibration scores, regardless of the dataset and model adopted. We hope to have answered your doubts.\"}", "{\"title\": \"Our reply to your suggestions\", \"comment\": \"- [Unsubstantiated claim: impact of calibration on link prediction] There is a small misunderstanding: we did not claim there is a causal effect between calibration and rank metrics such as MR, MRR or Hits@K (in fact, calibration does not change the rank-order of the link prediction results). Our initial submission limited to hint at a possible correlation between calibration scores and MRR (Table3 caption). \\nIn fact, as suggested by reviewer #4, we carried out additional experiments which are included in the latest revision attached, and our updated results in Table 3 suggest that there is no correlation between calibration results and MRR, i.e. \\\"better\\\" embeddings (i.e. embeddings that lead to higher link prediction MRR) are not necessarily easier to calibrate. \\nWe have clarified this in the main text (Sect 5.1 and Table3 caption). \\n\\nOn the other hand, calibration does affect triple classification. More precisely, it affects the way we choose the decision threshold \\\\tau. We show that with calibrated probabilities you only need one natural decision threshold \\\\tau=0.5 to maximize accuracy, while other methods require arbitrary per-relation thresholds.\\n\\n\\n- [no evidence of real-world use of calibration] As pointed out in the introduction, the usefulness of calibration lies on being able to trust the output of knowledge graph embedding models and even quantify this trust. 
This has great importance when discovering new links in biological networks: better calibrated probabilities help human experts (biologists) validate discoveries and make a ML pipeline based on graph embeddings more trustworthy. \\nMoreover, a minor application is triple classification, where calibrated probabilities replace per-relation thresholds (Section 5.1, Table 4).\\nWe improved the part in the introduction where we talk about real-world examples of why calibration is important.\\n\\n- [insufficient literature review] As pointed out at the beginning of Section 2, a comprehensive survey is out of the scope of this work. However, we have added extra references and enriched the prior art section.\\n\\n- [Main claim of the paper is calibration study]: We would like to point out again that this is the only goal of our work. We do not aim at showing causality between calibration and predictive power - as stated above. We have made this point clearer in the text (Sect 5.1). \\n\\n- [Add link prediction metrics results] In the appendix, we added a table with MRR, MR, Hits@10 for all the datasets used. As pointed out in A.1) above, we do not claim any causal relation of calibration on such task metrics.\", \"minor_comments\": \"[Typo in Equation 6] Fixed, thanks.\\n\\n[Reference to Nickel et al. 2016 in introduction] We only refer to this work to point the reader to the first paper to suggest the use of a sigmoid function to turn scores into probabilities.\\n\\n[Additional experiments, knowledge injection]: in fact, calibration does not affect the embeddings per se, as it consists in a downstream operation carried out after training. If Platt scaling is adopted, then new weights are learned, but these are separate from the embeddings, which are not touched at this stage. That means calibration will not have any impact on such an experiment.\"}", "{\"title\": \"Our reply to your suggestions\", \"comment\": \"Main points:\\n\\n1. 
We ran experiments to assess the impact of the embedding size and negatives/positive ratio \\\\eta. We added such additional results to appendix A.3. Results show that the embedding size has higher impact than the negative/positive ratio \\\\eta. We observe that calibrated and uncalibrated low-dimensional embeddings have worse Brier score. Results also show that any k>50 does not improve calibration anymore. The negative/positive ratio \\\\eta follows a similar pattern: choosing \\\\eta>10 does not have any effect on the calibration score.\\n\\n2. In appendix A.2 We added histograms that show the total count of instances for each bin used in the calibration plots. As expected, calibration considerably helps spreading out instances across bins, whereas in uncalibrated scenarios instances are squeezed in the first or last bins.\\n\\n3.1. Without sample weights, the base rate will be determined implicitly by the negatives/positive ratio eta used for calibration. For example, if we use eta=3 for calibration, this implies a positive base rate \\\\alpha=25%. As this base rate will most likely be wrong, calibration without sample weights leads to meaningless results. \\n\\n3.2. Sample weights allow the user to balance the positives and negatives in a way he or she sees fit for the problem, independent from choices such as the calibration eta (when using corruptions for the calibration). This is indeed similar to dealing with imbalanced datasets, especially in the case where the training dataset distribution does not match the expected test / deployment distribution.\\n\\n4.1. We added extra experiments with HolE, another model implemented in the library we used for the experiments. We can certainly add experiments on other models, but as such implementations do not belong to the same codebase, we fear we will most likely end up with unfair comparisons (as you know this is a well-known problem in this community). 
All in all, the set of models we used is quite diverse (translation-based, tensor-decomposition-based, with different scoring functions), and it is well representative of models actually used in the wild by practitioners, even outside the boundaries of our community.\\n\\n4.2. That is an interesting direction for future work. While KG2E proposes to use Gaussian distributed embeddings to account for the uncertainty, their model does not provide the probability of a triple being true, so KG2E would also benefit from the output calibration procedure we propose here. It is an open question how to design embedding methods that naturally lead to well-calibrated probabilities. \\n\\n5. Platt scaling was developed originally to calibrate SVMs (Platt et al., 1999), where the output is not a probability, but a continuous score, which is similar to the output scores in knowledge graph embeddings. We could not find a reference that asserts the need of normally distributed class probabilities. Perhaps it was meant that the logits need to be normally distributed, given the connection between Platt scaling and logistic regression. If so, that indeed points to a new direction to investigate the limits of our proposed framework.\\n\\n6. We have added two extra tables in 5.1 with additional results for FB13 and YAGO39k.\\n\\n7. We have tried two variations of corruption generation, all entities and per-batch entities, without any significant changes to the results. We have clarified this in the main text (Sec 4, footnote). We also pointed out that future experiments will experiment with techniques proposed by (Kotnis and Nastase 2017).\", \"extra_points\": \"1. Note that our goal was focusing on calibration, and not on achieving better predictive power. We have tried minimal random search on the hyperparameters without significant effect on triple classification results and we still could not reproduce the SOTA for FB13 and YAGO39K. 
Thus, we did not change the results of Table 4, besides adding the new HolE model. Some results in Table 4 incidentally achieve SOTA results, but we would rather leave the problem of achieving better predictive power aside.\\n\\n1.1 Table 4 caption states that \\\"for all calibration methods there is one single threshold \\\\tau=0.5\\\". We have added this to the header of Table 4 as well, for clarity. We added the per-relation decision thresholds in the appendix A.5. Note that the thresholds reported in A.5 are not probabilities, as they have been applied to the raw scores returned by the model-dependent scoring functions.\\n\\n2. Fixed, thanks.\\n\\n3. We made the figures more readable and printout friendly, as requested.\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #6\", \"review\": \"This is the first work that studies probability calibration for knowledge graph embedding models. In the case where ground-truth negatives are available the authors directly use off-the-shelf established calibration techniques (Platt scaling, isotonic regression). When ground-truth negatives are not available they propose to synthetically generate corrupted triples as negatives and use sample weights to guarantee that the frequencies adhere to the base rate.\\n\\nIn general the paper is well-written and easy to follow. Given that the paper's major contribution is experimental insight, and there are no major technical contributions, I would have liked to see a more in-depth analysis of how some of the key hyper-parameters influence the calibration of a model beyond the type of the loss, and beyond the correlation with embedding quality. 
Overall, I would be willing to increase the score if the authors perform a more comprehensive experimental analysis.\", \"suggestions_to_improve_the_paper\": \"1) I would expect that especially the negatives per positive ratio \\\\eta, and the dimensionality of the embeddings have a significant impact on model calibration. It would be valuable to experimentally quantify the impact of these key hyper-parameters.\\n2) It is currently difficult to judge how well-calibrated are the models from the reliability diagrams/calibration plots since the total counts are not shown (e.g. total number of instances with mean predicted value between 0.4 and 0.5). That is, it could be that deviation from identity is due to small sample effects, i.e. we are estimating the fraction of positives from a handful of instances. Showing the total counts for each bin will help the reader better understand the calibration of the models.\\n3) Several questions can be clarified regarding the sample weights:\\n3.1) How essential is the proposed weighting scheme? How do the calibration techniques perform when using synthetic negatives with uniform sample weights?\\n3.2) How does the proposed weighting scheme relate to the the general problem of calibrating models that have class imbalance?\\n4.1) Can we observe significant difference in terms of calibration between translational distance models and semantic matching models, i.e. using distance-based scoring functions vs. using similarity-based scoring functions. If so is there any reason for that? To help answer this question the authors could compare additional models from each group (beyond the three models used in the paper). \\n4.2) Are methods that represent entities as random variable to capture uncertainties (e.g. KG2E) better calibrated?\\n5) Platt scaling assumes that per-class probabilities are normally distributed, while isotonic regression makes no assumption about the input probabilities. 
Given that Platt scaling performs worse in the experiments it would be interesting to investigate whether this can be (partly) explained by a deviation from the above assumption.\\n6) Results reported in Table 3 are for WN11. It would be valuable to report similar results for the other datasets in the appendix.\\n7) it would be beneficial to explore the different procedures proposed in the literature for generating synthetic negatives and their impact on the calibration.\", \"suggestions_to_improve_the_paper_that_did_not_impact_the_score\": \"1) On the triple classification task in Table 4, there is a significant gap between the literature results and the reproduced results on FB13 and YAGO39K. Is there an explanation for this? Furthermore, it would be interesting to investigate how much do the per-relation \\\\tau_i's deviate from 0.5 when they are learned using both non-calibrated and calibrated probabilities.\\n2) In Eq. 6 after the second equality shouldn't there be \\\"N/(w- + N)\\\" instead of \\\"N/(w_{-} + PN)\\\"? Is the additional P a typo?\\n3) It would be nice to make the figures more readable (e.g. when printed in black and white) by using different markers for each line.\", \"edit\": \"Rating updated to 6 after rebuttal.\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #5\", \"review\": \"1. Summary\\nThe paper studies probability calibration for three different knowledge graph embedding methods, with a focus on TransE evaluated on the task of knowledge graph triple classification. It studies Brier and log loss performance of Platt scaling and isotonic regression probability calibration on WN11 for TransE and claims that better calibration yields better performance as measured by mean reciprocal rank. 
Calibration plots for other datasets are also included as evidence. Furthermore, evidence is presented that probability calibration can lead to better performance for the task of triple classification. The main contributions of the paper also include the adaption of sampling techniques introduced by Bordes et al. (2013) adapted for estimating negatives for probability calibrations.\\n\\n2. Decision (See the updated decision in the comment below)\\n\\nProbability calibration is a very relevant issue, particularly in industry and when combining knowledge graph embedding models as external data in other models. Thus I see this work as a valuable contribution to the literature. In particular, I like the analysis from multiple views: Calibration plots, calibration metrics, and model performance. However, there is currently not enough evidence in the paper to make recommendations or judgments about when researchers and practitioners may want to use probability calibration. I also believe the datasets and models are not well tied into the literature, for example, in 2018/2019 I can find 3 papers for triple classification and 9 papers for link prediction as triple/entity ranking and from the data, it is not clear how probability calibration affects the latter. In the current state of the work, I recommend rejecting this work.\\n\\n3. Further supporting arguments\\n\\nAs a researcher and practitioner in this area, I know very well that predictions of most knowledge graph embedding models usually live near the decision boundary so that there is little difference between probability or score of a true positive and false positive. Also talking with people in industry, I heard that word embedding models are currently not that useful practically because they make too many useless predictions. This shows me that probability calibration is an important topic and I see this study as an important contribution to the field which is often mindlessly following evaluation metrics. 
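The mechanics under review here (synthetic negatives from corruptions, plus sample weights that encode an assumed positive base rate, fed to Platt scaling or isotonic regression over the raw triple scores) can be sketched as follows. This is a hypothetical illustration using scikit-learn, not the authors' code; the function name and the exact weighting scheme are assumptions made for this example:

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression
from sklearn.linear_model import LogisticRegression

def calibrate_with_synthetic_negatives(pos_scores, neg_scores,
                                       base_rate=0.5, method="isotonic"):
    """Calibrate raw triple scores when only synthetic negatives exist.

    Per-class sample weights re-balance positives and negatives so the
    calibrated probabilities reflect the assumed positive base rate,
    independent of the negatives-per-positive ratio eta used to
    generate corruptions. Returns a callable: scores -> probabilities.
    """
    scores = np.concatenate([pos_scores, neg_scores])
    labels = np.concatenate([np.ones(len(pos_scores)), np.zeros(len(neg_scores))])
    # Weight each class so the weighted positive frequency equals base_rate
    w_pos = base_rate / len(pos_scores)
    w_neg = (1.0 - base_rate) / len(neg_scores)
    weights = np.where(labels == 1, w_pos, w_neg)
    if method == "isotonic":
        model = IsotonicRegression(out_of_bounds="clip")
        model.fit(scores, labels, sample_weight=weights)
        return model.predict
    # Platt scaling: logistic regression on the raw score
    lr = LogisticRegression()
    lr.fit(scores.reshape(-1, 1), labels, sample_weight=weights)
    return lambda s: lr.predict_proba(np.asarray(s).reshape(-1, 1))[:, 1]
```

With eta corruptions per positive and uniform weights, the implied positive base rate would be 1/(1+eta) (e.g. eta=3 gives 25%, as in the authors' reply); the per-class weights above decouple the calibrated probabilities from that sampling choice.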
\\n\\nHowever, from experience I also know that evaluation of knowledge graph embedding methods is not very reliable, that is, people often get widely varying results and replication is difficult. Thus it is difficult to trust results if they are not well tied into the literature and compared against multiple datasets and models. This work focuses on three models: TransE, DistMult, and ComplEx. DistMult and ComplEx have become models that are viewed as quite reliable to compare against. However, their performance is mostly studied on a learning-to-rank objective on datasets such as FB15k-237 and WN18RR. The authors report the Brier score for their synthetic calibration method on these datasets, but do not report any modeling results. Inclusion of results on these datasets would greatly improve this work. \\n\\nThe authors also currently focus on establishing that probability calibration improves the performance of the models. They claim that low Brier score or log loss is tied to good performance, but Pairwise and Multiclass-NLL losses achieve similar Brier/log loss performance while the MRR is double for Multiclass-NLL compared to the Pairwise loss. NLL and Multiclass NLL losses have similar MRR but very different Brier/log loss performance. As such, I do not think this claim is sufficiently substantiated. I do not believe it is necessary to establish that better probability calibration is correlated with better model performance. I view the careful study of probability calibration and its effects per se as more useful.\\nAs mentioned above, I also believe the results on WN11, FB13, and YAGO39k are not sufficient to evaluate the effect of probability calibration.\\n\\n4. Additional feedback\\n\\nI really like this work. I think adding more results would make this paper great, and I would be happy to change my acceptance decision.\\n\\nAs mentioned above, I believe including results on FB15k-237 and WN18RR would make the results easier to interpret. 
Please also add more results to the table (no need to rerun those experiments; take them from other papers). I really like the analysis of Brier score/log loss and MRR. I think if you extended this it would give very valuable insights into how probability calibration relates to performance.\\n\\nOne additional experiment, which I do not deem critical but which would improve your work further, would be to tie probability calibration into a more practical setting. A setting that is also very interesting to researchers is whether probability calibration would affect the results in tasks where you use knowledge graph embedding models as an external \\\"knowledge source\\\". I really like Kumar et al., 2019 [1], since their embedding model integrated into an entity linking model beats a strong BERT baseline. But I think a study of any task/model of your choice that integrates a knowledge embedding model would be a valuable addition to your work.\\n\\nAgain, as I mentioned above, I do not believe it is critical to show improved performance on these tasks; a study of the effects of probability calibration is valuable in its own right. You might want to slightly pivot into this direction if you have sufficient evidence to make judgments about the effects of probability calibration.\", \"further_small_details\": \"In the introduction, you make specific claims and justify them by citing a survey paper (Nickel et al., 2016). It would be easier for the reader to look up these claims in the source rather than in the survey paper. 
I believe there is a typo in your derivation in equation (6): the denominator of the second term should be just w-N + N or N(w- + 1).\\n\\n[1] Zero-shot Word Sense Disambiguation using Sense Definition Embeddings: https://www.aclweb.org/anthology/P19-1568/\"}", "{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #4\", \"review\": \"This paper focuses on the calibration of the knowledge graph embedding task with Platt scaling and isotonic regression. This paper is well-written, well-motivated and well-organized. However, my major concern is the novelty of this paper or the contribution.\", \"major_concerns\": \"1. This paper lacks novelty. In this paper, the authors only apply existing techniques (e.g. Platt Scaling, Isotonic Regression) to tackle the calibration issue, which makes for a minor contribution. I suggest that the authors provide their own method, specific to knowledge graph tasks, rather than leveraging off-the-shelf methods.\\n\\n2. The related work could be enhanced, while the preliminaries could be reduced. Actually, in the area of knowledge graphs or natural language processing, the preliminaries of this paper are a bit trivial.\", \"minor_concerns\": \"1. In Table 2, we can conclude that Iso will be better than Platt in general. However, in the cases of FB13 (ComplEx) and YAGO (TransE), the results go against this conclusion. Is this because of an optimization issue? 
I suggest the authors clearly state this in the experimental analysis.\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"In this paper, the authors deal with the calibration problem in graph embedding models. They used Platt scaling and isotonic regression in the situation when there are ground truth negatives. They also address the case when there are no ground truth negatives; in this situation, they proposed a calibration heuristic for synthetically generated negatives. Overall, the approach is not very innovative, but the problem they tackled is understudied. The presentation of the whole paper is ok, although it falls on the preliminary side. Since I did not identify any technical problems so far, I will vote for a weak acceptance, unless I observe more technical issues during the discussion.\"}"
] }
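A minimal sketch of the two calibration techniques debated in the reviews above, Platt scaling (a logistic fit on model scores) and isotonic regression (a monotone step-function fit), using scikit-learn on synthetic triple-classification scores. The data, split, and settings here are illustrative assumptions, not the paper's actual setup:

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import brier_score_loss

rng = np.random.default_rng(0)
# Synthetic uncalibrated scores: positive triples score higher on average.
y = rng.integers(0, 2, size=2000)
scores = np.where(y == 1, rng.normal(1.0, 1.0, 2000), rng.normal(-1.0, 1.0, 2000))

# Fit the calibrators on the first half, evaluate on the second half.
s_fit, y_fit = scores[:1000], y[:1000]
s_eval, y_eval = scores[1000:], y[1000:]

# Platt scaling: fit sigmoid(a * score + b) to the held-out labels.
platt = LogisticRegression().fit(s_fit.reshape(-1, 1), y_fit)
p_platt = platt.predict_proba(s_eval.reshape(-1, 1))[:, 1]

# Isotonic regression: fit a non-decreasing map from score to probability.
iso = IsotonicRegression(out_of_bounds="clip").fit(s_fit, y_fit)
p_iso = iso.predict(s_eval)

print("Brier (Platt):", brier_score_loss(y_eval, p_platt))
print("Brier (isotonic):", brier_score_loss(y_eval, p_iso))
```

Note that both calibrators are monotone in the raw score (Platt strictly, isotonic up to ties), so they leave the ranking of triples essentially unchanged; this is relevant to the discussion above about whether lower Brier/log loss should be expected to track ranking metrics such as MRR.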
BkgStySKPB
Contrastive Multiview Coding
[ "Yonglong Tian", "Dilip Krishnan", "Phillip Isola" ]
Humans view the world through many sensory channels, e.g., the long-wavelength light channel, viewed by the left eye, or the high-frequency vibrations channel, heard by the right ear. Each view is noisy and incomplete, but important factors, such as physics, geometry, and semantics, tend to be shared between all views (e.g., a "dog" can be seen, heard, and felt). We hypothesize that a powerful representation is one that models view-invariant factors. Based on this hypothesis, we investigate a contrastive coding scheme, in which a representation is learned that aims to maximize mutual information between different views but is otherwise compact. Our approach scales to any number of views, and is view-agnostic. The resulting learned representations perform above the state of the art for downstream tasks such as object classification, compared to formulations based on predictive learning or single view reconstruction, and improve as more views are added. On the Imagenet linear readoff benchmark, we achieve 68.4% top-1 accuracy.
[ "Representation Learning", "Unsupervised Learning", "Self-supervised Learning", "Multiview Learning" ]
Reject
https://openreview.net/pdf?id=BkgStySKPB
https://openreview.net/forum?id=BkgStySKPB
ICLR.cc/2020/Conference
2020
{ "note_id": [ "fmuqVMOM6", "SkePXK2PsS", "SkeYethwsB", "S1eC_7mmsS", "ByxjhmgRYB", "SkgXlgdptr", "r1lX1jDjtH" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798733792, 1573533982998, 1573533937332, 1573233526399, 1571845042685, 1571811307339, 1571678939453 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1840/Authors" ], [ "ICLR.cc/2020/Conference/Paper1840/Authors" ], [ "ICLR.cc/2020/Conference/Paper1840/Authors" ], [ "ICLR.cc/2020/Conference/Paper1840/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper1840/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper1840/AnonReviewer3" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"This paper proposes to use contrastive predictive coding for self-supervised learning. The proposed approach is shown empirically to be more effective than existing self-supervised learning algorithms. While the reviewers found the experimental results encouraging, there were some questions about the contribution as a whole, in particular the lack of theoretical justification.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Response to Reviewer 2\", \"comment\": \"Dear Reviewer 2,\\n\\nThank you very much for your review. We would like to explain more about our intuition here.\\n\\n\\u201cHowever, multi-views may provide redundancy information. What is the core information that affect the representation quality?\\u201d\", \"our_hypothesis_is_that_each_view_has_two_parts_of_information\": \"(a) nuisance factors, like sensor noise, that can not be predictive of other views, and (b) information shared with other views. 
Our learning objective (see Eq. 2 and Eq. 6) asks the learned latent representation to focus on part (b) such that the mutual information between views gets maximized.\\n\\nMoreover, for each view, the information bits in part (b) are not equal. Some information bits, such as the information of the object category (e.g., dog), are shared by many views, while some are shared by only a few. Therefore, if we contrast one single view with many other views, each bit of part (b) will be ordered by the number of times it is shared with those contrasted views. Our conjecture is that category-level semantics tend to be shared across many views, and thus are prioritized by our method. As a result, the learned representations convey sufficient semantic information.\\n\\nTherefore, we are leveraging the redundant information between different views/modalities to let them educate or teach each other. This mechanism has actually been explored in the field of developmental psychology. One such reference is [a], which argues that human infants utilize the redundancy between the senses in order to build up representations that are mutually predictive of each other. Indeed, if there is no redundant information across views, we cannot learn a good representation in such a way.\\n\\n[a] Linda Smith. The Development of Embodied Cognition: Six Lessons from Babies. 2005.\\n\\nPlease don\\u2019t hesitate to let us know if you have any further feedback. Thanks!\"}", "{\"title\": \"Response to Reviewer 1\", \"comment\": \"Dear Reviewer 1,\\n\\nThank you for the constructive suggestions.\\n\\nWe will take your advice into account as we revise the paper; in particular, we are working to make the introduction clearer and to state up front concretely what we do. We will upload a revised version once it\\u2019s available.\\n\\nPlease don\\u2019t hesitate to let us know if you have any additional comments. 
Thank you!\"}", "{\"title\": \"Response to Reviewer 3\", \"comment\": \"Dear Reviewer 3,\\n \\nThank you for your constructive review.\\n \\nWe agree that many methods for multiview learning have been developed since the 1990s. Here we are not claiming the first framework or theory for unsupervised multiview learning; rather, we want to empirically illustrate that multiview learning methods (instantiated as contrastive learning here) can beat recent state-of-the-art self-supervised methods, specifically in a large-scale setting, e.g., ImageNet. Our paper further contributes experiments that explicate various properties of multiview learning in the large-scale setting, such as the relative performance of contrastive versus predictive objectives, and the relationship between the mutual information between views and the quality of representations learned from these views.\\n \\n1. We agree that the concept of conditional independence might explain some of the empirical results. We want to clarify that during our unsupervised training stage, we did not condition on labels, which self-supervised methods assume not to be available. \\n \\n\\u201cThis concept could be used to explain some empirical findings in this paper. Since it is expected, there is even no need in conducting experiments\\u201d. \\nThe connection is not that clear to us at this point. Would you please point to a reference such that we can see the connection?\\n \\n\\u201cMeanwhile, self-supervised learning is the case when the input data to the designed learning system is also the target of the system.\\u201d \\nWe want to clarify that our target is not to predict the input (the predictive way); rather, it is instantiated in a contrastive way, which yields significantly better results than the predictive approach, as shown in the paper.\\n \\n2. 
Thank you very much for pointing us to [1], which directly relates to our work, and we are more than happy to add a citation to it (note we did point to another of De Sa\\u2019s papers which also shares similar ideas). We agree that the high-level idea of leveraging co-occurrence is similar, but the learning objectives and detailed instantiation are very different. The update rule of [1], as shown in Figure 6 of [1], is different from our current SGD-based update rule, and it seems difficult to implement in modern deep networks with large-scale data. Indeed, different learning objectives can make a big difference in performance. For example, another previous work [a] did cross-view prediction, while we do cross-view contrastive learning. Our objective leads to a significant improvement over cross-view prediction (e.g., our objective achieves 42.6% accuracy on ImageNet and 86.88% accuracy on STL-10, while cross-view prediction gives 35.4% and 72.35% accuracies, respectively).\\n\\n[a] Split-brain autoencoders: Unsupervised learning by cross-channel prediction. CVPR 2017\\n \\n\\u201cThe method itself has already been proposed many years ago as mentioned in the related work section in the paper, and the generalisation was also described in prior work.\\u201d \\nWould you please point us to older methods that are identical or almost identical to ours? As discussed in our paper, our method is indeed an extension of 2018\\u2019s Contrastive Predictive Coding to the multiview setting, but we are not aware of earlier work that uses the same specific formulation (we also looked at the workshops [2] and [3] pointed out by you).\\n \\n3. We agree that CCA has a solid theoretical justification, but this does not imply that mutual information maximization is not well justified. 
Our conjecture is that capturing mutual information between the latent representations of two views brings about more powerful representations than only capturing their linear correlations as CCA does. Thank you for pointing out DGCCA, which we indeed overlooked and will cite in the revision. We performed an experiment testing the transfer performance of DGCCA on STL-10. We find that DGCCA only yields 22.8% accuracy, which is significantly lower than the 86% accuracy achieved by our method. One reason for the poor performance of DGCCA in this experiment may be that we were only able to use a small batch size (128 images) due to memory constraints. In the original DGCCA paper, a larger batch size (>= 2000) was used but only demonstrated on text datasets, which are less memory intensive than image datasets. We feel it would be an interesting direction to adapt DGCCA to be effective on large-scale image datasets, but consider this to be non-trivial and out of scope for our current paper.\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This is an interesting paper on an important topic; however, its readability could be dramatically improved, especially for the reader less familiar with the problem.\\n\\nIn order to make the paper more accessible, the authors should reorganize the introduction by breaking it down into two parts:\\n1) a more traditional introduction \\n- one intuitive paragraph about multi-view coding\\n- one intuitive paragraph with an illustrative example of how the proposed approach will help solve a problem; at the same intuitive level, compare-and-contrast it with existing approaches \\n- one intuitive, in-detail paragraph on how the proposed approach works\\n- one paragraph summarizing the main findings/results 
\\n2) a second, new section that will turn the current Figures 1 & 2 into a complete description of an illustrative example (the current, detailed \\\"captions\\\" are a good start, but they should be fleshed out into a full, detailed section of the paper)\"}", "{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposes a new self-supervised learning method by utilizing the contrastive predictive coding technique. The proposed algorithm is more effective than existing self-supervised learning algorithms. The presented results are encouraging.\\n1. In section 3.2, the authors show that a large number of views would improve the representation quality. However, multiple views may provide redundant information. What is the core information that affects the representation quality?\\n\\nIn fact, I am not an expert on self-supervised learning and contrastive predictive coding, so my reviewer confidence is low.\"}", "{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper presented a multi-view learning method that is based on negative sampling in contrastive learning. The core idea is to set an anchor view, then sample positive and negative data points from the other view, and maximise the agreement between positive pairs in learning from two views. When more than two views are presented, the learning objective is a sum over all possible combinations of two views. The performance of the proposed model is good, and the ablation study is interesting.\", \"comments\": \"1. 
The core concept, or at least one of the core concepts, in multi-view learning is conditional independence.\\n\\nNormally, the underlying assumption in multi-view learning is that, given the class label, the samples from multiple views are conditionally independent of each other. Therefore, the goal is to learn distinctive representations from different data sources/disjoint populations, so that after learning, the ensemble of them is able to capture a set of diverse aspects of the data. A \\\"side-effect\\\" of learning from multiple views is that individual views indeed get improved by learning from others. Meanwhile, self-supervised learning is the case when the input data to the designed learning system is also the target of the system. \\n\\nThe paper presented an idea for self-supervised learning from multiple views, which is not exactly the same, but still in the same regime. This concept could be used to explain some empirical findings in this paper. Since it is expected, there is even no need to conduct experiments. \\n\\n2. My main concern about this paper is the novelty; however, the empirical results are strong.\\n\\nThe paper mainly presented a simple yet effective method for self-supervised learning from two views, and the generalisation is a sum over all possible combinations of two views. The method itself has already been proposed many years ago, as mentioned in the related work section of the paper, and the generalisation was also described in prior work, which makes me doubt the novelty of the paper. \\n\\nThe earliest work to the best of my knowledge is [1], and later on there were a couple of workshops [2,3] on multi-view learning which largely settled the field of learning from multiple views from neural networks', kernels', and Bayesian perspectives. Many things mentioned in this paper had already been discovered at that time. \\n\\n3. 
The theoretical justification is not as strong as that of the generalised CCA.\\n\\nCCA has been applied in the fields of multi-view learning and self-supervised learning for a long time, and it was initially proposed for comparing the correlation between two sets of samples of two random variables. A successful generalisation is the generalised CCA, which is capable of learning from multiple views. The formula of GCCA as referred to in [4] is simple and elegant, and the extension to using neural networks is also straightforward. Since people have a relatively clear understanding of CCA itself, the generalised version or the kernel version of it is also well understood. \\n\\nA nice theoretical understanding of contrastive unsupervised learning is provided in [5], which I recommend the authors study.\\n\\n[1] de Sa, Virginia R. \\\"Learning classification with unlabeled data.\\\" Advances in neural information processing systems. 1994.\\n[2] ICML Workshop, \\\"Learning With Multiple Views\\\". 2005\\n[3] NIPS workshop, \\\"Learning from multiple sources\\\". 2008\\n[4] Benton, Adrian, et al. \\\"Deep generalized canonical correlation analysis.\\\" ICLR workshop 2017.\\n[5] Arora, Sanjeev, et al. \\\"A theoretical analysis of contrastive unsupervised representation learning.\\\" ICML 2019.\"}"
] }
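The contrastive objective discussed throughout this forum can be illustrated with a small NumPy sketch of an InfoNCE-style loss over a batch of paired view embeddings. The batch size, embedding dimension, and temperature below are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

def contrastive_loss(z1, z2, temperature=0.1):
    """InfoNCE-style loss for paired view embeddings.

    z1, z2: (batch, dim) L2-normalized embeddings of two views.
    Row i of z1 and row i of z2 come from the same underlying sample,
    so the positive for row i of z1 is row i of z2, and the remaining
    rows of z2 act as negatives.
    """
    logits = z1 @ z2.T / temperature                     # (batch, batch) similarities
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Cross-entropy of picking the matching column for each row.
    return -np.mean(np.diag(log_prob))

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
z /= np.linalg.norm(z, axis=1, keepdims=True)

# Matching views give a small loss; a mismatched pairing gives a large one.
print(contrastive_loss(z, z), contrastive_loss(z, np.roll(z, 1, axis=0)))
```

With more than two views, the objective summarized by Reviewer #3 corresponds to summing such pairwise terms over all possible combinations of two views.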
rkgNKkHtvB
Reformer: The Efficient Transformer
[ "Nikita Kitaev", "Lukasz Kaiser", "Anselm Levskaya" ]
Large Transformer models routinely achieve state-of-the-art results on a number of tasks but training these models can be prohibitively costly, especially on long sequences. We introduce two techniques to improve the efficiency of Transformers. For one, we replace dot-product attention by one that uses locality-sensitive hashing, changing its complexity from O($L^2$) to O($L \log L$), where $L$ is the length of the sequence. Furthermore, we use reversible residual layers instead of the standard residuals, which allows storing activations only once in the training process instead of N times, where N is the number of layers. The resulting model, the Reformer, performs on par with Transformer models while being much more memory-efficient and much faster on long sequences.
[ "attention", "locality sensitive hashing", "reversible layers" ]
Accept (Talk)
https://openreview.net/pdf?id=rkgNKkHtvB
https://openreview.net/forum?id=rkgNKkHtvB
ICLR.cc/2020/Conference
2020
{ "note_id": [ "7Gj4cisk2F", "HqIOW_IoZF5", "_EUfTTeFYAF", "-mb4fn7hTQA", "-RDsy4G4oNp", "C-JCWiDpu", "UBmD0l1o4L", "D2TghbVzY8", "SkxpfcEjjr", "H1g3oF4sjS", "SJxEEtVosB", "Syg6AbTp5H", "H1e_gXhRYS", "r1lXpbp3YS", "S1eMPDpGuS", "HJlyuh5zuB", "rkekrgvMOB", "SygGB-bzur", "HylsnfKbOB", "HyxXfav-_H", "Byx13YvZOB", "Hygh8BWbOS", "r1xWXg6ldH" ], "note_type": [ "official_comment", "comment", "official_comment", "comment", "comment", "comment", "comment", "decision", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review", "official_comment", "comment", "official_comment", "comment", "comment", "official_comment", "comment", "official_comment", "comment" ], "note_created": [ 1654626324467, 1654592024532, 1654535728511, 1654527958932, 1590458701256, 1582878576937, 1578558234621, 1576798733762, 1573763604999, 1573763492209, 1573763371910, 1572880853106, 1571894000010, 1571766714978, 1570064218166, 1570053223342, 1570037814787, 1570013498121, 1569981106870, 1569975563305, 1569974694520, 1569949011906, 1569931288998 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Paper1838/Authors" ], [ "~Alexander_Mathiasen2" ], [ "ICLR.cc/2020/Conference/Paper1838/Authors" ], [ "~Alexander_Mathiasen2" ], [ "~James_Tian1" ], [ "~Junhao_Wang3" ], [ "~Benjamin_Börschinger1" ], [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1838/Authors" ], [ "ICLR.cc/2020/Conference/Paper1838/Authors" ], [ "ICLR.cc/2020/Conference/Paper1838/Authors" ], [ "ICLR.cc/2020/Conference/Paper1838/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper1838/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper1838/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper1838/Authors" ], [ "~Aurko_Roy1" ], [ "ICLR.cc/2020/Conference/Paper1838/Authors" ], [ "~Hyunjik_Kim1" ], [ "~Aurko_Roy1" ], [ "ICLR.cc/2020/Conference/Paper1838/Authors" ], [ "~Aurko_Roy1" ], [ "ICLR.cc/2020/Conference/Paper1838/Authors" ], [ "~Jack_William_Rae1" ] ], 
"structured_content_str": [ "{\"title\": \"Clarification\", \"comment\": \"I believe we tried both concatenation (which increases the final number of weights) and adding, and we could not see a difference. I do not have access to these runs any more though - it's just from my memory at this point, so please take it with a grain of salt.\"}", "{\"title\": \"Final clarification\", \"comment\": \"Thanks for clarifying. How do you reduce the dimension in the end? y2? sum(y1,y2)? Did you compare different approaches?\"}", "{\"title\": \"Clarifying d_model\", \"comment\": \"In reversible layers, we do not directly double d_model. Instead, at the beginning of the reversible block, we duplicate the d_model-sized vector, so x becomes [x, x] concatenated. This happens after the embedding, and similarly we can reduce dimensionality before the final projection - so the parameter count is not affected.\"}", "{\"title\": \"Does the Reformer have more parameters than the baseline?\", \"comment\": \"From paper:\", \"page_2\": \"\\\".. show that it performs the same as the normal Transformer when using the same number of parameters; we achieve this by having both x1 and x2 have size d_model. \\\"\", \"page_8\": \"\\\" The two models have identical parameter counts, and the learning curves likewise appear to be nearly the same. \\\"\\n\\nI see how the parameters of Attention and MLP do not increase. But what about\\n(1) the embedding layer and\\n(2) the final projection layer?\\n\\nQuestion 0. Why do the parameters of the initial embedding layer not increase if we double d_model?\\n\\nApologies for any misunderstanding.\"}", "{\"title\": \"Sharing QK for sequence-to-sequence tasks\", \"comment\": \"Hi,\\n\\nVery interesting paper! 
Just one question.\\n\\nGiven that the transformer was originally a sequence-to-sequence model consisting of both an encoder and a decoder, where the decoded message need not be the same length as the encoded message, how does the reformer still work in that circumstance? Wouldn't the fundamental difference between queries (which come from the decoder) and keys (which come from the encoder) make sharing the QK space impossible there? Put another way, doesn't sharing QK space limit you to self-attention? \\n\\nSometimes there is some ambiguity in terminology where e.g. BERT uses only the encoder blocks of a transformer and still calls itself a transformer. Is the reformer only reproducing the encoder blocks?\\n\\nThanks!\"}", "{\"title\": \"Will a pre-trained Reformer on a large English corpus be released?\", \"comment\": \"Will a pre-trained Reformer on a large English corpus (like BERT) be released? And if so, what is the estimated timeline?\"}", "{\"title\": \"Question about equation (2)\", \"comment\": \"Is it possible that equation (2) should read\\n\\n o_{i} = \\\\sum_{j\\\\in P_{i}} \\\\exp(q_{i} \\\\cdot k_{j} - z(i, P_{i})) v_{j}\\n\\n i.e., that k_{i} should be k_{j}? Same question for the repeated occurrences of equation (2) in the paper.\"}", "{\"decision\": \"Accept (Talk)\", \"comment\": \"Transformer models have proven to be quite successful when applied to a variety of ML tasks such as NLP. However, the computational and memory requirements can at times be prohibitive, such as when dealing with long sequences. This paper proposes locality-sensitive hashing to reduce the sequence-length complexity, as well as reversible residual layers to reduce storage requirements. 
Experimental results confirm that the performance of Transformer models can be preserved even with these new efficiencies in place, and hence, this paper will likely have significant impact within the community.\\n\\nSome relatively minor points notwithstanding, all reviewers voted for acceptance, which is my recommendation as well. Note that this paper was also vetted by several detailed external commenters. In all cases the authors provided reasonable feedback, and the final revision of the work will surely be even stronger.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Thank you for your comments and questions\", \"comment\": \"Thank you for your feedback and questions regarding the paper, which we address one-by-one below. We\\u2019ve updated the technical sections of the paper to increase clarity; please let us know if there are still any sections that you find difficult to parse.\\n\\n1. How is causal masking implemented?\\n\\nTo mask out attention to the future, we associate each query/key vector with a position index, where the position indices are then sorted using the same permutation as the QK sort. Position indices are compared for each query-key dot product, and the attention probability is masked to zero if the query comes before the key.\\n\\n2. Attention-in-place\\n\\nThank you for pointing out that this was unclear. We have updated the paper to elaborate on this point.\\n\\nIn a typical Transformer implementation, positions can attend to themselves. There is a dot product between the query vector at position i and the key vector at position i; if this dot product is high then the value vector at position i will contribute to the output of the attention layer. This behavior isn\\u2019t very useful because local information is already propagated through the residual connections, but standard attention can learn to drive this attention probability to zero by making q_i and k_i orthogonal. 
Shared-QK attention, on the other hand, can\\u2019t reduce this weight because the query and the key are the same vector. To address this issue, we don\\u2019t allow attention-in-place for the Reformer.\\n\\n3. Backprop through LSH attention and sorting.\\n\\nWe use sorting as a mechanism for allowing items that map to the same hash bucket to attend to each other. Similar items get mapped to the same hash bucket with high probability, which allows similar item pairs to participate in both the forward and backward passes. Each hash bucket may contain a certain number of unrelated items, in which case there will be a gradient signal that either up-weights or down-weights attention to these items.\\n\\nWe don\\u2019t differentiate through the hash bucket assignment procedure, or the choice of what order to sort the items into. Rather, these operations take query/key vectors as input, where LSH maps nearby vectors to the same bucket with high probability. Therefore, the sorting re-adjusts any time parameter updates cause relevant vector pairs to have higher dot products, and \\u201cunhelpful\\u201d vector pairs to have lower dot products.\\n\\n4. Additional tasks.\\n\\nThank you for your recommendation that we evaluate on other tasks. Prompted by your recommendation, we started working on applying the Reformer to machine translation (we didn\\u2019t do that before since sequences are short in translation datasets, so it was not a prime target for Reformer). Thus far we have trained a decoder-only Reformer on concatenated English-then-German sentence pairs, and we do not observe any difference compared to a regular Transformer LM. 
In the final version of our paper, we\\u2019ll report BLEU numbers and comparisons for English-German translation -- the current runs make us believe that they will be the same as for Transformer.\"}", "{\"title\": \"Thank you for your thoughtful feedback\", \"comment\": \"We thank the reviewer for thoughtful feedback on our paper. We have posted an update to address some of the comments, which we detail below.\\n\\n1. Effect of reversible layers\\n\\nWe updated the figures in the paper to cover longer training durations. As expected, reversible layers perform the same as regular Transformer layers on enwik8.\\n\\n2. Sharing QK\\n\\nThis operation is needed so that we can batch LSH attention on current hardware. Absent any hardware requirements, we could do unshared LSH attention as illustrated in Figure 2(b). Each hash bucket in the unshared condition may contain a different number of queries, a different number of keys, and moreover there is no relationship between the number of queries and the number of keys. Computing one bucket at a time would be too slow, and it\\u2019s unclear how to batch buckets of highly variable sizes. With shared-QK, as in Figure 2(c-d), we can batch effectively because the entries we want to calculate cluster near the main diagonal (after sorting). Let us stress though that this is purely a speed optimization which we did due to the realities of current hardware architectures. It works, but one could indeed hope that one day it will not be necessary.\\n\\n3. Enwik8 results\\n\\nWe\\u2019re happy to report that, with further tuning, our 12-layer model reaches 1.05 bits/dim on enwik8. Adjusting optimizer settings and dropout played a big role in improving perplexity for this task.\\n\\n3. Time per iterations\\n\\nThank you for your suggestion. We\\u2019ve updated the right part of Figure 5 to sweep over a larger range of hash numbers and sequence lengths. 
Although full attention is fast for short sequences, its O(n^2) scaling makes it rather slow at long sequence lengths, even when compared to the 8-hash LSH variant.\\n\\n4. Hyperparameters\\n\\nThe random matrix R has i.i.d. unit Gaussian entries, following Andoni et al. (https://arxiv.org/pdf/1509.02897.pdf; page 4). The number of hash buckets was chosen such that each bucket would have 64 entries on average. Making the hash buckets smaller hurts accuracy, whereas making them larger doesn\\u2019t seem to do much other than making the model slower.\\n\\n5. Variance between runs.\\n\\nThank you for pointing this out. For now, we can report that the variance between runs, at convergence, is minimal: we see no variance when rounding to two decimal points.\"}", "{\"title\": \"Thank you for your feedback\", \"comment\": \"We thank the reviewer for feedback and comments on our paper. We have updated the paper to address some concerns and we\\u2019re working on preparing additional experiments and results to more thoroughly characterize the behavior of the proposed method, which will address all other questions.\\n\\nWe posted a revised version of the paper with updated results figures. In particular, we\\u2019ve completed the curves and updated our illustration of the wall clock time used by different attention methods. This makes it clearer at what length the LSH attention starts saving time compared to full attention and at which number of hashes (Figure 5).\", \"as_for_the_question_on_metrics\": \"we will expand the results to include machine translation in the final version (we didn\\u2019t do this initially since sequences are quite short in translation datasets and as such don\\u2019t make for ideal targets for the Reformer). We did not get the complete results yet, but we started training a Reformer language model on concatenated English-then-German sentence pairs and we do not observe any major difference compared to a regular Transformer LM. 
We are also putting together and tuning a more conventional encoder-decoder approach that uses the Reformer architecture and we will include a comparison of BLEU between such a Reformer and the Transformer in the final version of our paper.\\n\\nWe are also happy to report that, with further tuning, a 12-layer Reformer model can achieve 1.05 bits/dim on the enwik8 test set. In terms of other metrics, this corresponds to 77.8% byte-level accuracy.\"}", "{\"rating\": \"8: Accept\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"This manuscript presents a number of algorithmic techniques to reduce the computational and space complexity of Transformer, a powerful and very popular deep learning model for natural language processing (NLP). Although Transformer has revolutionized the field of NLP, many small groups cannot make full use of it due to a lack of necessary computational resources. As such, it is very important to improve the space and computational complexity of this popular deep model. The techniques presented in this manuscript seem to be very reasonable and the experimental results also indicate that they are effective. My major concern is that the authors shall present more detailed experimental results. In addition to bits per dim, it will also be better if the authors can evaluate the performance in terms of other metrics.\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": [\"This paper presents a method to make Transformer models more efficient in time and memory. 
The proposed approach consists of three main operations:\", \"Using reversible layers (inspired by RevNets) in order to avoid the need to store the activations of all layers to be reused for back propagation;\", \"Using locality sensitive hashing to approximate the costly softmax(QK^T) computation in the full dot-product attention;\", \"Chunking the feed-forward layers computations to reduce their cost.\", \"This approach is first applied to a toy dataset to analyze its complexity, then tested on the enwik8 language modelling task and the imagenet-64 image generation task for ablation study and performance assessment.\", \"The problem approached by the paper is interesting and the proposed approach is novel to the best of my knowledge. The paper is well structured and clearly written apart from some small typos (see minor comments below).\", \"While the analysis of complexity is sound and convincing, and the fact of being able to train larger Reformers is very interesting, I have some questions and concerns about the approach and experiments.\", \"Effect of reversible layers: It is clear for the experiment of Imagenet64 that the effect is negligible, but the experiment on enwik8 in the paper seems unfinished. Did the authors manage to finish the training, and does it confirm the observation?\", \"Sharing QK: I am a bit confused about the effect and usefulness of this operation. Can the authors comment on why it is needed for LSH attention? It seems to me that the same operations can be achieved with different Q and K. Indeed, doing so, the authors slightly reduce the capacity of the model. The observed non-significantly decreased performance can be an effect of using only 3 layers. This may explain why the results reported for larger models in figure 5 show higher bpc than similar size state of the art models.\", \"Time per iterations: Can the authors report the time per iteration for the larger hash rounds (8 and 16) that are closer to full attention? 
For the highest reported number (4), from a quick and not precise look at figure 4, it seems that the performance achieved by the proposed method after 140k iterations is achieved by the full attention after ~40k iterations. The gain in time per iteration for this particular number of hash rounds can be lost by the loss in performance.\", \"Can the authors detail how they chose the hyperparameters of their approach? e.g. the size of hash buckets, the distribution used to generate the random matrix R ..\", \"The reported results can be made stronger by reporting averages/error bars across several trials to show consistency.\"], \"minor\": \"typos:\\nDimension of matrix R [d_k, d_b/2] -> [d_k, b/2]\", \"last_paragraph_of_page_6\": \"state of these art -> state of the art\\n\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\", \"after_rebuttal\": \"I have read the authors' answer, and found that they addressed my concerns. I'm therefore increasing my score.\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper presents an attempt to reduce the memory complexity of Transformers. The authors call their model the Reformer. It presents an LSH based self-attention mechanism, along with a reversible adaptation of Transformers. The locality sensitive hashing scheme reduces complexity from L^2 to L, which is pretty neat.\\n\\nTackling the quadratic complexity of self-attention is indeed an important and nice direction. I think the LSH based attention is quite novel and is a natural solution to reducing the complexity of the self-attention module. However, I think the technical description could be improved as the current form is quite confusing and difficult to parse.\\n\\nThe experiments are a little on the weaker side. 
The authors presented results on imagenet, enwik8 and a synthetic task. I am mainly concerned about whether the Reformer works on tasks such as machine translation or other NLP tasks. The paper does not present much evidence that the effectiveness of LSH is broad and versatile.\\n\\nMy current vote is a weak accept, based on some preliminary understanding and the general novelty of the idea. \\n\\nI do have some questions/issues/comments:\\n\\n1) Given that there is some form of QK sorting, how is it possible to mask the future? Is this because tokens are sorted within buckets?\\n2) Can the authors clarify what \\\"Causal masking on the Transformer is typically implemented to allow a position i to attend to itself.\\\" means?\\n3) I'm a little confused about how the sorting is being done. Can this be done in an end-to-end differentiable manner?\\n4) Can the authors present some results on other tasks? While neat, I think other tasks (e.g., MT or QA) can be investigated to further ascertain that the LSH attention works well. Current experimental results are not too convincing.\"}", "{\"comment\": \"Thank you very much for your interest! Our choice of F and G is to maintain parity with the original Transformer, which allows us to verify that reversibility doesn't degrade model quality. There's a very large design space of alternative ratios between self-attention and feed-forward layers (as well as their relative order) that we didn't explore for this work.\\n\\nRegarding your question of F=Attention, G=Attention, it sounds like you're suggesting removing feed-forward layers from the model and replacing them with self-attention only. In our experience feed-forward layers are generally faster than attention, and making them wider is the most computationally-efficient way of increasing parameter count. 
LSH attention closes the asymptotic complexity gap between the two layer types, but feed-forward layers still have an edge in terms of constant factors.\", \"title\": \"Choice of F and G for reversible layers\"}", "{\"comment\": \"Thanks for the update, makes sense!\", \"title\": \"Thanks\"}", "{\"comment\": \"Thank you very much for your calculation!\\n\\nThe term you mentioned, O(l*n_c + l^2/n_c), is correct for the basic implementation we chose for practical reasons, to minimize constants rather than the asymptotic time. One of the most common hashes used throughout LSH though is based on plane cutting: after a product with N vectors the hash is N bits, the sign of these products. That yields n_c = 2^N hash buckets in time O(N) = O(log(n_c)). The term then is O(l*log(n_c) + l^2/n_c), which minimizes to O(l*log(l)).\\n\\nSo the asymptotic complexity of the method we present is O(l*log(l)) rather than O(l*sqrt(l)). We believe that plane cutting hashes will yield the same experimental performance, but we will add the option to our implementation and verify that.\", \"title\": \"Update on asymptotic complexity vs implementation\"}", "{\"comment\": \"Hi, I have a question about the choice of F and G for reversible attention in Equations (7-9). So you choose F=Attention and G=FeedForward. Is this just to match the number of parameters with the original Transformer? Have you also tried F=Attention, G=Attention? If so, how does it compare? If not, do you expect this to be more expressive, perhaps at the cost of more parameters?\", \"title\": \"Choice of F and G for reversible attention?\"}", "{\"comment\": \"I see, thanks for the clarification! An alternative analysis could be O(l*n_c) (for computing hash via random projection) and O(l*l_c)=O(l^2/n_c) (for attention in the chunks), with total cost O(l*n_c + l^2/n_c). 
This expression could be minimized by choosing n_c = sqrt(l), and you would get total complexity O(l^{1.5}) as in Child et al [1]\\n\\nCool idea though!\\n\\n[1] https://arxiv.org/abs/1904.10509\", \"title\": \"Thanks for the clarification\"}", "{\"comment\": \"Thank you very much for your interest!\\n\\nIf n_c stands for the number of hash buckets, then (as we explain in the paper) we will split the sequence into chunks of length l_c = 2l/n_c (since we use chunks twice the expected bucket size). In most experiments we picked n_c so that l_c = 64. Note that we attend to the current and previous chunk, so with l_c = 64 we perform full 64x128 attentions, and there are l/64 of them. So the cost of that part is (2l/n_c)^2, but since we pick n_c so that l_c is constant, it can also be denoted simply by O(l), where the main constant factor is the 64x128 matrix multiplication and, more importantly, memory access.\\n\\nThe above calculation, as you note, does indeed *not* include the computation of the hash id. The hash id is computed by multiplying activations of length l by a random matrix and picking the argmax. This is again of the order O(l) with the constant n_c, as you say. In theory, if n_c were very large, this could grow prohibitively. In that case one could use projection hashes -- e.g., multiply by 2 different matrices into size sqrt(n_c) and use the 2 hashes as higher and lower bits of the hash. In practice though, this matrix multiplication is quite cheap -- even up to n_c=1024 the cost of this matmul is negligible compared to the cost of memory access during hashing -- that's why we did not emphasize it in the analysis.\", \"title\": \"Cost clarifications\"}", "{\"comment\": \"I am a bit confused by the complexity of LSH attention in Table 2. In particular, if the number of buckets is denoted by n_c then the total cost would be O(l^2/n_c) (average bucket occupancy) together with O(l*n_c) for the cost of computing the random projections. 
Do you include the latter in the total cost - i.e. the cost to compute the hashes?\", \"title\": \"Complexity of LSH attention\"}", "{\"comment\": \"Hi Jack, thank you very much for the additional information! As for sharing queries and keys: we did see slower training at first just copying the hyperparameters, but it reached the same accuracy later in training with an appropriate learning rate. As for comparisons to SOTA results: we used 12 layers and a default Transformer configuration using the Adafactor optimizer without any tuning other than learning rate. Al Rfou et al. report 1.11bpc for a similar 12-layer configuration but with tuning, extra losses and a different optimizer, while the numbers you cite are for a highly-tuned model with 24 layers and a different architecture if I understand correctly. The purpose of our paper is to introduce new techniques and show they match the baseline Transformer perplexity with lower memory and training time; we leave extensions to other Transformer variants for future work (as there are quite many of them and more by the day).\", \"title\": \"Thank you for the information\"}", "{\"comment\": \"Hello, I enjoyed reading your paper and think this area of research is very exciting. One minor concern/query that I had, which hopefully can be addressed by the time of author response / paper update, was why the results on enwik8 are so far from prior published transformer results. E.g. there have been transformers published with 0.99bpc (TransformerXL), and several others around the 1.0-1.06 mark. From Figure 5 it appears as though the models will not obtain results lower than 1.2 bpc. Is it because the models have not converged, or is it because the test data is different (i.e. you don't use the 90MB/5MB/5MB train/valid/test split)?\\n\\nFurthermore I was not able to replicate the positive benefit of sharing the key and query weight matrices. 
Namely, I used a 24 layer TransformerXL baseline --- exactly the same setup as the published paper --- which obtains 0.992 bpc, and then tried tying the weights between the query and key parameters; this led to a model with 1.012, which is a 0.02 bpc drop. The non-shared variant had a faster drop in training and validation learning curves. I don't mean for this to detract from the paper - but just to add another data-point on this observation. Perhaps you don't have to share the weights for queries and keys (still normalizing them of course, so you can use the spherical LSH).\", \"title\": \"Query about shared queries\"}" ] }
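A rough sketch of the bucket assignment discussed throughout the comment thread above: a random Gaussian matrix R (following Andoni et al.), with the bucket taken as the argmax over the concatenation [xR; -xR]. The shapes and names below are illustrative assumptions for a minimal sketch, not the authors' actual implementation:

```python
import numpy as np

def lsh_hash(x, R):
    # angular LSH: project with a random Gaussian R of shape [d_k, b/2],
    # then take the argmax over [xR; -xR], giving one of b buckets per vector
    xR = x @ R
    return np.argmax(np.concatenate([xR, -xR], axis=-1), axis=-1)

rng = np.random.RandomState(0)
x = rng.randn(5, 16)    # five query/key vectors with d_k = 16
R = rng.randn(16, 4)    # b/2 = 4, so b = 8 buckets
buckets = lsh_hash(x, R)
# the hash depends only on direction, so positive rescaling keeps the bucket
assert np.array_equal(buckets, lsh_hash(3.0 * x, R))
```

As the authors stress in their replies, this assignment is not differentiated through: gradients flow only through the attention weights computed within each bucket.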
S1gEFkrtvH
BasisVAE: Orthogonal Latent Space for Deep Disentangled Representation
[ "Jin-Young Kim", "Sung-Bae Cho" ]
The variational autoencoder, one of the generative models, defines the latent space for the data representation, and uses variational inference to infer the posterior probability. Several methods have been devised to disentangle the latent space for controlling the generative model easily. However, due to the excessive constraints, the more disentangled the latent space is, the lower quality the generative model has. A disentangled generative model would allocate a single feature of the generated data to the only single latent variable. In this paper, we propose a method to decompose the latent space into basis, and reconstruct it by linear combination of the latent bases. The proposed model called BasisVAE consists of the encoder that extracts the features of data and estimates the coefficients for linear combination of the latent bases, and the decoder that reconstructs the data with the combined latent bases. In this method, a single latent basis is subject to change in a single generative factor, and relatively invariant to the changes in other factors. It maintains the performance while relaxing the constraint for disentanglement on a basis, as we no longer need to decompose latent space on a standard basis. Experiments on the well-known benchmark datasets of MNIST, 3DFaces and CelebA demonstrate the efficacy of the proposed method, compared to other state-of-the-art methods. The proposed model not only defines the latent space to be separated by the generative factors, but also shows the better quality of the generated and reconstructed images. The disentangled representation is verified with the generated images and the simple classifier trained on the output of the encoder.
[ "variational autoencoder", "latent space", "basis", "disentangled representation" ]
Reject
https://openreview.net/pdf?id=S1gEFkrtvH
https://openreview.net/forum?id=S1gEFkrtvH
ICLR.cc/2020/Conference
2020
{ "note_id": [ "H7_03fz3S", "rkeK3Tkijr", "r1g81ERqsH", "B1g_Pk6ciB", "B1gMzgvqoB", "HJgw2OGcjH", "B1lk1AaDor", "SygFi36wir", "HJlaLopwsr", "BJg9KSpwoS", "ryxC77nvsr", "HJloCRuPsH", "SkxON3_Pir", "rJxBtFuDjH", "HylrrLuDsr", "SygjGNOvor", "H1gu7TPPir", "r1llLeEDir", "HyghbSTIsr", "HJgjySaUjH", "ryxnaET8or", "HkgnZixS9S", "B1xdl-vCFB", "H1gCgZOrKB", "rJxFoqfsPS" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review", "official_comment" ], "note_created": [ 1576798733732, 1573744049513, 1573737437827, 1573732192495, 1573707785853, 1573689518734, 1573539287103, 1573538977464, 1573538645079, 1573537154492, 1573532453583, 1573519059230, 1573518384479, 1573517692999, 1573516860577, 1573516307499, 1573514527972, 1573498951749, 1573471491874, 1573471458877, 1573471427900, 1572305668188, 1571873007637, 1571287285707, 1569561248839 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1837/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper1837/Authors" ], [ "ICLR.cc/2020/Conference/Paper1837/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper1837/Authors" ], [ "ICLR.cc/2020/Conference/Paper1837/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper1837/Authors" ], [ "ICLR.cc/2020/Conference/Paper1837/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper1837/Authors" ], [ "ICLR.cc/2020/Conference/Paper1837/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper1837/Authors" ], [ "ICLR.cc/2020/Conference/Paper1837/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper1837/Authors" ], [ 
"ICLR.cc/2020/Conference/Paper1837/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper1837/Authors" ], [ "ICLR.cc/2020/Conference/Paper1837/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper1837/Authors" ], [ "ICLR.cc/2020/Conference/Paper1837/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper1837/Authors" ], [ "ICLR.cc/2020/Conference/Paper1837/Authors" ], [ "ICLR.cc/2020/Conference/Paper1837/Authors" ], [ "ICLR.cc/2020/Conference/Paper1837/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper1837/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper1837/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper1837/Authors" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"The paper proposes a new way to learn a disentangled representation by embedding the latent representation z into an explicit learnt orthogonal basis M. While the paper proposes an interesting new approach to disentangling, the reviewers agreed that it would benefit from further work in order to be accepted. In particular, after an extensive discussion it was still not clear whether the assumptions of Theorem 1 applied to VAEs, and whether Theorem 1 was necessary at all. In terms of experimental results, the discussions revealed that the method used supervision during training, while the baselines in the paper are all unsupervised. The authors are encouraged to add supervised baselines in the next iteration of the manuscript. For these reasons I recommend rejection.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Response to authors\", \"comment\": \"Thank you for the thorough rebuttal and for adding the comparison to VQ-VAE, very helpful.\\n\\n1. Thanks for making this clear. Unfortunately, I actually feel that the fact that you supervise $c_i $ but do not make this clear is rather problematic. \\nAll baselines you compare against, and the history of the field of disentanglement learning is addressing the issue of *unsupervised* representation learning. 
Checking again, I see that you do not mention \\\"unsupervised\\\" or \\\"supervised\\\" anywhere in the paper, so this is more of an assumption on my side, but I feel you should have made that more explicit, or report unsupervised results in the Appendix.\\n\\nIt also puts you in competition with a plethora of other works which actually leverage supervision (fully supervised or semi-supervised). For example, the Multimodal VAE literature has been doing this for a while [1-7], which are missing from the Related work.\\n\\nUnfortunately, due to this issue, I do not feel comfortable with approving the manuscript in this state, as this would require quite a rewrite and change of baselines.\", \"references\": \"[1] For a good minimal extension of the VAE framework to introduce supervision, and their Related work section: https://arxiv.org/abs/1906.01044\\n[2] Siddarth et al 2017, https://arxiv.org/abs/1706.00400\\n[3] DC-IGN: https://arxiv.org/abs/1503.03167\\n[4] JMVAE: https://arxiv.org/abs/1611.01891\\n[5] BiVCCA: https://arxiv.org/abs/1610.03454\\n[6] TELBO: https://arxiv.org/abs/1705.10762\\n[7] MVAE: https://arxiv.org/abs/1802.05335 \\n[8] Beta-TCVAE: https://arxiv.org/abs/1802.04942\\n\\n\\n--\", \"other_points\": \"4. Thanks for the added plot. It was particularly informative to indicate specific sets of c_i that switch their behaviour. It would be good to see this for a single image, as currently this makes it hard to know what all the other c_i end up doing. Are they just capturing variability around a canonical c_j (which would be like the prototype / mean vector like in VQ-VAE)?\\n\\n5. 
I understood what the term does; I was wondering if you observed this lack of disentanglement happening *in practice*?\\n\\n--\", \"as_a_point_towards_addressing_the_issue_that_the_other_reviewers_have_with_the_elbo_formulation\": \"The proposed ELBO derivation is not helpful in practice, because looking at the loss terms 9-11, none of them assume the fully factorised form shown in equation 8.\\nInstead, BasisVAE simply forces the encoder to have a specific parametrisation (a linear combination of K vectors: z = \\\\sum_i c_i m_i), but this is never strictly enforced (even the basis assumption of $M_B$, which would be required for Equation 8 to be really used, is only a regularisation term...).\\nThis choice of parametrisation may be unable to capture some distributions p(x), which is a trade-off to decide for the user.\\n\\nHence personally I would remove Theorem 1 as it does not help and is confusing at best for people that expect a VAE to capture any distribution p(x).\"}", "{\"title\": \"Answers to Reviewer #3\", \"comment\": \"We considered conditional independence in this paper in relation to disentanglement. That is, with $x$ and corresponding $z=\\\\Sigma c_i z_i$, the feature changed by $z_1$ and the feature changed by $z_2$ are not related to each other. Therefore, when conditioned on $x$, the probability $p(z_1 , z_2|x)$ that the properties represented by $z_1$ and $z_2$ are expressed factorizes as the probability that the property represented by $z_1$ is expressed multiplied by that for $z_2$, i.e., $p(z_1 |x)p(z_2 |x)$.\\n\\nTherefore, the decoder needs to get $z_i$ from one of the encoder's outputs, $c_i$, and generate $x$ from it; this process proceeds and the loss is calculated for all $i$. 
But, for convenience, the decoder generates $x$ from the linear combination $z=\\Sigma c_i z_i$.\"}", "{\"title\": \"Encoder does not assure the assumption\", \"comment\": \"The conditional independence $p(z_1, z_2|x)=p(z_1|x)p(z_2|x)$ should be derived from the configurations of the prior $p(z)$ and decoder $p_\\\\theta(x|z)$.\\nHow you sample $z$ from the encoder (or your variational distribution) is irrelevant to the conditional independence of the *true* posterior.\\n\\nThis is why I indicate that the decoder (or maybe the prior) needs a special structure.\"}", "{\"title\": \"Answers to Reviewer #3\", \"comment\": \"Thank you for your response!\\n\\n1. We can derive $p(z_1, z_2 | x)=p(z_1 |z_2 , x)p(z_2 | x)$ with a conditional probability. Under the assumption that the $z_i$ are independent conditioned on $x$, since $p(z_1 | z_2 , x) =p(z_1 |x)$, $p(z_1 , z_2 |x)$ can be factorized into $p(z_1 |x)p(z_2 |x)$. The $z$ in line 9 of Algorithm 1 is sampled from $N(M_B \\\\cdot c^T,\\\\Sigma_{f(x)})$. In this process, we intended that the encoder outputs coefficient $c_i$ for independent $z_i$, and the decoder generates data by inputting $z$ which is a linear combination of $c_i$ and $z_i$. Therefore, the decoder takes $z$ which is a linear combination of all components of $z$.\\n\\n2. Thank you for your comments. To avoid the confusion, we'll correct that word.\"}", "{\"title\": \"Point 1. (Issue 1 of Reviewer #2)\", \"comment\": \"Thank you for the response.\\n\\nAs in the discussion with Reviewer #2, I am not yet convinced that p(z_1, z_2 | x) factorizes into p(z_1|x)p(z_2|x). \\nThe generative model uses decoder $x \\\\sim p_\\\\theta(\\\\cdot | z)$ (line 9 of Algorithm 1) where the decoder network takes all components of $z$ as its input. \\nIn this case, the conditional independence $p(z_1|x)p(z_2|x)$ should be carefully justified. 
(I think the decoder needs some special structure.)\", \"regarding_point_3\": \"I interpreted the binary function as a function that returns a binary value {0,1}.\\nSimply calling it \\\"negative log likelihood\\\" or \\\"reconstruction function\\\" can avoid the possible confusion.\"}", "{\"title\": \"Answers to Reviewer #2\", \"comment\": \"Thank you for your consecutive reviews!\\n\\nAs we mentioned, by definition, in that space a latent variable covers only one generative factor and the variables do not affect each other, which we interpreted as independence. \\n\\nIn many existing disentangled representations, it is confirmed that even for the same data x, different z_i change individual characteristics (e.g. background color, gender, etc.) that do not affect each other.\"}", "{\"title\": \"Potentially false premise\", \"comment\": \"The statement \\\"Theorem 1 shows that the existing ELBO can be separated into independent z_i's.\\\" is only true if we believe Theorem 1's premise that \\\"z_i are independent conditioned by x\\\" is true for VAEs. Can you explain why \\\"z_i are independent conditioned by x\\\" is true for VAEs?\"}", "{\"title\": \"Answers to Reviewer #2\", \"comment\": \"Theorem 1 shows that the existing ELBO can be separated into independent z_i's.\\nBased on these observations, we set the output of the encoder to coefficients c_i for independent z_i instead of one integrated z, as in a normal VAE, even though this actually deviates from the standard VAE. \\nBy setting the loss as equations (9) ~ (11), we have trained the data representation to separate the z_i from each other (i.e., to satisfy disentanglement).\"}", "{\"title\": \"What is the significance of Theorem 1?\", \"comment\": \"Does this mean that \\\"z_i are independent conditioned by x\\\" is an explicit assumption in the premise of Theorem 1? If so, OK, I accept that Theorem 1 is correct.\\n\\nBut what, then, is the significance of Theorem 1? 
Since the premise for Theorem 1 is violated in a VAE, Theorem 1 can't be applied to a VAE.\"}", "{\"title\": \"Answers to Reviewer #2\", \"comment\": \"Theorem 1 is true if z_i are independent conditioned by x.\\n\\nWe found that in Figure 1, only one feature changes with z in the normal VAE. This is represented by z = c1z1 + c2z2 in a two-dimensional representation, meaning that only one feature is adjusted according to c, and z1 and z2 are disentangled, but not on a standard basis. \\nWe proceed on the assumption that z_i are independent when disentangled. I apologize that this has caused confusion. We will add detailed and in-depth assumptions and content about what you pointed out.\\n\\nThanks again for the good point.\"}", "{\"title\": \"Please Rewrite Theorem 1\", \"comment\": \"It looks like we're not on the same page. I believe this confusion is arising from our disagreement over how to read Theorem 1. All I can say with confidence at the moment is that, in the standard VAE setup, the jump from Eq 5 to 6 is wrong.\\n\\nIf you wish to convince me otherwise, please restate Theorem 1 and its proof as rigorously as possible.\"}", "{\"title\": \"Answers to Reviewer #2\", \"comment\": \"I'm sorry for using a confusing expression. By \\\"multiple levels\\\" we mean the \\\"multiple disentangled latent vectors\\\".\\nBasically, the proposed model is related to disentangled representation. By definition, in that space a latent variable covers only one generative factor and the variables do not affect each other [1, 2], which can be interpreted as independence. In Theorem 1 we assume the latent space is disentangled, and each factor is then z_1, ..., z_n. (This is evidenced by experiments with only one latent variable changed in many disentangled representation studies [3, 4].)\\n\\n[1] Y. Bengio, A. Courville, and P. Vincent. Representation learning: A review and new perspectives.\\nIEEE Trans. 
on Pattern Analysis and Machine Intelligence, 35(8):1798\\u20131828, 2013.\\n[2] K. Ridgeway. A survey of inductive biases for factorial representation-learning. arXiv preprint arXiv:1612.05299, 2016.\\n[3] Higgins, I., Matthey, L., Pal, A., Burgess, C., Glorot, X., Botvinick, M., ... & Lerchner, A. (2017). beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework. ICLR, 2(5), 6.\\n[4] Chen, X., Duan, Y., Houthooft, R., Schulman, J., Sutskever, I., & Abbeel, P. (2016). Infogan: Interpretable representation learning by information maximizing generative adversarial nets. In Advances in neural information processing systems (pp. 2172-2180).\"}", "{\"title\": \"Response\", \"comment\": \"It is true that for hierarchical VAEs whose PGM is $z_1 \\\\to z_2 \\\\to \\\\cdots \\\\to x$, that $p(z_i \\\\mid x, z_j) = p(z_i \\\\mid z_j)$ when $i < j$.\\n\\nHowever, your model, from what I can tell, is not a hierarchical VAE. So I don't quite understand your claim that your \\\"latent variables are split into multiple levels z_1, ..., z_n\\\". \\n\\nFurthermore, this is still not the same as your claim that $p(z_i, z_j \\\\mid x) = p(z_i \\\\mid x)p(z_j \\\\mid x)$.\\n\\nAnd regarding the PixelVAE paper, the assumption that $q(z \\\\mid x)$ is factorized is an explicit assumption on the variational inference model, which is not the same object as the true posterior $p(z \\\\mid x)$ of the generative model in the VAE.\\n\\nCan you clarify what you meant?\"}", "{\"title\": \"Answers to Reviewer #2\", \"comment\": \"Thank you for your quick response!\\nI am pleased to be able to conduct this constructive discussion with you.\\n\\nOur latent variables are split into multiple levels z_1, ..., z_n. The joint posterior over all of these is a simple fully factorized Gaussian (e.g. conditioned on x, z_2 is independent of z_1), unlike normalizing flows which are used to make the posterior distribution more flexible. 
\\n\\nBesides, as in [1], if you look at the equation associated with -L (x, q, p) on page 4, you can see that the same assumption is used when moving from the first expression to the second.\\n\\n[1] Gulrajani, I., Kumar, K., Ahmed, F., Taiga, A. A., Visin, F., Vazquez, D., & Courville, A. (2016). Pixelvae: A latent variable model for natural images. International Conference on Learning Representation.\"}", "{\"title\": \"Potential misuse of Naive Bayes assumption\", \"comment\": \"Assuming that all the latent variables are statistically independent conditional on the data is a very big assumption.\\n\\nIn the standard Naive Bayes setup, this assumption is fundamentally baked into the model class (by virtue of the PGM, every model within the Naive Bayes model class provably satisfies the Naive Bayes assumption: whereby the observed features are assumed to be independent when conditioned on the underlying class). \\n\\nIn contrast, your proof assumes that p(z_{1:k} | x) = prod_i p(z_i | x) within a VAE model class, which is not something that's actually guaranteed by the VAE model class. \\n\\nI am therefore quite uncomfortable with the analysis in Section 3.1. As of the moment, I am strongly inclined to believe that Theorem 1 is wrong. Please let me know what you think.\"}", "{\"title\": \"Answers to Reviewer #2\", \"comment\": \"Thank you for your respond!\\n\\nWe can check the equation in the derivation of the naive Bayes classifier [1-3].\\nThe \\\"naive\\\" conditional independence assumptions in the naive Bayes classifier come into play on our derivation: assume that all latent variables in \\\\mathbf{z} are mutually independent, conditional on the data \\\\mathbf{x}. Under this assumption,\\np(z_i|z_{i+1},...,z_n,x) = p(z_i|x)\\n\\n[1] Ceci, M., Appice, A., & Malerba, D. (2003). Mr-SBC: a multi-relational naive bayes classifier. In European conference on principles of data mining and knowledge discovery, 95-106.\\n[2] Hilden, J. (1984). 
Statistical diagnosis based on conditional independence does not require it. Computers in biology and medicine, 14(4), 429-435.\\n[3] Domingos, P., & Pazzani, M. (1997). On the optimality of the simple Bayesian classifier under zero-one loss. Machine learning, 29(2-3), 103-130.\"}", "{\"title\": \"Response to Issue 1\", \"comment\": \"Thanks for the response. I'd like to address Issue 1 first.\\n\\nI checked your Appendix C and noticed the following claim:\\n$p(z_2 | x, z_1) = p(z_2 | x)$\\n\\nI don't think this claim is correct in general (see \\\"explaining away\\\" effect in v-structures). Can the authors clarify this step?\"}", "{\"title\": \"Answers to Reviewer #3\", \"comment\": \"Thank you for your comments. They are very helpful for us to conduct more finished works. According to the reviewer\\u2019s comments, we have addressed them as follows.\\n\\t1. It is enough to show p(x\\u2502z_1,z_2 )=(p(x\\u2502z_1 )p(x\\u2502z_2 ))/(p(x)) for the derivation from (5) to (6). We have added it in Appendix C.\\n\\t2. We derive from Equation 8 that a latent variable z can be decomposed into several independent variables z_i, generating the same data x from them with the encoder, and constructing an ELBO. In the BasisVAE, z_i corresponds to the basis element b_i, and it is adjusted by the coefficient c_i output of the encoder.\\n\\t3. A binary function is a function that takes two arguments; it is instantiated as the cross-entropy as in the VAE, or as the weighted l1 error on a Laplacian pyramid as in Bojanowski et al.\\n\\t4. Sorry for the typos. N(f(x),\\\\Sigma_f(x)) should be replaced with N(M_B*f(x),\\\\Sigma_f(x)). We have corrected it.\\n\\t5. Fig. 6 shows the result when only one c is 1 and the others are 0. It is shown that the basis elements have one distinct characteristic and only one characteristic changes in Fig. 8 when changing the strength of the basis element (i.e., c). More examples are shown in Figure 11. 
These results are seen in MNIST and 3DFace datasets as well as CelebA datasets in Figures 5 and 7. In addition, we also demonstrate the performance by showing the quantitative evaluation of disentanglement in Table 3.\"}", "{\"title\": \"Answers to Reviewer #1\", \"comment\": \"Thank you for your comments. They are very helpful for us to conduct more finished works. According to the reviewer\\u2019s comments, we have addressed them as follows.\\n1.\\tWe conducted the experiments with supervised learning, but we have obtained similar results when repeating all the experiments with unsupervised learning. In response to the reviewer's comment, we have also added a comparison with the VQ-VAE model.\\nFigure 6 can be verified according to the relationship between the distribution of coefficient c and the characteristics of the input image.\\n2.\\tWe set n_x to 40 according to our previous work. For larger n_x values, there was no significant difference, but in small cases, more than two generative factors appear on one basis element.\\nAs shown in Figure 2, f(x)=(c,\\\\sigma), i.e., encoder outputs the coefficient and \\\\sigma simultaneously as in VAE. Besides, the basis matrix B can be trained with equation (11) as in VQ-VAE.\\nAs mentioned in Section 4.2, The layer structure of the model is almost similar, and sampling z is performed using encoder f(x) and \\\\sigma with no basis compared to the proposed model. In betaVAE, beta is set to 100 times the coefficient of the reconstruction error.\\n3.\\tThank you for the good comment. We already quantitatively assessed the reconstruction performance and listed it in Table 1 and confirmed that it showed the best performance. 
In fact, our model puts forward the theory of decomposing the latent space and builds the basis to realize it; its main contribution lies in the advantages (especially in disentanglement) that can be obtained by constructing the latent variable from a linear combination of the basis elements.\\n\\t4.\\t Thank you for the good comment. We describe in appendix D the results of investigating differences in c_i distributions for \\\"blonde women\\\", \\\"black-haired women\\\" and \\\"black-haired men\\\". We will continue to add the comparisons of distribution for the various samples. \\n\\t5.\\tBy removing L_B, the basis elements are not orthonormal to each other, so the Cartesian coordinate system is not set by default with that kind of basis. Thus, there will be more relationships between the basis elements, and the disentanglement will disappear.\\n\\t6.\\tSorry for the typos. N(f(x),\\\\Sigma_f(x)) should be replaced with N(M_B*f(x),\\\\Sigma_f(x)). We have corrected it.\\n\\t7.\\tTo avoid the confusion, we have corrected it. Thank you for your comments.\"}", "{\"title\": \"Answers to Reviewer #2\", \"comment\": \"Thank you for your comments. They are very helpful for us to conduct more finished works. According to the reviewer\\u2019s comments, we have addressed them as follows.\\nIssue 1\\n\\t1. It is enough to show p(x\\u2502z_1,z_2 )=(p(x\\u2502z_1 )p(x\\u2502z_2 ))/(p(x)) for the derivation from (5) to (6). We have added it in Appendix C.\\n\\t2. We derive from Equation 8 that a latent variable z can be decomposed into several independent variables z_i, generating the same data x from them with the encoder, and constructing an ELBO. In the BasisVAE, z_i corresponds to the basis element b_i, and it is adjusted by the coefficient c_i output of the encoder.\\n\\nIssue 2\\n\\t1. The output of the encoder is the coefficient c_i, which is multiplied by the basis matrix and added to \\\\epsilon * \\\\sigma to produce a latent variable z. 
We have shown that latent space can be decomposed in Thm 1, which shows that latent variable z can be represented as a linear combination of several basis elements. It can be done with fewer constraints than the conventional disentanglement representation, resulting in a more effective method.\\n\\t2. There are many cases of M satisfying M.T * M = I besides the identity matrix I. In the case of the conventional disentanglement representation method, M = I is made so that a single latent unit is associated with a single generative factor. However, in the proposed method, a single basis element is associated with a single generative factor, which is free from the second constraint mentioned in Section 1.\\n\\nIssue 3\\n\\t1. We have slightly simplified the disentanglement-specific metric used in betaVAE as the performance of the simplest logistic regression (LR) using the coefficient c (or latent variable z) extracted through the encoder. As mentioned by the reviewer, rotation is applied. Nevertheless, the results show that the proposed model has the simplest design of latent space, which makes it easier to distinguish generative factors.\\n\\t2. Sorry for the confusion. In the first original, average was in %???. We have made the appropriate modifications to avoid the confusion.\\n\\nAccording to the comments, we have filled in the missing explanations in the main text and added more material, such as the results of VQ-VAE for comparison and the distribution of the coefficients c_i, in the appendix.\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"Summary:\\nThis paper claims to achieve disentanglement by encouraging an orthogonal latent space.\", \"decision\": \"Reject. 
I found the paper difficult to read and the theoretical claims problematic.\", \"issue_1\": \"The Theorem\\nCan the authors explain how they got from Eq 5 to Eq 6? It seems that the authors claim that:\\np(x | z1 z2 \\u2026 zn) = p(x | z1) \\u2026 p(x | zn) / p(x)**(n - 1)\\nI have difficulty understanding why this is true. It would suggest that\\np(x | a b) = p(x | a) p(x | b) / p(x). \\nSuppose a and b are fair coin flips and x = a XOR b. Then\\np(x=1 | a=1 b=1) = 0\\np(x=1 | a=1) = 0.5\\np(x=1 | b=1) = 0.5\\np(x=1) = 0.5\\nCan the authors please address this issue?\\n\\nEven if Equation 8 is somehow correct, can the authors explain why BasisVAE provably maximizes the RHS expression in Eq 8? In particular the object p(x | z_i) is the integral of p(x, z_not_i | z_i) d z_not_i, which is quite non-trivial.\", \"issue_2\": \"The Model\\nThe notation is a bit confusing, but it looks like the proposed model is basically a standard VAE, but where the last layer of the mean-encoder is an orthogonal matrix. I do not think the authors provided a sufficient justification for how this model relates back to Theorem 1. \\n\\nFurthermore, it is unclear to me why an orthogonal last-layer is of any significance theoretically. Suppose f is a highly expressive encoder. Let f(x) = M.T g(x) where g is itself a highly expressive neural network. Then M f(x) = g(x), which reduces to training a beta-VAE (if using Eq 12). From a theoretical standpoint, it is difficult to assess what last-layer orthogonality is really contributing.\", \"issue_3\": \"The Experiments\\nExperimentally, the main question is whether the authors convincingly demonstrate that BasisVAE achieves better disentanglement (independent of whether BasisVAE is theoretically well-understood). \\n\\nThe only experiment that explicitly compares BasisVAE with previous models is Table 3. What strikes me as curious about the table is the standard deviation results. They are surprisingly small. 
Did the authors do multiple runs for each model? Furthermore, the classification result is not equivalent to measuring disentanglement. There exists examples of perfectly entangled representation spaces can still achieve perfect performance on the classification task (any rotation applied to the space is enough to break disentanglement if disentanglement is defined as each dimension corresponding to a single factor of variation).\"}", "{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"[updated rating due to supervision of $c_i$, which was not made clear enough and would require other baseline models]\\n\\nThis paper proposes a modification of the usual parameterization of the encoder in VAEs, to more allow representing an embedding $z$ through an explicit basis $M_B$, which will be pushed to be orthogonal (and hence could correspond to a fully factorised disentangled representation). It is however possible for different samples $x$ to use different dimensions in the basis if that is beneficial (i.e. x is mapped to $z = f(x) \\\\cdot M_B$, where f(x) = (c_1, ... , c_n) which sums to 1.). This stretches the usual definition of what a \\u201cdisentangled representation\\u201d means, as this disentanglement is usually assumed to be globally consistent, but this is a fair extension.\\nThey show that this formulation can be expressed as a different ELBO which can be maximized as for usual VAEs.\\n\\nI found this paper interesting, but I have one clarification that may modify my assessment quite strongly (hence I am tentatively putting it on the accept side). Some implementation details seem missing as well. 
Otherwise the presentation is fair, there are several results on different datasets which demonstrate the model's behaviour appropriately.\\n\\n1.\\tThe main question I have, which may be rather trivial, is \\u201care the c_i supervised in any way?\\u201d.\\nWhen I first read the paper, and looking at the losses in equations 9-11, I thought that this wasn\\u2019t the case (also considering this paper is about unsupervised representation learning), but some sentences and figures make this quite unclear:\\n\\ta.\\tIn Section 3.2, you say \\u201cWe train the encoder so that c_i = 1 and c_j = 0 if the input data has i-feature and no j-feature\\u201d. Do you?\\n\\tb.\\tHow are the features in Figure 6 attached to each b_i?\\nI.e. how was \\u201c5_o_clock_shadow\\u201d attached to that particular image at the top-left?\\n\\tIf the c_i are supervised, this paper is about a completely different type of generative modeling than what it compares against (it would be more comparable to VQ-VAE or other nearest-neighbor conditional density models).\\n2.\\tThere is not enough details about the architecture, hyperparameter and baselines in the current version of the paper.\\n\\ta.\\tWhat n_x (i.e. dimensionality of the basis) do you use? How does this affect the results?\\n\\tb.\\tHow exactly are f(x), \\\\Sigma_f(x) parametrized? They mention the architecture of the \\u201cencoder\\u201d in Section 4.1, but this could be much clearer.\\n\\tc.\\tHow do you train M_B? I assume they are just a fixed set of embeddings that are back-propagated through?\\n\\td.\\tWhat are the details about the architecture of the baselines, and their hyperparameters? E.g. what is the beta you used for Beta-VAE?\\n3.\\tThe reconstructions seem only partially related to their target inputs (e.g. see Figure 4). This seems to indicate that instead of really reconstructing x, the model chooses to reconstruct \\u201ca close-by related \\\\tilde{x}\\u201d, or even perhaps a b_i. 
This would make it behave closer to VQ-VAE, which explicitly does that. How related are reconstructions/samples to the b_i?\\n4.\\tCould you show the distribution of c_i that the model learns, and how much they vary for several example images? \\nHow \\u201cpeaky\\u201d is this distribution for a given image (this feeds into to the previous question as well)?\\nThe promise of the proposed model is that different images pick and choose different combinations of b_i, which hopefully one should see reflected in the distributions of c_i per sample, across clusters, or across the whole dataset.\\n5.\\tWhat happens when L_B is removed? I.e. what is the effect of removing the constraint on M_B being a basis, and instead allow it to be anything? This seems to make it closer to a continuous approximation to VQ-VAE?\\n6.\\tIs Equation 10 correct? Should the KL use N(f(x) \\\\cdot M_B, \\\\Sigma_f(x)), as in equation 9 above?\\n7.\\tSimilarly, in Section 4.2.3, did you mean \\u201cc_i = 1 and c_j = 0 for i != j\\u201d?\\n\\nIf the model happens to be fully unsupervised, I think that these results are quite interesting, and provide a good modification to the usual VAE framework, I find that having access to the M_B basis explicitly could be very valuable.\\n\\nThere is still an interesting philosophical discussion to be had about when one would like to obtain a \\u201cglobal basis\\u201d for the latent space (i.e. Figure 3 (b)), or when one would prefer more local ones. I can see clear advantages for a non-local basis, in terms of generalisation and compositionality, which your choice (i.e. 
Figure 3 (c) ) would prohibit.\", \"references\": \"[1] VQ-VAE: Aaron van den Oord, Oriol Vinyals, Koray Kavukcuoglu, \\\"Neural Discrete Representation Learning\\\", https://arxiv.org/abs/1711.00937\"}", "{\"rating\": \"1: Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposes BasisVAE for acquiring a disentangled representation of VAE.\\nThough the topic is of much interest for this conference, I cannot support its acceptance because the paper leaves many aspects unexplained in the model design. \\n\\nIn particular, the following points need to be justified and clarified.\\n1) Theorem 1 is difficult to follow. \\nThe claim of the theorem is unclear. \\nI suppose it says the ELBO can be written as a sum with respect to z_i given p(z)=\\\\prod_i p(z_i), but the statement is not clear enough from the text. \\nProof of Lemma 1 is logically incomplete. Discuss the cases n>2.\\nDerivation of equation (6) from (5) seems erroneous: p(x|z_1, ..., z_n) = \\\\prod_{i=1}^n p(x|z_i) / p^{n-1}(x) does not hold in general even if z_i's are independent p(z_1, ..., z_n)=\\\\prod_{i=1}^n p(z_i).\\n\\n2) Connection between the objective function and Theorem 1 is unclear. \\nBasisVAE uses a linear combination of Eqs. (9,10,11) as its objective function. \\nHow does Theorem 1 motivate this formulation?\\n\\n3) Reconstruction error (9). \\nThe text says \\\\ell of Eq. (9) is the binary function, configured as in (Bojanowski et al. 2017). \\nHowever, Bojanowski et al. used a weighted l1 error on a Laplacian pyramid representation. \\nFurthermore, the original VAE formulation uses a conditional log-likelihood log p(x|z) for the reconstruction term. 
\\nHow is the binary function \\\\ell related to the likelihood?\\n\\n4) KL regularization term (10).\\nFor computing this term, the output of the encoder c=f(x) should be converted into z. \\nNotation of N(f(x), \\\\Sigma) is confusing. \\n\\n5) Figure 6 shows diversity in many factors. \\nFigure 6 is not as impressive for disentangled images since many factors change by varying a single basis. \\nIs this an expected result?\"}", "{\"comment\": \"Apologies to the readers - we identified a formatting error in the first paragraph of Section 3.1. Theorem 1 and Lemma 1 have not been written separately, but together in the main text.\", \"title\": \"Formatting error causing inconvenience to read\"}" ] }
BygXFkSYDH
Target-Embedding Autoencoders for Supervised Representation Learning
[ "Daniel Jarrett", "Mihaela van der Schaar" ]
Autoencoder-based learning has emerged as a staple for disciplining representations in unsupervised and semi-supervised settings. This paper analyzes a framework for improving generalization in a purely supervised setting, where the target space is high-dimensional. We motivate and formalize the general framework of target-embedding autoencoders (TEA) for supervised prediction, learning intermediate latent representations jointly optimized to be both predictable from features as well as predictive of targets---encoding the prior that variations in targets are driven by a compact set of underlying factors. As our theoretical contribution, we provide a guarantee of generalization for linear TEAs by demonstrating uniform stability, interpreting the benefit of the auxiliary reconstruction task as a form of regularization. As our empirical contribution, we extend validation of this approach beyond existing static classification applications to multivariate sequence forecasting, verifying their advantage on both linear and nonlinear recurrent architectures---thereby underscoring the further generality of this framework beyond feedforward instantiations.
[ "autoencoders", "supervised learning", "representation learning", "target-embedding", "label-embedding" ]
Accept (Talk)
https://openreview.net/pdf?id=BygXFkSYDH
https://openreview.net/forum?id=BygXFkSYDH
ICLR.cc/2020/Conference
2020
{ "note_id": [ "X4P5WYPW9O", "rJxQBqE2jB", "r1lGkcPqir", "BJx_czntor", "Byl-KGhtir", "rylTtJ4IiS", "SJlD_kNLsH", "Bkg1vJE8iS", "ryeYS1NUiS", "SJlyEyNUsr", "rkeoH0QLiH", "H1l7eAX8jH", "rkxmqp7Lsr", "SJe1uTQUir", "rkx24TQIjH", "BkxL-a7Lor", "rkxHb3XLoS", "Bkl_9i78iH", "HkeuGjX8jr", "BJlFO57UoH", "H1gkm4ET5H", "rkecJZjucH", "ryx7d3w75H", "r1x_NdV6FB" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review", "official_review" ], "note_created": [ 1576798733704, 1573829179076, 1573710297984, 1573663376495, 1573663352736, 1573433220952, 1573433199105, 1573433174672, 1573433153449, 1573433126944, 1573432898977, 1573432810842, 1573432715341, 1573432678578, 1573432627712, 1573432573721, 1573432316837, 1573432207583, 1573432079802, 1573431920659, 1572844567033, 1572544738348, 1572203626827, 1571797040365 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1836/Authors" ], [ "ICLR.cc/2020/Conference/Paper1836/Area_Chair1" ], [ "ICLR.cc/2020/Conference/Paper1836/Authors" ], [ "ICLR.cc/2020/Conference/Paper1836/Authors" ], [ "ICLR.cc/2020/Conference/Paper1836/Authors" ], [ "ICLR.cc/2020/Conference/Paper1836/Authors" ], [ "ICLR.cc/2020/Conference/Paper1836/Authors" ], [ "ICLR.cc/2020/Conference/Paper1836/Authors" ], [ "ICLR.cc/2020/Conference/Paper1836/Authors" ], [ "ICLR.cc/2020/Conference/Paper1836/Authors" ], [ "ICLR.cc/2020/Conference/Paper1836/Authors" ], [ "ICLR.cc/2020/Conference/Paper1836/Authors" ], [ "ICLR.cc/2020/Conference/Paper1836/Authors" ], [ "ICLR.cc/2020/Conference/Paper1836/Authors" ], [ 
"ICLR.cc/2020/Conference/Paper1836/Authors" ], [ "ICLR.cc/2020/Conference/Paper1836/Authors" ], [ "ICLR.cc/2020/Conference/Paper1836/Authors" ], [ "ICLR.cc/2020/Conference/Paper1836/Authors" ], [ "ICLR.cc/2020/Conference/Paper1836/Authors" ], [ "ICLR.cc/2020/Conference/Paper1836/AnonReviewer4" ], [ "ICLR.cc/2020/Conference/Paper1836/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper1836/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper1836/AnonReviewer1" ] ], "structured_content_str": [ "{\"decision\": \"Accept (Talk)\", \"comment\": \"The paper presents a general view of supervised learning models that are jointly trained with a model for embedding the labels (targets), which the authors dub target-embedding autoencoders (TEAs). Similar models have been studied before, but this paper unifies the idea and studies more carefully various components of it. It provides a proof for the specific case of linear models and a set of experiments on disease trajectory prediction tasks. The reviewer concerns were addressed well by the authors and I believe the paper is now strong. It would be even stronger if it included more tasks (and in particular some \\\"typical\\\" tasks that more of the community is focusing on), and the theoretical part is to my mind not a major contribution, or at least not as large as the paper implies, because it analyzes a much simpler model than anyone is likely to use TEAs for.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Dear Reviewer 4\", \"comment\": \"We are sincerely grateful for your time and energy in the review process. In light of our responses (Nov 11) and revisions (Nov 13), we would appreciate if the reviewer kindly let us know of any leftover concerns in the very limited time remaining. With our responses and revisions, we humbly hope that (similar to Reviewer 2) the reviewer would kindly consider revising their rating. 
Thank you.\"}", "{\"title\": \"Reviewers, any additional feedback following author responses?\", \"comment\": \"Dear Reviewers, thanks for your thoughtful input on this submission! \\u00a0The authors have now responded to your comments. \\u00a0Please be sure to go through their replies and revisions. \\u00a0If you have additional feedback or questions, it would be great to get them this week while the authors still have the opportunity to respond/revise further.\\n\\nAlso, there is a wide range of scores for this submission. Please consider whether the author responses and/or comments of other reviewers affect your recommendation. Thanks!\"}", "{\"title\": \"Revised Paper (continued)\", \"comment\": \"*** (c.) Additional Related Work ***\\n\\n[Reviewer 3]\\nThank you again for the suggestion to include extreme multi-label classification. These now appear at the end of Section B.2, including all four works cited in our original response, as well as a reference to the data and compilation webpage for further detail.\\n\\n[Reviewer 4]\\nThank you again for the suggestions of both [Dalca, CVPR 2018] and [Mostajabi, CVPR 2018]. As promised, the former fits nicely into Table 6, and the latter into Table 7. We agree that they are both broadly related to target-embedding. However, as explained in detail in response (1.1), (1.2), and (M.2), the former is highlighted under \\\"unsupervised, unpaired\\\", and the latter is highlighted under \\\"not jointly trained\\\"---both of which are important distinctions central to our theoretical and empirical analyses. These works are also mentioned in Section 4 (as well as Appendix B), with these distinctions made clear.\\n\\n*** (d.) Additional Sensitivities ***\\n\\n[Reviewer 1]\\nAs part of our response (2.4), we explored the idea of performance degradation due to out-of-distribution test samples. 
The preliminary results (along with a detailed description) are now included in Table 37 in Section E.5.\\n\\n[Reviewer 2]\\nAs part of our response to question (3.5.b), we explored the idea of rearranging the order of the training stages. The results (along with a detailed description) are now included in Table 38 in Section E.5. These are in addition to the 7 focuses of our empirical analysis in Section 5, summarized in response (3.3.a).\"}", "{\"title\": \"Revised Paper\", \"comment\": \"We thank the reviewers again for their thoughtful comments.\\n\\nWe have updated the paper to more clearly and accurately reflect our specific positioning and contributions. Broadly, this includes (a.) fine-tuned language in the abstract, introduction, and conclusion, (b.) specific suggestions for additional clarifications, (c.) specific suggestions for additional related work, as well as (d.) additional sensitivities for thoroughness. We are grateful for all questions and suggestions, which improve the clarity and thoroughness of our arguments and presentation.\\n\\n[Note on Revisions]\\n*Blue* indicates any additions and revisions.\\n*Purple* indicates purely formatting-related changes (i.e. existing words italicized for emphasis).\\n\\n[Note on Related Work]\\nWe deeply appreciate the suggestions for additional related work, and have included all such works. Given our framework-level focus, the more works there are that relate to label-embedding in general, the more strongly our thesis is motivated and supported. Since our work intersects three (very) broad areas of research, it is inevitable that any list of related works---however large---will not be 100% exhaustive. That said, while our related work section already spans 4 pages (and the bibliography already spans 5), we do provide concise and detailed summary tables that organize their distinctions and relevance in order to contextualize our work.\\n\\n*** (a.) 
Language and Presentation ***\\n\\n[Reviewers 1, 2, and 4]\\nThe abstract, introduction, and introductory language to Section 3 (immediately before and after the section title) now most clearly reflects our positioning and three main contributions: (1) formalizing TEAs as a *general framework* that unifies several recent applications of label-embedding in disparate domains, (2) providing a first theoretical *learning guarantee* for linear TEAs by demonstrating uniform stability, and (3) first demonstrating empirically the further generality of TEAs (beyond feedforward instantiations for static classification) to the *temporal setting* for multi-variate, recurrent sequence forecasting, both regression and classification. We especially emphasize the importance of *joint* training to our analyses, as well as our focus on the purely *supervised* setting (e.g. now italicized in the abstract). The conclusion is also updated to more clearly reflect the fact that static classification applications can be regarded as existing instances of the framework. Broadly, this adds to responses (3.1) and (3.3) for Reviewer 2, and responses (1.3), (M.1), and (M.3) for Reviewer 4.\\n\\n*** (b.) Additional Clarifications ***\\n\\n[Reviewer 1]\\nThe introduction now contains more examples of tasks with high-dimensional output, per our response (1.1). Since we extensively discuss the setting of disease trajectory forecasting later on, this is now replaced with the existing static classification applications to image tagging, text annotation, and image segmentation.\\n\\n[Reviewer 2]\\nSection 2 now contains a better explanation of training and inference, per our response (3.5.a). In particular, we explicitly mention which parameters are trained in which stages, such that the reader does not need to wait until Appendix C for this general information. 
That said, we also include explicit mention of Algorithm 1 for quick reference, as well as pointing to the much more detailed diagrams in Figure 4 in Appendix C for step-by-step illustrations of training and inference.\\n\\n[Reviewers 1, 2, and 4]\\nIn the introduction to Section 3, as well as a separate paragraph preceding Section 3, we doubly emphasize our focus as a *framework-level* analysis. In addition to pp. 5-6 and 8, we reiterate earlier here that existing static classification applications can be abstractly regarded as instantiations of this framework, such that there should not be any such confusion from the beginning to the end of the paper. In particular, this addresses response (3.1) for Reviewer 1, and adds to responses (3.1) and (3.3) for Reviewer 2, and responses (1.3), (M.1), and (M.3) for Reviewer 4.\\n\\n[Reviewers 2 and 4]\\nTo better highlight the significance of our theoretical result, we mention explicitly at the end of Section 3 how this enables us to unambiguously identify and quantify the benefit, in contrast with standard intuition-based arguments. First, we have included an additional detailed remark (Remark 4) at the end of Appendix A. In addition, a shorter version of the argument can be found at the end of Section 3, which points to Remark 4. (The original Remarks 4 and 5 are now renumbered as 5 and 6). This specifically reflects the added analysis in response (3.2) for Reviewer 2, and in responses (2.1) and (2.2) for Reviewer 4.\\n\\n[Reviewer 2]\\nSection 5 now contains a better exposition of the source of gains, per our response (3.4). In particular, the details of each of the experimental settings are expanded with more comparisons and explanations.\"}", "{\"title\": \"Response to Reviewer 4 [Part 5/5]\", \"comment\": \"*** Miscellaneous ***\\n\\n(M.1) Language: As mentioned previously, we will duly amend some of our language in the introduction and conclusion to more clearly and accurately position our contributions. 
However, we would like to point out that we never claim to be the *first* to \\\"motivate and formalize\\\" the general idea of autoencoding targets per se. Now, we are specifically \\\"the first to formalize and quantify the theoretical benefit\\\" of TEAs (p. 1), and we stand by this claim. Separately and before that, Section 2 serves to \\\"motivate and formalize\\\" TEAs by way of unifying multiple application papers in different domains under a single framework; these existing works are cited extensively, and we do not claim to precede them. This already includes [Girdhar, ECCV 2016], which we explicitly identify as an instantiation of TEAs (p. 5, 6, and 8). Answers (1.1) to (1.3) give more detail.\\n\\n(M.2) Section 4: We would like to thank you again for pointing out the two papers. As mentioned previously, we are glad to cite them. However, in light of answers (1.1) through (1.3), we don't believe this warrants a full \\\"rewrite\\\". As explained, neither [Dalca, CVPR 2018] nor [Mostajabi, CVPR 2018] is directly comparable. We will add references to them in Section 4, and include the former in Table 6 (but under \\\"unsupervised / unpaired\\\"), as well as the latter in Table 7 (but under \\\"not joint\\\"). However, this does not involve *rewriting* any part of the existing (and very comprehensive) related work, the full version of which already spans over 3 pages with 3 summary tables. Finally, let us reiterate that, should there be further existing applications more relevant to TEAs, we are very glad to mention them to better support our thesis.\\n\\n(M.3) Concluding sentence: We agree that this may be misleading. We will append a clarification to the existing phrase \\\"potentially applicable to any [...] task\\\", so that it will become \\\"potentially applicable to any [...] 
task beyond classification and image processing applications\\\".\\n\\n* Aside from the two papers that Reviewer 4 referenced, all citations can be found in the original bibliography.\\n* [Nov 14 Update]: Both papers are now included in the updated manuscript (text in Section 4, text in Appendix B, as well as entries for comparisons in summary Tables 6 and 7).\"}", "{\"title\": \"Response to Reviewer 4 [Part 4/5]\", \"comment\": \"*** (3) Empirical Contribution ***\\n\\n(3.1) We agree that datasets for specific domains---such as imaging applications---can be especially challenging. However, we would like to point out that the experiments in Section 5 are far from \\\"toy\\\"---in fact, our empirical contribution serves a very specific purpose in the context of existing applications, as well as our theoretical findings. Section 5 (p. 6) positions this very clearly: Empirical work is limited to the *static* domain, including (1) multi-label classification in e.g. [Yeh, AAAI 2017], as well as (2) specific imaging application with convolutional architectures, e.g. [Girdhar, ECCV 2016]. What has *not* been explored at all is the utility of target-embedding in the *temporal* setting---for multivariate sequence data, especially via recurrent architectures and for both regression and classification. We are the first to study this, and we focus on an important application area (i.e. forecasting disease trajectories) with multiple real-world datasets. This domain was carefully selected as a particularly appropriate testbed, due to the fact that medical knowledge in this domain gives us confidence that the requisite prior for TEAs is satisfied---i.e. that variations in targets are driven by a lower-dimensional set of underlying factors (p. 1, 3, and 5); see scientific papers cited (p. 6). These points are all explained (with more detail) in the beginning of Section 5 (pp. 
6-7), as well as a negative example to highlight the importance of the prior (Appendix E.4).\\n\\n(3.2) We also wish to kindly point out that forecasting multi-variate disease trajectories is far from \\\"toy\\\"; in fact, given our emphasis on early diagnosis (i.e. with intentionally limited windows of input), the forecasting task is deliberately set up to be challenging (p. 7)---especially compared with typical time-series prediction problems. The experimental setup is also far from \\\"toy\\\": Given our theoretical analysis, we first (1) verify our findings for the linear models, and then (2) extend our investigation to nonlinear, recurrent models. In addition, we (3) pick apart the sources of gains from both joint and staged training, as well as (4) comparing the alternate variations of TEAs present in existing work. Moreover, we (5) examine the incremental effect of additional norm-based regularization, (6) the sensitivity of TEAs to the strength of prior, as well as (7) the comparative sample complexity of prediction with and without target-embedding. Datasets are carefully chosen to obtain a variety of binary, continuous, and mixed-target settings. Furthermore, each individual outcome is reported across 10 random train-test splits, with extended results reported by timestep in a 10-page section of the appendix---inclusive of a negative example to highlight the importance of correctness of the prior. 
For these reasons, we submit that our experimental approach is highly comprehensive, and achieves both the goal of verifying our theoretical result and that of being the first to demonstrate benefit in the recurrent, multi-variate sequence setting and for both regression and classification---which highlights the further generality of TEAs beyond feedforward instantiations.\"}", "{\"title\": \"Response to Reviewer 4 [Part 3/5]\", \"comment\": \"*** (2) Theoretical Contribution ***\\n\\n(2.1) We would like to point out that a theoretical analysis of the linear setting is *not* of limited use---especially in a setting with no precedent. Our objective is to distill the essence of the target-embedding idea, such that we can isolate its theoretical benefit. To do so, a rigorous analysis deliberately and necessarily begins with the simplest incarnation of TEAs: the linear case. This is intentional and commonplace: As a matter of fact, (1) the seminal work on the generalization benefit of multi-task learning using Rademacher complexity operates in the linear setting [Maurer, JMLR 2006], and (2) the most recent landmark analysis by uniform stability is also performed in the linear setting [Liu, TPAMI 2016]. (3) For supervised feature-embedding as an auxiliary task, the first such analysis of generalization from stability is done in the linear setting [Le, NIPS 2018]. Similarly, (4) for multi-label classification, the generalization properties of label-embedding with norm-based regularization are first analyzed using the ERM framework---in the linear setting [Yu, ICML 2014].\\n\\n(2.2) The significance of our analysis is that it allows us to *unambiguously* interpret its benefit as a regularizer. Now, it is often easy to argue on an intuitive level for the \\\"regularizing\\\" effect of some such additional loss term. 
For example, [Mostajabi, CVPR 2018] also refers to the \\\"regularizing\\\" effect of their label autoencoder (again, not trained jointly), but this expression is used loosely and intuitively, without (1) *identifying* or (2) *quantifying* the precise mathematical mechanism. In stark contrast, in our analysis the complete loss can be summarized and rewritten as $L(\\\\mathbf{\\\\Theta})=L_{p}(\\\\mathbf{\\\\Theta})+R_{1}(\\\\mathbf{\\\\Theta})+R_{2}(\\\\mathbf{\\\\Theta})$---that is, a combination of the primary prediction loss plus additional regularization, where $R_{1}(\\\\mathbf{\\\\Theta})=\\\\frac{1}{N}\\\\sum_{m=1}^{M}\\\\ell_{r}(\\\\mathbf{\\\\Theta}\\\\mathbf{W}_{e}\\\\mathbf{b}_{m},\\\\mathbf{b}_{m})$ and $R_{2}(\\\\mathbf{\\\\Theta}) = \\\\frac{1}{N}\\\\sum_{n=1}^{N}\\\\ell_{r}(\\\\mathbf{\\\\Theta}\\\\mathbf{W}_{e}\\\\mathbf{y}_{n},\\\\mathbf{y}_{n})-\\\\frac{M}{N}L^{B}_{r}(\\\\mathbf{\\\\Theta})$. In particular, the proof of Theorem 1 depends critically on $R_{1}(\\\\mathbf{\\\\Theta})$ to achieve the upper-bound on instability. As a result, (1) this precisely *identifies* the regularizer in question, while (2) our uniform stability result *quantifies* the generalization benefit. This fact is already implicit in the analysis of Appendix A, but we can certainly mention it explicitly at the end of Section 3 for better emphasis.\\n\\n(2.3) This gives important theoretical insight into the empirical gains from TEAs. Significantly, we establish the fact that a tight generalization bound is obtained with absolutely nothing but the simple addition of the *joint* reconstruction loss (i.e. no additional unlabeled data, no explicit norm-based regularization, etc). Now, of course domain-specific application papers (cited previously in this response) have employed similar ideas in contexts of varying complexity, with potentially sophisticated architectures. Their *empirical* objective is to achieve SOTA in their specific domain (e.g. 
2D image to 3D voxel prediction, segmentation, etc). In contrast, our *theoretical* objective as a first analysis of the TEA framework itself is---importantly---to *remove* the confounding effects of such tailored models (e.g. pretrained models, specific nonlinearities, custom losses, etc.) in order to distill the crux of the benefit in terms of generalization. (Empirically, of course, our experiments do cover both linear and nonlinear cases; see next section).\"}", "{\"title\": \"Response to Reviewer 4 [Part 2/5]\", \"comment\": \"*** (1) Context in Existing Work ***\\n\\n(1.1) We are happy to mention the additional image segmentation applications you reference, since they broadly relate to the notion of label-embedding. However, their focus is *very* different than ours in formalizing and analyzing TEAs. Our clearly specified focus is on the purely *supervised* setting (see title), where the embedding component is trained *jointly* in the TEA objective (see Equation 2)---this is fundamental to our theoretical result. Unlike the existing works we already cite, both of these additional works operate squarely outside of this setting. First, [Dalca, CVPR 2018] focuses entirely on the *unsupervised*, unpaired setting. We agree that it will go nicely in Table 6 as related work on target-embedding, but under \\\"unsupervised / unpaired\\\" in the \\\"setting\\\" column, which is very different. Second, both models in [Mostajabi, CVPR 2018] are *not jointly* trained at all. In particular, for the first model (Figure 1), \\\"the decoder parameters are frozen\\\", and \\\"parameters internal to the decoder are never updated\\\" after the initial phase (p. 4). So, this corresponds to a variant of the \\\"No Joint\\\" sensitivity we already investigate (plus an additional direct-prediction path). Similarly, their second model (Figure 4) regresses embeddings instead, but again the \\\"encoder parameters are ... frozen\\\" (p. 4). 
The benefit they observe thus derives solely from staged training. This will go nicely in Table 7, but certainly not under \\\"joint training\\\". In stark contrast, the fact that all components in TEA are jointly trained is central to our argument in deriving uniform stability for the generalization. (Of course, in our source-of-gains analysis, we also empirically demonstrate that the combination of joint and staged training performs the best).\\n\\n(1.2) That said, we actually agree with your high-level sentiment: It is also our understanding that the general concept of target-embedding has previously been used in practical applications. This is what we desire. In fact, the empirical efficacy of existing domain applications is precisely what motivates our theoretical contribution: To \\\"provide a unified perspective on recent applications\\\" (p. 1) of this idea to problems in multi-label classification, e.g. [Yeh, AAAI 2017] and 3D voxel prediction, e.g. [Girdhar, ECCV 2016]. These applications that we already cite (plus more in Table 7) are jointly trained in the supervised setting. Having them is good, and the more empirical results there are, the more relevant is our theoretical contribution. After all, while the effectiveness of the general target-embedding idea has been experimentally observed in a couple of application domains, there has not been any attempt (at all) at rigorous mathematical justification such as ours (Section 3). In this sense, our work is positioned analogously to [Le, NIPS 2018] and [Liu, TPAMI 2016], who respectively (theoretically) quantify the generalization benefit of multi-task learning and supervised feature-embedding---both in light of the fact that there *is* evidence that these paradigms are of use.\\n\\n(1.3) Lastly, we would like to emphasize that we don't claim to be the first to apply target-embedding. 
Even for TEAs, we explicitly mention throughout the paper [Yeh, AAAI 2017] and [Girdhar, ECCV 2016], which can be interpreted as specific instantiations of the general framework (p. 5, 6, and 8). In fact, we specifically point out that their joint-training variant corresponds to the TEA(L) setting of the framework (p. 8). (For greater clarity, we will add an earlier citation in Section 2 in the description of staged training). There is an important distinction between TEAs *per se*, versus specific *architectural* instantiations present in different application domains. We are the first to rigorously analyze the generalization benefit of the former (our theoretical novelty), and we are also the first to extend the latter to recurrent, multi-variate sequence forecasting and for both regression and classification (our empirical novelty). We will duly amend some of our language in the introductory and concluding remarks to more clearly and accurately position our contributions. Moreover, if there exist further relevant applications in this setting, we would be glad to mention them in support of our thesis.\"}", "{\"title\": \"Response to Reviewer 4 [Part 1/5]\", \"comment\": \"Thank you for your thoughtful comments, and for referring to further papers applying target-embedding to image segmentation. Mainly, you point out that (1) target-embedding has previously been proposed and used in practice. In addition, you mention that (2) the theory developed is limited in practice due to the linear setting for analysis, and that (3) the experiments are \\\"toy\\\", considering that imaging applications are \\\"more challenging\\\".\\n\\nWe address each in turn.\\n\\nWe believe the specific positioning and contribution of the paper may not have been the most clear. Therefore we start by emphasizing our focus, in light of your comments. 
(1) First, we motivate and formalize TEA as a *general* framework, which \\\"provide[s] a unifying perspective on recent applications of autoencoders to label-embedding\\\" in disparate domains (p. 1). (2) This sets the stage for our theoretical contribution, which is to provide a *guarantee of generalization* for linear TEAs by demonstrating uniform stability. This allows us to distill its benefit in the simplest setting, removing any confounding factors from domain-specific architectures. (3) Our empirical novelty (in addition to verifying our claim for the linear case) is to extend validation of this approach to the *temporal* domain---for multi-variate sequence forecasting with recurrent architectures. While we make the point that certain prior works can be interpreted as specific instantiations of TEAs in the *static* setting, we are the first to do so in the recurrent, sequential setting and for both regression and classification---underscoring the further generality of this approach beyond feedforward instantiations.\"}", "{\"title\": \"Response to Reviewer 3\", \"comment\": \"Thank you for your thoughtful comments and suggestions.\\n\\nWe agree that the field of *extreme* multi-label classification [3] is relevant as well, especially in the context of our discussion for Table 7. We also agree that the probabilistic methods in [1] and [2] present alternative approaches with advantages in performance and use cases; they will provide more context in the related work discussion. Finally, we also find [4] worth referencing in light of the setting for our experiments. We thank you for pointing out these works: we will reference [1], [2], [3], and [4] in our discussion of related work.\\n\\n[1] Piyush Rai, Changwei Hu, Ricardo Henao, and Lawrence Carin. Large-Scale Bayesian Multi-Label Learning via Topic-Based Label Embeddings. In NIPS, 2015.\\n\\n[2] Ashish Kapoor, Raajay Viswanathan, and Prateek Jain. Multilabel Classification using Bayesian Compressed Sensing. 
In NIPS, 2012.\\n\\n[3] Kush Bhatia, Himanshu Jain, Purushottam Kar, Manik Varma, and Prateek Jain. Sparse Local Embeddings for Extreme Multi-label Classification. In NIPS, 2015.\\n\\n[4] Yan Yan, Glenn Fung, Jennifer G. Dy, and Romer Rosales. Medical coding classification by leveraging inter-code relationships. In KDD, 2010.\"}", "{\"title\": \"Response for Reviewer 2 [Part 5/5]\", \"comment\": \"(3.5.a) \\\"More details about training and inference are needed\\\":\\n\\nWe agree that expanding this subsection with more details will enhance clarity of exposition. (Bear in mind that all of these points are detailed in Algorithm 1). We can update the middle portion of the \\\"Training and Inference\\\" section in the main manuscript as follows:\\n\\n---\\nTraining occurs in three stages: In the first stage, the autoencoder is trained (to learn representations); in this stage, the parameters of the encoder and decoder are trained on the reconstruction loss. In the second stage, the prediction arm is trained to regress the learned embeddings (generated by the encoder); in this stage, only the parameters of the predictor are trained (on the latent loss), and the parameters of the encoder (and decoder) are frozen. Finally, in the third stage, all three components are jointly trained on both the prediction loss and reconstruction loss; in this stage, the parameters of the encoder, predictor, and shared decoder model are all trained. Note that during training, the shared forward model receives two types of latents as input: encodings of true targets (to compute the reconstruction loss), as well as encodings predicted from features (to compute the prediction loss).\\n---\\n\\n(3.5.b) \\\"What is the effect of the order of training? What will happen if I change it?\\\":\\n\\nGiven Algorithm 1 and the more detailed explanation in answer (3.5.a), it should now become clear that the order cannot be changed. 
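As an illustration of this three-stage schedule, here is a minimal numpy sketch for the linear case. The synthetic data, dimensions, learning rate, and plain gradient-descent updates are illustrative choices made here for concreteness, not the implementation or hyperparameters from the paper:

```python
import numpy as np

# Illustrative sketch of the three-stage TEA schedule in the linear case.
# All shapes and hyperparameters below are arbitrary choices for this demo.
rng = np.random.default_rng(0)
N, dx, dy, dz = 200, 5, 8, 3

# Synthetic data satisfying the TEA prior: targets driven by low-dim factors.
Z_true = rng.normal(size=(dz, N))
X = rng.normal(size=(dx, dz)) @ Z_true + 0.01 * rng.normal(size=(dx, N))
Y = rng.normal(size=(dy, dz)) @ Z_true + 0.01 * rng.normal(size=(dy, N))

E = 0.1 * rng.normal(size=(dz, dy))  # encoder:   y -> z
D = 0.1 * rng.normal(size=(dy, dz))  # decoder:   z -> y (shared forward model)
P = 0.1 * rng.normal(size=(dz, dx))  # predictor: x -> z

mse = lambda A, B: ((A - B) ** 2).mean()
lr = 0.05

# Stage 1: train the autoencoder (encoder + decoder) on the reconstruction loss.
for _ in range(2000):
    R = D @ E @ Y - Y                              # reconstruction residual
    D -= lr * 2 / (dy * N) * R @ (E @ Y).T
    E -= lr * 2 / (dy * N) * D.T @ R @ Y.T

# Stage 2: train only the predictor to regress the frozen encoder's embeddings.
Z = E @ Y
for _ in range(2000):
    P -= lr * 2 / (dz * N) * (P @ X - Z) @ X.T     # latent loss gradient

# Stage 3: jointly train all three components on prediction + reconstruction.
# The shared decoder sees both kinds of latents: E @ Y and P @ X.
loss_before_joint = mse(D @ P @ X, Y)
for _ in range(2000):
    Rp = D @ P @ X - Y                             # prediction residual
    Rr = D @ E @ Y - Y                             # reconstruction residual
    D -= lr * 2 / (dy * N) * (Rp @ (P @ X).T + Rr @ (E @ Y).T)
    P -= lr * 2 / (dy * N) * D.T @ Rp @ X.T
    E -= lr * 2 / (dy * N) * D.T @ Rr @ Y.T

loss_after_joint = mse(D @ P @ X, Y)
print(loss_before_joint, loss_after_joint)
```

Note how Stage 2 depends on the encoder already trained in Stage 1, which is exactly why the ordering discussed in (3.5.b) cannot be swapped.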
Stage 2 requires the encoder to *already* be trained to provide the requisite embeddings, so it must be preceded by Stage 1. Therefore the only relevant possibilities are: (1) Stages 1-2 by themselves, without Stage 3; this is simply the \\\"No Joint\\\" setting that we conduct on all experiments. (2) Stage 3 by itself, without Stages 1-2; this is simply the \\\"No Staged\\\" setting that we conduct on all experiments. (3) None of the stages altogether; this is simply the \\\"Neither\\\" setting that we conduct on all experiments. (4) Stages 1, 2, and 3 in order; this is simply Algorithm 1 itself.\\n\\nFinally, the only remaining possibility is to have Stage 3 precede Stages 1-2. This makes little sense, since when the reconstruction loss is trained by itself it is likely to \\\"undo\\\" the result of joint training. However, for thoroughness, we have run an additional sensitivity experiment (using UKCF) to confirm this. The following corresponds to the left half of Table 4, with the additional column on the right (and the other columns labeled to reflect the training stages). Verifying our intuitions, the setting \\\"3-1-2\\\" behaves almost identically to the setting \\\"1-2\\\".\\n\\nTable A: Summary performance by training stages for TEA on linear model with UKCF. Column headers indicate the sequence of training stages executed. Note that \\\"1-2-3\\\" simply corresponds to Algorithm 1. PRC and ROC metrics are reported separately for variables representing infections (I) and comorbidities (C).\\n---------------------------------------------------------------------------------------\\n           None            1-2             3               1-2-3           3-1-2\\n---------------------------------------------------------------------------------------\\nPRC(I)     0.347+/-0.085   0.402+/-0.026   0.431+/-0.031   0.450+/-0.035   0.404+/-0.027\\nPRC(C)     0.433+/-0.083   0.507+/-0.040   0.543+/-0.054   0.559+/-0.060   0.512+/-0.042\\nROC(I)     0.710+/-0.072   0.747+/-0.022   0.764+/-0.022   0.767+/-0.026   0.749+/-0.022\\nROC(C)     0.700+/-0.075   0.744+/-0.038   0.766+/-0.038   0.767+/-0.042   0.747+/-0.037\\n---------------------------------------------------------------------------------------\\n\\n* Aside from the two papers that Reviewer 4 referenced, all citations can be found in the original bibliography.\"}", "{\"title\": \"Response for Reviewer 2 [Part 4/5]\", \"comment\": \"(3.3.c) \\\"What is the advantage of the proposed framework over these existing work\\\":\\n\\nPardon the repetition: we would like to gently reiterate that our work serves a very different purpose than existing work. We are *not* proposing a model from scratch, to which \\\"other\\\" works can be viewed as \\\"competitors\\\". To the contrary, the empirical efficacy of existing domain applications of target-embedding is *precisely* what motivates our main theoretical contribution: To \\\"provide a unified perspective on recent applications\\\" (p. 1) of this idea, which allows us to examine *why* it works. 
Again, several existing works can be viewed as specific instantiations of TEAs; see answer (3.1). Our mission is to provide *framework-level* analytical insight into why target-embedding works---not to compete against anything within a specific application domain; that would be an entirely different pursuit. There is an important distinction between studying TEAs *per se*, versus specific *architectural* instantiations present in different application domains.\\n\\nWe are the *first* work to rigorously analyze the generalization benefit of the former (our theoretical novelty). In addition to verifying the linear case with experiments, we are also the *first* to extend the latter to recurrent, multi-variate sequence forecasting and both regression and classification (our empirical novelty)---which highlights the further generality of this approach beyond feedforward instantiations. The focus is on isolating the benefit of TEAs, not on specific architectural novelties that may boost performance for various datasets and domains. That said, we do in fact experiment with a multitude of variations (i.e. TEA, the indirect TEA(L), the hybrid TEA(LP), the \\\"No Joint\\\" setting, as well as the \\\"No Staged\\\" setting). These reflect the *framework-level* variation in the general idea of \\\"target-embedding\\\" present in existing work. For instance, we note (p. 8) that the TEA(L) variant corresponds to the framework-level setup in [Girdhar, ECCV 2016] and [Yeh, AAAI 2017], which are jointly learned via the reconstruction loss and a *latent* loss instead---by regressing learned embeddings during the joint training stage (Figure 4(d), in Appendix D). Then, we duly show the performance of this TEA(L) setting in comparison with all the other settings in each experiment (see Tables 4-5, as well as Appendix E). Please kindly also refer to answer (3.3.a).\\n\\n(3.4) \\\"The source of gain [...] 
should contain more explanations and analysis\\\":\\n\\nThank you for the suggestion. Yes, we agree that a more detailed explanation will improve clarity of exposition. This was originally kept concise due to space limitation, but we can expand the explanation with more detail as follows: \\n\\n---\\nThere are two (related) interpretations of TEAs. First, we studied the *regularization* view in Section 3; this concerns the benefit of joint training using both prediction and reconstruction losses. Ceteris paribus, we expect performance to improve purely by dint of the jointly trained TEA objective. Second, the *reduction* view says that TEAs decompose the (difficult) prediction problem into two (smaller) tasks: the autoencoder learns a compact representation $\\\\mathbf{z}$ of $\\\\mathbf{y}$, and the predictor learns to map $\\\\mathbf{x}$ to $\\\\mathbf{z}$. This suggests a simpler possibility---that of separately training the autoencoder and predictor arms one after the other in two stages. Now, our presentation of TEAs (Section 2 and Algorithm 1) is a combination of both ideas: All three components are jointly trained in a third stage following the first two, similar to [Girdhar, ECCV 2016]. Our goal is now to account for the improvement in performance due to these two sources of benefit; Table 4 does so for the linear case (on UKCF), and Table 5 for the more general nonlinear case (on all datasets). The \\\"No Joint\\\" setting isolates the benefit from staged training only. This is analogous to basic unsupervised pretraining (though using targets), and corresponds to omitting the final joint training stage in Algorithm 1. The \\\"No Staged\\\" setting isolates the benefit from joint training only (without pretraining the autoencoder or predictor), and corresponds to omitting the first two training stages in Algorithm 1. The \\\"Neither\\\" setting is equivalent to vanilla prediction without leveraging either of the advantages from target-representation learning (REG). We observe that while both sources of benefit are individually important for performance, neither setting performs quite as well as when they are combined. See Appendix E.1-2 for extended results.\\n---\\n\\nWe agree that the source of gains is important for understanding the joint and staged training aspects of TEAs. Bear in mind that, since our goal is to thoroughly investigate TEAs overall, we also need to allow adequate space to cover all of the 7 considerations listed in the final paragraph of answer (3.3.a).\"}", "{\"title\": \"Response for Reviewer 2 [Part 3/5]\", \"comment\": \"(3.3.a) \\\"No state-of-the-art models are used in experiments\\\":\\n\\nFirst, we would like to gently reiterate that our focus is on developing a *framework-level* analysis to deepen our understanding of the theoretical underpinnings of TEAs (see points (1), (2), and (3) in the first paragraph of our response). In particular, we are *not* in search of SOTA for any specific application domain. Quite to the contrary, our objective is to distill the isolated benefit of TEAs using a *minimal* setting with as few confounding factors as possible. Therefore---unlike in the variety of more application-focused papers we cite---we intentionally refrain from bolting on any additional design choices (e.g. correlation analyses, fine-tuning pretrained models, custom loss functions, and whatever else could push SOTA for each dataset and each domain). After all, we want evidence of improvement with absolutely *nothing but* the simple addition of the joint reconstruction loss.\\n\\nGiven our theoretical findings, our empirical contribution serves two purposes (detailed in Section 5). First, we verify our claims for the linear case (corresponding to our theoretical setting). 
Second, we extend TEAs to the recurrent, sequential multi-variate setting for both regression and classification. Since the temporal setting has never been explored with TEAs, a standard and popular architecture such as RNNs with GRUs is appropriate. Recall our focus; we don't want other factors getting in the way of this foray. We are focusing on studying the *framework* and its potential generalizability beyond feedforward instantiations, not on excessively optimizing against *specific* architectures for SOTA applications. In this sense, our work is positioned analogously to [Le, NIPS 2018] and [Liu, TPAMI 2016], who respectively investigate---on a *framework* level---the generalization benefit of multi-task learning and supervised feature-embedding.\\n\\nIn fact, should we instead focus on SOTA models, we would be unable to perform the methodical sensitivity analyses in Section 5. After all, in addition to (1) verifying the linear case, and (2) extending to recurrent models, we also wish to (3) pick apart the sources of gains from both joint and staged training, as well as (4) comparing the alternate variations of the TEA framework present in existing work. Moreover, we (5) examine the incremental effect of additional norm-based regularization, (6) the sensitivity of TEAs to the strength of prior, as well as (7) the comparative sample complexity of prediction with and without target-embedding. These careful analyses would all become noisy and intractable should we introduce multiple confounding factors in the form of SOTA models and their various components and configurations.\\n\\n(3.3.b) \\\"It's very likely that some existing work has already adopted the idea\\\":\\n\\nWe completely agree, and we *already* cite all the ones we are aware of. Please kindly also refer to answer (3.1). For instance, as noted throughout the paper (p. 5, 6, and 8), the application work of e.g. 
[Girdhar, ECCV 2016] and [Yeh, AAAI 2017] can be interpreted as specific instantiations of TEAs (in the static setting). Please see also the works in Table 7. We also give extensive discussions (and summary tables) of how they relate and compare (see also the comprehensive survey in Appendix B.2). Having these is good, and the more empirical results there are, the more they support the relevance of our theoretical contribution. What has *not* been done by existing work is to explore the generalizability of this idea to the *temporal* setting; again, kindly refer to answer (3.1).\"}", "{\"title\": \"Response for Reviewer 2 [Part 2/5]\", \"comment\": \"(3.2) \\\"Models used are relatively simple\\\":\\n\\nSince (3.3.a) poses a very similar question for *experiments* (and similarly asks for more advanced models), we assume here that your comment is referring to our *theoretical* result.\\n\\nWe would like to point out that a theoretical analysis of the linear setting is *not* of limited use---especially in a setting with no precedent. Our objective is to distill the essence of the target-embedding idea, such that we can isolate its theoretical benefit. To do so, a rigorous analysis deliberately and necessarily begins with the simplest incarnation of TEAs: the linear case. This is intentional and commonplace: As a matter of fact, (1) the seminal work on the generalization benefit of multi-task learning using Rademacher complexity operates in the linear setting [Maurer, JMLR 2006], and (2) the most recent landmark analysis by uniform stability is also performed in the linear setting [Liu, TPAMI 2016]. (3) For supervised feature-embedding as an auxiliary task, the first such analysis of generalization from stability is done in the linear setting [Le, NIPS 2018]. 
Similarly, (4) for multi-label classification, the generalization properties of label-embedding with norm-based regularization are first analyzed using the ERM framework---in the linear setting [Yu, ICML 2014].\\n\\nThe significance of our analysis is that it allows us to *unambiguously* interpret its benefit as a regularizer. Now, it is often easy to argue on an intuitive level for the \\\"regularizing\\\" effect of some such additional loss term. For example, [Mostajabi, CVPR 2018] also refers to the \\\"regularizing\\\" effect of their label autoencoder (again, not trained jointly), but this expression is used loosely and intuitively, without (1) *identifying* or (2) *quantifying* the precise mathematical mechanism. In stark contrast, in our analysis the complete loss can be summarized and rewritten as $L(\\\\mathbf{\\\\Theta})=L_{p}(\\\\mathbf{\\\\Theta})+R_{1}(\\\\mathbf{\\\\Theta})+R_{2}(\\\\mathbf{\\\\Theta})$---that is, a combination of the primary prediction loss plus additional regularization, where $R_{1}(\\\\mathbf{\\\\Theta})=\\\\frac{1}{N}\\\\sum_{m=1}^{M}\\\\ell_{r}(\\\\mathbf{\\\\Theta}\\\\mathbf{W}_{e}\\\\mathbf{b}_{m},\\\\mathbf{b}_{m})$ and $R_{2}(\\\\mathbf{\\\\Theta}) = \\\\frac{1}{N}\\\\sum_{n=1}^{N}\\\\ell_{r}(\\\\mathbf{\\\\Theta}\\\\mathbf{W}_{e}\\\\mathbf{y}_{n},\\\\mathbf{y}_{n})-\\\\frac{M}{N}L^{B}_{r}(\\\\mathbf{\\\\Theta})$. In particular, the proof of Theorem 1 depends critically on $R_{1}(\\\\mathbf{\\\\Theta})$ to achieve the upper-bound on instability. As a result, (1) this precisely *identifies* the regularizer in question, while (2) our uniform stability result *quantifies* the generalization benefit. This fact is already implicit in the analysis of Appendix A, but we can certainly mention it explicitly at the end of Section 3 for better emphasis.\\n\\nThis gives important theoretical insight into the empirical gains from TEAs.
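To make this decomposition concrete, here is a minimal numerical sketch (our own illustration, not code from the paper; all dimensions are invented, and we read L^B_r as the average reconstruction loss over the basis vectors b_m):

```python
import numpy as np

rng = np.random.default_rng(0)
d_y, d_z, N, M = 6, 3, 20, 4          # target dim, latent dim, samples, basis size

Y = rng.normal(size=(N, d_y))         # targets y_1..y_N
B = Y[:M]                             # basis subset {b_m} drawn from the targets
W_e = rng.normal(size=(d_z, d_y))     # linear encoder (target -> latent)
Theta = rng.normal(size=(d_y, d_z))   # linear decoder (latent -> target)

def l_r(y_hat, y):                    # quadratic reconstruction loss
    return float(np.sum((y_hat - y) ** 2))

# sample reconstruction loss: (1/N) sum_n l_r(Theta W_e y_n, y_n)
L_rec = sum(l_r(Theta @ W_e @ y, y) for y in Y) / N
# basis reconstruction loss: L^B_r = (1/M) sum_m l_r(Theta W_e b_m, b_m)
L_B = sum(l_r(Theta @ W_e @ b, b) for b in B) / M

R1 = sum(l_r(Theta @ W_e @ b, b) for b in B) / N   # the term driving Theorem 1
R2 = L_rec - (M / N) * L_B                         # the remainder

# R1 + R2 simply re-expresses the sample reconstruction loss, so
# L = L_p + R1 + R2 is an exact rewriting of prediction + reconstruction.
assert np.isclose(R1 + R2, L_rec)
```

The point of the rewriting is purely interpretive: no new loss term is added; the basis-vector regularizer R1 is surfaced out of the ordinary joint objective.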
Significantly, we establish the fact that a tight generalization bound is obtained with absolutely nothing but the simple addition of the *joint* reconstruction loss (i.e. no additional unlabeled data, no explicit norm-based regularization, etc). Now, of course domain-specific application papers (cited previously in this response) have employed similar ideas in contexts of varying complexity, with potentially sophisticated architectures. Their *empirical* objective is to achieve SOTA in their specific domain (e.g. 2D image to 3D voxel prediction, segmentation, etc). In contrast, our *theoretical* objective as a first analysis of the TEA framework itself is---importantly---to *remove* the confounding effects of such tailored models (e.g. pretrained models, specific nonlinearities, custom losses, etc.) in order to distill the crux of the benefit in terms of generalization. (Empirically, of course, our experiments do cover both linear and nonlinear cases; see next).\"}", "{\"title\": \"Response for Reviewer 2 [Part 1/5]\", \"comment\": \"Thank you for your thoughtful comments. We give answers to each in turn.\\n\\nWe believe the specific positioning and contribution of the paper may not have been the most clear. Therefore we start by emphasizing our focus, in light of your comments. (1) First, we motivate and formalize TEA as a *general* framework, which \\\"provide[s] a unifying perspective on recent applications of autoencoders to label-embedding\\\" in disparate domains (p. 1). (2) This sets the stage for our theoretical contribution, which is to provide a *guarantee of generalization* for linear TEAs by demonstrating uniform stability. This allows us to distill its benefit in the simplest setting, removing any confounding factors from domain-specific architectures. 
(3) Our empirical novelty (in addition to verifying our claim for the linear case) is to extend validation of this approach to the *temporal* domain---for multi-variate sequence forecasting with recurrent architectures. While we make the point that certain prior works can be interpreted as specific instantiations of TEAs in the *static* setting, we are the first to do so in the recurrent, sequential setting and for both regression and classification---underscoring the further generality of this approach beyond feedforward instantiations.\\n\\n(3.1) \\\"Datasets [...] cannot prove the effectiveness of this framework\\\":\\n\\nWe wish to kindly point out that it is actually *not* our objective to prove the effectiveness of this framework *from scratch*. The fact that this general idea works well in a number of (static) settings is already known (and we cite and mention them throughout the paper). In fact, the empirical efficacy of existing domain applications is precisely what motivates our main theoretical contribution: To \\\"provide a unified perspective on recent applications\\\" (p. 1) of this idea, which allows us to first focus on examining *why* it works (our theoretical contribution). As noted throughout the paper (p. 5, 6, and 8), the application work of e.g. [Girdhar, ECCV 2016] and [Yeh, AAAI 2017] can be interpreted as specific instantiations of TEAs in the *static* setting---the latter in the (1) multi-label classification setting (with additional refinements), and the former specifically for (2) voxel prediction with convolutional architectures (under the \\\"indirect\\\" variant). (3) Moreover, the various works on label-space reduction [Table 7] can loosely be considered under the umbrella of target-space embedding, as is (4) the work of [Oktay, T-MI 2018] for image segmentation. 
In that sense, while we unify the essential common thread between these disparate applications under the concept of TEA (Sections 1-2), there is already empirical evidence of the benefit of target-embedding in the commonly considered static setting.\\n\\nNow, what has *not* been empirically explored at all is the utility of target-embedding in the *temporal* setting---for multivariate sequence data, especially via recurrent architectures and for both regression and classification. We are the first to do this, and we find that TEAs generously extend to this setting. This is our empirical contribution (in addition to verifying our claims for the linear case, plus extensive sensitivities). Furthermore, the domain of disease trajectories was specifically and carefully selected as a particularly appropriate testbed, due to the fact that medical knowledge in this domain gives us confidence that the requisite prior for TEAs is satisfied---i.e. that variations in targets are driven by a lower-dimensional set of underlying factors (p. 1, 3, and 5); see scientific papers cited (p. 6). These points are all explained (with more detail) in the beginning of Section 5 (pp. 6-7), as well as a negative example to highlight the importance of the prior (Appendix E.4).\"}", "{\"title\": \"Response for Reviewer 1 [Part 4/4]\", \"comment\": \"*** Section 4 ***\\n\\n(4.1) Compared to [Yeh, AAAI 2017]: The entire line of work on label space reduction for multi-label classification (Table 7, which includes this) actually has a different focus than ours. Their techniques worry about *label reduction*, and about specific loss functions that aim to preserve dependencies within and among spaces; their focus is specifically on object annotation and tagging, and their starting point is *binary relevance*. 
Now, one such approach has successfully employed autoencoders [Yeh, AAAI 2017], which we can interpret as a specific instantiation of TEAs (although with a number of sophisticated additions to solve their problem). In contrast, we operate on a higher level of abstraction, and we focus specifically on autoencoding *in general*---and the regularizing effect of the reconstruction loss on learning the prediction model; our starting point is *direct prediction*, and the output can be of any form (classification or regression). Our contributions are therefore very different. The focus of [Yeh, AAAI 2017] is to solve a specific *applied* problem and use all the tools to do so (e.g. combining with canonical correlation analysis); the same can be said of [Girdhar, ECCV 2016], which can also be seen as an instantiation of TEAs with additional design choices (e.g. fine-tuning a pretrained AlexNet). Instead, we abstract and analyze the interesting common thread between these (seemingly disparate) static application domains: the fact that target-embedding *by itself* is very useful. You are correct; we do not attempt to rehash the static classification application setting, which existing work already does. Our main novelties are the theoretical analysis (which we also verify with experiments using the linear setting), and the empirical extension to the recurrent, sequential setting for both regression and classification (which we are the first to do).\\n\\n(4.2) Compared to [Yu, ICML 2014]: Now, [Yu, ICML 2014] is actually less directly comparable than [Yeh, AAAI 2017]: They cast label-embedding within the generic empirical risk minimization framework---as learning a linear model with a *low-rank constraint*, and specifically focus on missing labels and supporting various losses.
However, we mention this work mainly for two reasons: First, this perspective captures the general intuition of a restricted number of latent factors, which is the prior that makes TEAs work, and is (loosely) analogous to the idea of using an explicit low-dimensional latent space for an intermediate mapping. Second, their analysis admits generalization bounds that are based on norm-based regularization, whereas the significance of our analysis is that we achieve our result without needing it (see Remark 3).\\n\\n*** Section 5 ***\\n\\n(5.1) Different domains: Please kindly refer to answers (2.5) and (3.1), where we mention (as we do in the paper) existing applications to static settings. So, we already have evidence that the idea works for voxel prediction, image segmentation, and object annotation and tagging. For our extension to the temporal setting, the domain of disease trajectories was carefully selected as a particularly appropriate testbed, due to the fact that medical knowledge in this domain gives us confidence that the requisite prior for TEAs is satisfied---i.e. that variations in targets are driven by a lower-dimensional set of underlying factors (p. 1, 3, and 5); see scientific papers cited (p. 6). An interesting potential future direction may be its utility for video frame prediction, although existing methods typically require much more specialized architectures per setting and dataset, which would be beyond the scope of this paper. Finally, note that we do *not* in fact expect TEAs to work just about anywhere: We highlight the central importance of the correctness of the prior in Appendix E.4, where we provide a negative example.\\n\\n* All citations can be found in the original bibliography.\"}", "{\"title\": \"Response for Reviewer 1 [Part 3/4]\", \"comment\": \"*** Section 3 ***\\n\\n(3.1) How to situate the claim: As noted throughout the paper (p.
5, 6, and 8), the application work of both [Girdhar, ECCV 2016] and [Yeh, AAAI 2017] can be interpreted as specific instantiations of TEAs in the *static* setting---the latter in the (1) multi-label classification setting (with sophisticated refinements), and the former specifically for (2) voxel prediction with convolutional architectures (under the \\\"indirect\\\" variant). (3) Moreover, the various works on label-space reduction [Table 7] can loosely be considered under the umbrella of target-embedding. In that sense, while we unify the essential common thread between these disparate applications under the concept of TEA (Sections 1-2), there is already empirical evidence of the benefit of target-embedding in static classification. Our point here is, what has *not* been explored at all is the utility of target-embedding in the *temporal* setting---for multivariate sequence data, especially via recurrent architectures and for both regression and classification. We are the first to do this, and we find that TEAs generously extend to this setting, thereby highlighting the further generality of the approach beyond feedforward instantiations; this is our empirical contribution.\\n\\nWe admit that the phrasing can be clearer. The sentence in question appears in Section 3, where the primary focus is on the theoretical contribution. We completely agree that this \\\"preview\\\" of Section 5 may be confusing to appear so early on---esp. before Section 4, and esp. since the motivation is anyway explained in much greater detail in the beginning of Section 5. Depending on space, we will either remove this from the introduction to Section 3, or (at least) include references to [Girdhar, ECCV 2016] and [Yeh, AAAI 2017].\\n\\n(3.2) Reasonableness of assumptions: Indeed, Section 3 takes off from the line of work from [Bousquet, JMLR 2002], [Liu, TPAMI 2016], and [Le, NIPS 2018], which use similar tools and overall strategy. 
While [Le, NIPS 2018] departs from the generic multi-task analysis of [Liu, TPAMI 2016] via their Assumption 6 (identically, our Assumption 2), we in turn invert our setting from [Le, NIPS 2018] via Assumption 1. Briefly, this assumption will hold with $\\\\varepsilon=0$ as long as the number of independent latent vectors is at least $|\\\\mathcal{Z}|$. This is virtually guaranteed for any compressive autoencoder: since the encoding arm maps into the latent space from a higher-dimensional target space, we expect that if some subset $\\\\{\\\\mathbf{b}_{1},...,\\\\mathbf{b}_{M}\\\\}\\\\subset\\\\{\\\\mathbf{y}_{1},...,\\\\mathbf{y}_{N}\\\\}$ spans $\\\\mathcal{Y}$, then $\\\\{\\\\mathbf{W}_{e}\\\\mathbf{b}_{1},...,\\\\mathbf{W}_{e}\\\\mathbf{b}_{M}\\\\}$ also spans $\\\\mathcal{Z}$ in order to be maximally reconstructive. Note the central importance of the fact that the target space is *higher-dimensional* in our setting for enabling this assumption; picking vectors from feature space instead would be unreasonable (see Remark 2; see also Remark 1 for the mildness of this assumption). In addition, note that while (for simplicity) we take $\\\\varepsilon=0$ in Appendix A, this need not even be the case (see Remark 5). Assumption 2 is identical to Assumption 6 in [Le, NIPS 2018], and we refer to their original exposition (p. 5) for details.
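The spanning argument behind Assumption 1 can be checked numerically in the linear case. A quick illustrative sketch (an invented toy example, not code from the paper): for a generic full-rank linear encoder, any set of targets spanning the target space encodes to a set spanning the latent space.

```python
import numpy as np

rng = np.random.default_rng(1)
d_y, d_z = 8, 3                      # target dim > latent dim (compressive)

W_e = rng.normal(size=(d_z, d_y))    # a generic (full-rank) linear encoder
B = rng.normal(size=(d_y, d_y))      # d_y target vectors, spanning Y almost surely

assert np.linalg.matrix_rank(B) == d_y       # {b_m} spans the target space Y
Z = W_e @ B.T                                # encoded set {W_e b_m}
assert np.linalg.matrix_rank(Z) == d_z       # spans Z, so eps = 0 is attainable
```

The check only illustrates the generic case; a degenerate (rank-deficient) encoder would of course be a poor reconstructor in the first place, which is the intuition behind the mildness of the assumption.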
However for further perspective, consider the simpler (but unreasonable) assumption by way of contrast: that *individually* $L^{B}_{r}(\\\\mathbf{\\\\Theta_{*}})-L^{B}_{r}(\\\\kappa\\\\mathbf{\\\\Theta}^{\\\\prime}_{*}+(1-\\\\kappa)\\\\mathbf{\\\\Theta_{*}})\\n\\\\leq\\na\\\\left[L^{\\\\prime}_{r}(\\\\mathbf{\\\\Theta_{*}})-L^{\\\\prime}_{r}(\\\\kappa\\\\mathbf{\\\\Theta}^{\\\\prime}_{*}+(1-\\\\kappa)\\\\mathbf{\\\\Theta_{*}})\\\\right]$ and $L^{B}_{r}(\\\\mathbf{\\\\Theta^{\\\\prime}_{*}})-L^{B}_{r}(\\\\kappa\\\\mathbf{\\\\Theta}_{*}+(1-\\\\kappa)\\\\mathbf{\\\\Theta^{\\\\prime}_{*}})\\n\\\\leq\\na\\\\left[L^{\\\\prime}_{r}(\\\\mathbf{\\\\Theta^{\\\\prime}_{*}})-L^{\\\\prime}_{r}(\\\\kappa\\\\mathbf{\\\\Theta}_{*}+(1-\\\\kappa)\\\\mathbf{\\\\Theta^{\\\\prime}_{*}})\\\\right]$.\\nAlthough this would serve the same purpose, it is much stronger and eminently unreasonable: Consider that $L_{r}^{\\\\prime}$ is higher at $\\\\mathbf{\\\\Theta}_{*}$ than $\\\\mathbf{\\\\Theta}^{\\\\prime}_{*}$ but $L^{B}_{r}$ is the opposite. In the case of Assumption 2 (identically, Assumption 6 in [Le, NIPS 2018]), neither do we require that the reconstruction losses be similar, nor that their differences be similar individually---but only that the combined increase or decrease between the two points be similar. Note that both sides of the inequality are non-negative due to the loss functions being convex. Again, we refer to their original exposition and supplementary proof for context [Le, NIPS 2018].\"}", "{\"title\": \"Response for Reviewer 1 [Part 2/4]\", \"comment\": \"(2.4) Out of distribution: We assume that the question refers to how we might adapt a learned TEA model to data that the original model was not trained on. Although the focus of our analysis is on \\\"generaliz[ing] well to new samples from the same distribution\\\" (p. 2), we agree that this is an interesting question. 
First, adapting to new data (instead of retraining a model from scratch using the new data) can be done simply by treating the existing model as a pretrained set of parameters: Algorithm 1 can then proceed (on the new data) with parameters initialized with their existing values. Compared with a direct-prediction model, the only difference is (again) the staged and joint training procedure as in Algorithm 1. Of course, specialized techniques (e.g. Bayesian optimization) would have a lot more to say depending on the specific expected shift in the data distribution.\\n\\nSecond, we can also ask the (purely empirical) question of how much each model degrades on out-of-distribution data---without additional training to fine-tune the model to the new data. In this context, we have no reason to expect TEAs to degrade any more or less than the comparators we examine. To quickly test an example, we performed this additional sensitivity (using UKCF) as follows: Each model is trained (only) on male patients and tested (only) on female patients, and vice versa. The average results on held-out samples from in-distribution data and out-of-distribution data allow us to compute the net degradation (i.e. negative difference), which is reported below. While TEAs individually perform better overall on both in-distribution and out-of-distribution samples, none of the *differences* in the specific amount of degradation are statistically significant.\", \"table_a\": \"Summary in-distribution vs. out-of-distribution performance for TEA and comparators with UKCF.
PRC and ROC metrics are reported separately for variables representing infections (I) and comorbidities (C).\\n---------------------------------------------------------------------------------------------\\n          Base           REG            FEA            TEA            F/TEA\\n---------------------------------------------------------------------------------------------\\nROC(I)    0.019+/-0.015  0.020+/-0.014  0.019+/-0.015  0.020+/-0.016  0.017+/-0.014\\nROC(C)    0.025+/-0.015  0.029+/-0.015  0.024+/-0.014  0.013+/-0.020  0.019+/-0.018\\nPRC(I)    0.022+/-0.020  0.018+/-0.021  0.021+/-0.022  0.033+/-0.022  0.027+/-0.022\\nPRC(C)    0.026+/-0.021  0.029+/-0.018  0.026+/-0.019  0.018+/-0.023  0.021+/-0.019\\n---------------------------------------------------------------------------------------------\\n* [Nov 13 Update]: These are preliminary numbers using subsets of the data for quick results as an example. We are re-running a larger-scale version in the meantime, such that the splits and samples are comparable with the rest of the existing experiments, and will post an update.\\n\\n(2.5) How well this works across domains: Please kindly refer to answer (3.1). There is already existing evidence (which we cite extensively) that target-embedding works well across (static) application domains where the target space is high-dimensional, such as image segmentation, voxel prediction, and object annotation; see also answer (1.1). In fact, it is precisely the empirical efficacy of these applications that motivates our theoretical investigation.
(Of course, what has *not* been explored so far is the efficacy in the temporal setting, which is then part of our empirical contribution).\"}", "{\"title\": \"Response for Reviewer 1 [Part 1/4]\", \"comment\": \"Thank you for your insightful comments and questions. We give answers to each in turn.\\n\\nWe believe the specific positioning and contribution of the paper may not have been the most clear. Therefore we start by emphasizing the focus of our work, in light of some of the questions. (1) First, we motivate and formalize TEA as a *general* framework, which \\\"provide[s] a unifying perspective on recent applications of autoencoders to label-embedding\\\" in disparate domains (p. 1). (2) This sets the stage for our theoretical contribution, which is to provide a *guarantee of generalization* for linear TEAs by demonstrating uniform stability. This allows us to distill its benefit in the simplest setting, removing any confounding factors from domain-specific architectures. (3) Our empirical novelty (in addition to verifying our claim for the linear case) is to extend validation of this approach to the *temporal* domain---for multi-variate sequence forecasting with recurrent architectures. While we make the point that certain prior works can be interpreted as specific instantiations of TEAs in the *static* setting, we are the first to do so in the recurrent, sequential setting and for both regression and classification---underscoring the further generality of this approach beyond feedforward instantiations.\\n\\n*** Section 1 ***\\n\\n(1.1) More examples in the introduction: We agree that more immediate examples of high-dimensional output would be useful. We will explicitly include (in para. 2) examples from the related work that we cite, including 3D voxel prediction [Girdhar, ECCV 2016], image segmentation [Oktay, T-MI 2018], as well as any kind of object annotation such as images, text, and music [Table 7]. 
(Note that for the latter, the vast majority of work takes data with bag-of-features vectors as input, although in principle this need not be the case, e.g. with more complicated underlying models). This is in addition to the (already mentioned) temporal setting of multi-variate sequence trajectory forecasting that we focus on as our empirical contribution.\\n\\n*** Section 2 ***\\n\\n(2.1) Why do we expect autoencoding to help generalization: In the case of FEAs, the intuition is that what is reconstructive of features is likely to encode what is discriminative (downstream). Specifically, [Le, NIPS 2018] quantifies the generalization benefit for the linear case, capturing the intuition that reconstruction is \\\"more likely to prefer a more robust model amongst a set of similarly effective models\\\". In the case of TEAs, the intuition is similar but entirely opposite---that what is reconstructive of targets is likely to encode what is predictable (upstream). Here, the inverted setting warrants moving the cross-representability assumption into latent space, and we analogously quantify the generalization benefit (Section 3).\\n\\n(2.2) Importance of choice of reconstruction loss: As noted in Theorem 1 (Section 3), the bound is obtained if the reconstruction loss is strongly convex. The (most commonly used) quadratic loss function is 2-strongly convex. The logistic loss function and hinge loss function are convex, but not strongly convex. However, as noted in [Liu, TPAMI 2016], in statistical learning theory we often assume that $h(x)\\\\in[-U,U]$ for hypothesis $h$ and some positive constant $U$, in which case the loss functions may be strongly convex (e.g. the logistic loss function is then $exp(-U)/4$-strongly convex). In our experiments, the datasets are chosen to obtain a variety of binary, continuous, and mixed-target settings, and we empirically observe the benefit of TEAs in all cases. 
We agree that future research may empirically explore the effect of a wide variety of reconstruction losses---both convex and otherwise.\\n\\n(2.3) How bad is performance if learning is done stagewise: This is a very relevant question, considering the importance of joint training to obtaining uniform stability. We actually examined this as part of our source-of-gains analysis (Section 5). For every setting of our experiments, we performed the \\\"No Joint\\\" setting (i.e. only first two stages of Algorithm 1 are performed like you mentioned, skipping the joint training stage), as well as the \\\"No Staged\\\" setting (i.e. only the final joint training stage is performed, skipping the (pre)-training stages). In all cases, both linear and nonlinear (Tables 4-5, and more detail in Appendix E), we observe that both sources of benefit are important for performance: neither setting performs quite as well as when both are combined (p. 8). In other words, training stagewise is still better than having no target-autoencoding at all, but not as good as when combined with joint-training. (Finally, as expected the \\\"Neither\\\" setting---equivalent to vanilla prediction---performs the worst).\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #4\", \"review\": \"This paper examines target-embedding autoencoders (TEAs) in theory and practice. TEAs autoencode the output (rather than input) space and find a mapping from the input to the latent representation of the output. The forward pass of the decoder (for the output space) is shared by the input-to-output computation.\\n\\nTarget-embedding autoencoders (TEAs) have previously been proposed and used in practice (though not necessarily by the \\\"TEA\\\" name). 
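For concreteness, a minimal sketch of the shared-decoder structure just described (my own reading with invented dimensions and linear maps, not code from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
d_x, d_z, d_y = 5, 3, 12             # feature, latent, target dimensions

W_p = rng.normal(size=(d_z, d_x))    # predictor arm: features -> latent
W_e = rng.normal(size=(d_z, d_y))    # encoder arm:   targets  -> latent
Theta = rng.normal(size=(d_y, d_z))  # decoder:       latent   -> targets

x = rng.normal(size=d_x)
y = rng.normal(size=d_y)

y_from_x = Theta @ (W_p @ x)         # prediction path (used at test time)
y_from_y = Theta @ (W_e @ y)         # reconstruction path (training only)
# the single decoder Theta is shared by both paths

assert y_from_x.shape == (d_y,) and y_from_y.shape == (d_y,)
```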
The paper's presentation is confusing on this matter, as it claims to be the first to \\\"motivate and formalize\\\" TEAs; I do not believe it is appropriate to claim such a contribution in light of prior work. [Girdhar et al.] clearly utilizes a target-embedding autoencoder (see [Girdhar et al.] Figure 2). In addition, more recent published work clearly utilizes TEAs (though not named as such) as the centerpiece of their approaches. See, for example:\\n\\n[A] Adrian V. Dalca, John Guttag, Mert R. Sabuncu. Anatomical Priors in Convolutional Networks for Unsupervised Biomedical Segmentation. CVPR, 2018.\\n\\n[B] Mohammadreza Mostajabi, Michael Maire, Gregory Shakhnarovich. Regularizing Deep Networks by Modeling and Predicting Label Structure. CVPR, 2018.\\n\\nFigure 2 of [A] and Figure 1 of [B] both clearly depict applying target-embedding autoencoders on semantic image segmentation problems. [B] operates in the same supervised representation-learning setting proposed here. Notably, [B] utilizes staged training -- learning the autoencoder first -- as discussed in Section 2 of the submitted paper, and finds that to be important for achieving a regularization effect.\\n\\nThe real applications explored by [A] and [B] are perhaps more challenging than the datasets used in experiments here. The concluding sentence of the paper, \\\"Target-representation learning is potentially applicable to any high-dimensional prediction task, and exploring its utility for specific domain-architectures may be a practical direction for future research\\\" should be changed -- prior work has already successfully utilized TEAs in the specific domain of image segmentation.\\n\\nGiven that the paper has missed (not cited) highly related published work that applies TEAs in practice, a rewrite of Section 4 is required. In the appendix, Table 6, Table 7 and Section B.1 also need significant updates. The proposed approach is no longer a unique entry in Table 6 or 7 -- e.g.
[B] already contributed \\\"autoencoder component as regularization for learning predictor\\\" (Table 7). Additionally, toy experiments in Section 5 appear less significant a contribution when multiple full-scale systems already employ TEAs.\\n\\nThis paper's theoretical analysis does appear to set it apart from prior work. However, theorems are developed for an extremely limited context (linear TEAs) and it is unclear whether or how they might extend to practical use cases (i.e. TEAs that are nonlinear, deep neural networks).\\n\\n---\\n\\nThe extensive author response and updated paper address many of my original concerns. I have updated my overall rating.\"}", "{\"rating\": \"8: Accept\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #3\", \"review\": \"This is an extremely well-written and well-motivated paper. The idea of target-embedding autoencoders is extremely relevant for problems where the dimension of the label space is as large (or larger) than the dimension of the input features. The experiments are thorough, the theoretical guarantees are extremely well thought of and derived. The applications to modelling the progression of cystic fibrosis and Alzheimer's are extremely useful and timely. 
I vote for a strong accept for this paper.\\n\\nI would like to see some references to the extreme multi-label classification problems (http://manikvarma.org/downloads/XC/XMLRepository.html) and some of the other probabilistic approaches attempted in this domain (please see https://papers.nips.cc/paper/5770-large-scale-bayesian-multi-label-learning-via-topic-based-label-embeddings and the references and citations).\"}", "{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"1. Summary: In this paper, the authors proposed a Target-Embedding Autoencoder (TEA) model for supervised representation learning. Different from the traditional feature embedding autoencoder model, TEA tries to learn a compact latent representation that can reconstruct the target vector. Hypothetically, this model should be especially useful when the target vector has a much higher dimension than the feature vector. The authors analyzed and proved some characteristics of this framework and conducted empirical experiments on three datasets to prove its effectiveness.\\n2. Overall assessment: The motivation of this paper is well justified. It's easy to follow and fun to read, even for a person who is not an expert in this area, like me. However, there still exist some problems in this paper. It needs more improvement to get published in a competitive conference like ICLR.\\n3. Comments:\\n3.1 Datasets used in this paper cannot fully prove the effectiveness of this framework. These datasets are all from very similar domains. The dimension of target vectors is comparable to that of feature vectors.
In my view, it's necessary to test on more diverse types of datasets to prove the usefulness of a model, especially if it is a general framework like TEA.\\n3.2 Models used in this paper are relatively simple. Demonstrating the performance of TEA on more advanced models and more difficult tasks could deliver more insights to the community.\\n3.3 No state-of-the-art models are used in experiments. It's very likely that some existing work has already adopted the idea of target embedding. There also exists much other work dealing with the high-dimensional target vector problem. How do these models perform? What is the advantage of the proposed framework over this existing work?\\n3.4 The source of gain part on page 8 should contain more explanations and analysis. This part is one of the most important parts of this paper. It can provide quite valuable insights to readers. I hope the authors can expand it.\\n3.5 More details about training and inference are needed. The authors only use a few sentences to describe their three-stage training process. I still have some questions left after reading it, such as: how do you train the shared parts in TEA? Do you update their parameters in all stages? What is the effect of the order of training? What will happen if I change it?\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This work introduces the idea of target embedding autoencoders for supervised prediction, designed to learn intermediate latent representations jointly optimized to be both predictable from features and predictive of targets. This is meant to help with generalization and has certain theoretical guarantees.\\n\\nIt is an interesting problem setting to consider where Y is high dimensional instead of X.
More examples of this would be useful to provide in the intro. I think this is crucial to understand where this method might be useful. \\n\\nFigure 1 is super informative and very nice!\", \"section_2\": \"Why do we expect that this paradigm of autoencoder-based regularization \\u201cgeneralizes\\u201d better?\\nI like the explicit and honest discussion of prior work in this section. \\nOne question is how important the choice of reconstruction loss function - L2, vs max-likelihood Gaussian, vs L1, vs cross entropy, etc. - is for performance?\", \"another_question\": \"how bad is performance if the learning is done stagewise - first the Y-Z-Y^ representation is learned and then the X->Z predictor is learned?\\nIf something is out of distribution, how easy are TEA-based learners to finetune?\\nOverall the idea seems reasonable - if the targets have some common set of factors, just predict those instead of predicting the full target value, which might be harder to get right. It\\u2019s just a question of whether this holds true in many domains and how well this reconstruction loss generalizes across problems.\", \"section_3\": \"\\u201cWe have noted that TEA components can in principle be instantiated by any architecture. Does its benefit extend beyond the commonly-studied domain of static classification?\\u201d -> not clear what this means? Does this mean this algorithm has been proposed before, or is it that it can ALSO work on non-static classification tasks? Not clear how to situate this claim\\n\\nThe theoretical section seems to follow largely from Le et al, but with important distinctions on the dimensionalities of the various spaces involved. I wonder if the authors can comment on how often Assumptions 1 and 2 are actually satisfied?\", \"related_work\": \"Is the main difference between Yu 2014 and this just in the norm-based regularization? I don\\u2019t think so; can this be made more clear? 
This seems also fairly related to Yeh; is it just a generalization of that paradigm? Or is there more to it? In light of the contribution of Yeh, this seems like a slightly more marginal contribution? Are the main points of contribution the theoretical analysis and the extension of the experiments to sequence data rather than static classification?\\n\\nThe results do seem to show a significant benefit as compared to FEA or base models. It also seems like this is applicable across multiple disease datasets. Do the authors think that this could be applicable to other domains altogether? Would it be quick to run a comparison on these?\\n\\nGenerally this seems like a well-grounded and meaningful contribution with many improvements. Would be curious to see applications to other datasets and also some improvements/clarifications noted above?\"}" ] }
BJlQtJSKDB
Watch the Unobserved: A Simple Approach to Parallelizing Monte Carlo Tree Search
[ "Anji Liu", "Jianshu Chen", "Mingze Yu", "Yu Zhai", "Xuewen Zhou", "Ji Liu" ]
Monte Carlo Tree Search (MCTS) algorithms have achieved great success on many challenging benchmarks (e.g., Computer Go). However, they generally require a large number of rollouts, making their applications costly. Furthermore, it is also extremely challenging to parallelize MCTS due to its inherent sequential nature: each rollout heavily relies on the statistics (e.g., node visitation counts) estimated from previous simulations to achieve an effective exploration-exploitation tradeoff. In spite of these difficulties, we develop an algorithm, WU-UCT, to effectively parallelize MCTS, which achieves linear speedup and exhibits only limited performance loss with an increasing number of workers. The key idea in WU-UCT is a set of statistics that we introduce to track the number of on-going yet incomplete simulation queries (named as unobserved samples). These statistics are used to modify the UCT tree policy in the selection steps in a principled manner to retain an effective exploration-exploitation tradeoff when we parallelize the most time-consuming expansion and simulation steps. Experiments on a proprietary benchmark and the Atari Game benchmark demonstrate the linear speedup and the superior performance of WU-UCT compared to existing techniques.
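The selection rule described in the abstract can be sketched as follows: ongoing (unobserved) simulations inflate the visit counts used by the UCT exploration bonus, so concurrent workers spread out over the tree. This is only a minimal illustration based on the abstract's description; variable names and the exploration constant are not taken from the authors' code.

```python
import math

def wu_uct_score(q, n_completed, n_ongoing,
                 parent_completed, parent_ongoing, c=1.414):
    """UCT score with unobserved-sample adjustment (illustrative sketch).

    q: mean of *completed* simulation returns for this child.
    n_*: completed and on-going visit counts for the child / its parent.
    On-going simulations are counted in the exploration term but do not
    touch the value estimate, since their returns are not yet observed.
    """
    n_total = n_completed + n_ongoing
    parent_total = parent_completed + parent_ongoing
    if n_total == 0:
        return float("inf")  # unvisited children are tried first
    return q + c * math.sqrt(2.0 * math.log(parent_total) / n_total)
```

With this scoring, a child that currently has several unfinished simulation queries receives a smaller exploration bonus than an otherwise identical idle child, which is the mechanism the abstract attributes to WU-UCT.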
[ "parallel Monte Carlo Tree Search (MCTS)", "Upper Confidence bound for Trees (UCT)", "Reinforcement Learning (RL)" ]
Accept (Talk)
https://openreview.net/pdf?id=BJlQtJSKDB
https://openreview.net/forum?id=BJlQtJSKDB
ICLR.cc/2020/Conference
2020
{ "note_id": [ "e8xCALjLhP", "r1l3oj5nsB", "BJlI8uR5jS", "rklIb1bFjS", "S1xeFWdSsH", "HJe0YmUrsr", "H1x_y7UHir", "B1lF5ZUHoS", "SJxPJ-IHjH", "BJgtnAHHiS", "SJlF9m02qr", "r1gt3TnWqB", "H1eaaz-3dr" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798733675, 1573854116119, 1573738574499, 1573617406282, 1573384568383, 1573376902464, 1573376736122, 1573376400632, 1573376223005, 1573375664789, 1572819856669, 1572093360941, 1570669252555 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1835/Authors" ], [ "ICLR.cc/2020/Conference/Paper1835/AnonReviewer4" ], [ "ICLR.cc/2020/Conference/Paper1835/Authors" ], [ "ICLR.cc/2020/Conference/Paper1835/AnonReviewer4" ], [ "ICLR.cc/2020/Conference/Paper1835/Authors" ], [ "ICLR.cc/2020/Conference/Paper1835/Authors" ], [ "ICLR.cc/2020/Conference/Paper1835/Authors" ], [ "ICLR.cc/2020/Conference/Paper1835/Authors" ], [ "ICLR.cc/2020/Conference/Paper1835/Authors" ], [ "ICLR.cc/2020/Conference/Paper1835/AnonReviewer4" ], [ "ICLR.cc/2020/Conference/Paper1835/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper1835/AnonReviewer2" ] ], "structured_content_str": [ "{\"decision\": \"Accept (Talk)\", \"comment\": \"The paper investigates parallelizing MCTS.\\nThe authors propose a simple method based on only updating the exploration bonus \\nin (P)-UCT by taking into account the number of currently ongoing / unfinished \\nsimulations. \\nThe approach is extensively tested on a variety of environments, notably \\nincluding ATARI games. \\n \\nThis is a good paper. \\nThe approach is simple, well motivated and effective. \\nThe experimental results are convincing and the authors made a great effort to \\nfurther improve the paper during the rebuttal period. 
\\nI recommend an oral presentation of this work, as MCTS has become a \\ncore method in RL and planning, and therefore I expect a lot of interest in the \\ncommunity for this work.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Additional comments\", \"comment\": \"Besides the above response, we would like to add some further comments regarding the contribution of our work.\\n\\nFirst, as we pointed out earlier, the simplicity of our algorithm on its implementation side but not on its key idea. Compared to previous works, we address the key challenge of parallelizing MCTS in a more principled manner. Specifically, P-UCT (renamed as WU-UCT) is designed based on the insight that on-going simulations (unobserved samples) will eventually return the results, so their number should be tracked and used to adaptively adjust the UCT selection process. To corroborate that our principled solution performs better than heuristic approaches (e.g., a simple combination of visit count with virtual loss as in [1]), we perform additional experiments to compare with this new baseline (as suggested by Reviewer #4), where the results are copied in the following table. It shows that our principled solution is consistently better than the new baseline, and it does not require task-dependent hyper-parameter tuning.\\n\\nTable. Comparison between WU-UCT and TreeP with both virtual loss (r_vl) and virtual pseudo-count (n_vl). Three sets of hyper-parameters are used in TreeP, and each experiment was repeated two times due to the limited time. (In our final version, we will report the results with 10 runs.) The results of Centipede, Robotank, and NameThisGame are still running and will be reported as soon as they are completed. (These three games are generally more expensive because their game steps are about 10 times of other games.)\\n======================================================================\\nEnv. 
+ WU-UCT TreeP TreeP TreeP\\n + (r_vl = n_vl = 1) (r_vl = n_vl = 2) (r_vl = n_vl = 3)\\n======================================================================\\nAlien + 6536\\u00b11093 4850\\u00b1357 4935\\u00b160 5000\\u00b10\\nBoxing + 100\\u00b10 99\\u00b11 99\\u00b10 99\\u00b11\\nBreakout + 413\\u00b114 379\\u00b143 265\\u00b150 463\\u00b160\\nFreeway + 32\\u00b10 32\\u00b10 32\\u00b10 32\\u00b10\\nGravitar + 5060\\u00b1568 3500\\u00b1707 4105\\u00b1463 4950\\u00b1141\\nMsPacman + 19804\\u00b12232 13160\\u00b1462 12991\\u00b1851 8640\\u00b1438\\nRoadRunner + 46720\\u00b11359 29800\\u00b1282 28550\\u00b1459 29400\\u00b1494\\nQbert + 17953\\u00b1225 17055\\u00b1353 13425\\u00b1194 9075\\u00b153\\nSpaceInvaders + 3000\\u00b1813 2305\\u00b1176 3210\\u00b1127 3020\\u00b142\\nTennis + 4\\u00b12 1\\u00b10 1\\u00b10 0\\u00b10\\nTimePilot + 48390\\u00b16721 52500\\u00b1707 49800\\u00b1212 32400\\u00b11697\\nZaxxon + 39085\\u00b16838 24300\\u00b12828 24600\\u00b1424 37550\\u00b11096\\n======================================================================\\n\\nSecond, our work has a high practical value. Despite its outstanding performance, Monte Carlo Tree Search is time-consuming, which brings an urgent need for an effective parallelization algorithm. This work bridges this gap, broadening the application of MCTS, which is confirmed by Section 5.1, where P-UCT (renamed as WU-UCT) is applied successfully in a real-world production system, where ~16 times speedup is achieved with negligible performance loss.\\n\\nThird, we have refined the paper structure and included more experiments to make our arguments stronger, and the results further justify the superiority of P-UCT over comparison approaches. 
The changes during the rebuttal period are summarized below.\\n\\n(i) Based on your suggestion, we have reduced the number of pages to 8 by moving part of the experiment results that are less relevant to our main point (e.g., the \\u201cuser pass-rate prediction system\\u201d) to the supplementary material. We also changed the layout of some figures (e.g., Figure 2 has been redrawn to be more compact as well as clear) to improve the paper\\u2019s readability.\\n\\n(ii) We have added comprehensive comparisons with more baseline models in the Atari experiments. Comparisons with the sequential UCT indicates that P-UCT achieves the minimal 16% performance degradation among all parallel algorithms (TreeP: 26%, LeafP: 36%, RootP: 32%) while having ~16 times speedup.\\n\\n(iii) We have adjusted the t-test results with the Bonferroni method (using the p-value threshold 0.05/45 = 0.0011), which is a much stronger requirement. Under this stricter condition, P-UCT is still significantly better than comparison approaches.\"}", "{\"title\": \"Response to authors\", \"comment\": \"Thank you for including these additional results! I am very glad to see that WU-UCT still outperforms the TreeP baseline even when it adjusts the visit counts. I think this comparison makes the results even more compelling. I am also glad to see the results roughly hold for the adjusted p-values (I agree Bonferroni is a bit conservative, but I feel it is better to err on the side of conservatism for these types of comparisons, just to be sure).\\n\\nI think this paper is a valuable contribution to be presented at ICLR, and I think that all the changes and additional experiments have changed this from a good paper to a great paper. 
I will thus be increasing my score.\"}", "{\"title\": \"Response to reviewer #4\", \"comment\": \"Thank you for your helpful comments.\\n\\nFirst, based on your suggestion, we add an additional set of experiments to compare WU-UCT with the new baseline of \\u201cTreeP + pre-adjusted visit count + virtual loss\\u201d [1] with different hyper-parameters. The experiment results are given in the following table, from which we can see that on 9 out of 12 tasks, WU-UCT outperforms this new baseline (with its best hyper-parameters). Furthermore, we also observe that TreeP does not have an optimal set of hyper-parameters that performs uniformly well on all tasks. In other words, for the \\u201cTreeP + pre-adjusted visit count + virtual loss\\u201d, the hyper-parameters need to be tuned separately on each individual task. On the other hand, WU-UCT performs consistently well across different tasks. These additional experimental results along with the discussions are also included in Appendix E of the revised paper.\\n\\nTable. Comparison between WU-UCT and TreeP with both virtual loss (r_vl) and virtual pseudo-count (n_vl). Three sets of hyper-parameters are used in TreeP, and each experiment was repeated two times due to the limited time. (In our final version, we will report the results with 10 runs.) The results of Centipede, Robotank, and NameThisGame are still running and will be reported as soon as they are completed. (These three games are generally more expensive because their game steps are about 10 times of other games.)\\n======================================================================\\nEnv. 
+ WU-UCT TreeP TreeP TreeP\\n + (r_vl = n_vl = 1) (r_vl = n_vl = 2) (r_vl = n_vl = 3)\\n======================================================================\\nAlien + 6536\\u00b11093 4850\\u00b1357 4935\\u00b160 5000\\u00b10\\nBoxing + 100\\u00b10 99\\u00b11 99\\u00b10 99\\u00b11\\nBreakout + 413\\u00b114 379\\u00b143 265\\u00b150 463\\u00b160\\nFreeway + 32\\u00b10 32\\u00b10 32\\u00b10 32\\u00b10\\nGravitar + 5060\\u00b1568 3500\\u00b1707 4105\\u00b1463 4950\\u00b1141\\nMsPacman + 19804\\u00b12232 13160\\u00b1462 12991\\u00b1851 8640\\u00b1438\\nRoadRunner + 46720\\u00b11359 29800\\u00b1282 28550\\u00b1459 29400\\u00b1494\\nQbert + 17953\\u00b1225 17055\\u00b1353 13425\\u00b1194 9075\\u00b153\\nSpaceInvaders + 3000\\u00b1813 2305\\u00b1176 3210\\u00b1127 3020\\u00b142\\nTennis + 4\\u00b12 1\\u00b10 1\\u00b10 0\\u00b10\\nTimePilot + 48390\\u00b16721 52500\\u00b1707 49800\\u00b1212 32400\\u00b11697\\nZaxxon + 39085\\u00b16838 24300\\u00b12828 24600\\u00b1424 37550\\u00b11096\\n======================================================================\\n\\nWe agree with the reviewer that our proposed WU-UCT is a more principled parallel UCT algorithm when compared with the above TreeP variant (count + virtual loss). Conceptually, WU-UCT is designed based on the fact that on-going simulations (unobserved samples) will eventually return the results, so their number should be tracked and used to adaptively adjust the UCT selection process. On the other hand, TreeP uses an artificially designed virtual loss r_vl and a hand-crafted count correction n_vl to discourage other threads from simultaneously exploring the same node. Therefore, WU-UCT achieves a better exploration-exploitation tradeoff in parallelization, which leads to better performance as confirmed by the above experimental results.\\n\\nSecond, following your suggestion, we have adjusted the t-test results by using the p-value threshold 0.05/45 = 0.0011 and have updated the results in the revised paper. 
The new results indicate that WU-UCT performs significantly better than TreeP, LeafP, and RootP in 4, 5, and 6 games, respectively. However, note that most experiments are only repeated 3 times at this moment, making it extremely hard to reject the null hypothesis under the threshold 0.0011. We are running additional experiments to repeat all experiments in Table 2 ten times (suggested by Reviewer #2), and will update the t-test results again after that. Finally, we would like to point out that the Bonferroni adjustment method is very conservative as it performs a family-wise hypothesis test, whose p-value is corrected according to the probability of rejecting at least one hypothesis. Nevertheless, even under this much stronger requirement, WU-UCT is still significantly better.\\n\\n[1] Silver, D., Huang, A., Maddison, C. J., Guez, A., Sifre, L., Van Den Driessche, G., ... & Dieleman, S. (2016). Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587), 484.\"}
At the end of the simulation, the rollout statistics are updated in a backward pass through each step t \\u2264 L, replacing the virtual losses by the outcome, N_r(s_t , a_t) \\u2190 N_r(s_t , a_t) \\u2212 n_vl +1; W_r(s_t , a_t) \\u2190 W_r(s_t , a_t) + n_vl + z_t.\\\" (\\\"Backup\\\" section in the appendix).\\n\\nIn Table 5 they say n_vl = 3, so in other words, before performing each simulation they increase the visit counts by 3 and subtract 3 from the number of games that were won. After the simulation is finished, they undo the changes to the visit counts and the value estimate and update them with the actual simulation results. The approach of WU-UCT is still unique, however: I think it is essentially equivalent to updating the visit counts and including a virtual loss which is equal to the current mean estimate (i.e. it is an adaptive virtual loss rather than a fixed virtual loss). But I still think it would be good to clarify this difference, and to compare to this method of implementing the virtual loss.\\n\\n(v) Adjustment for multiple comparisons should be performed with all statistical tests, even pairwise tests, in order to control for the family-wise Type I error rate. Specifically, because you are comparing p-values at the threshold of 0.05, if you perform 100 comparisons, then by chance ~5 of those will comparisons will lead you to reject the null hypothesis (i.e. have p values less than 0.05). To handle this issue, it is best practice to adjust the threshold at which you reject the null hypothesis based on the number of comparisons you are performing. A standard way of doing this is the Bonferroni method (https://en.wikipedia.org/wiki/Bonferroni_correction), in which you would divide your target threshold by the number of comparisons you are performing. 
Based on my understanding of your tests, you compare WU-UCT to the three other methods on 15 Atari games, so you should set your p-value threshold to 0.05/45 = 0.0011.\"}", "{\"title\": \"Response to reviewer #4 (part 2 of 2)\", \"comment\": \"Third, thank you for the many comments that improve our paper. We have addressed them carefully one by one, as detailed in the following.\\n\\n(i) Based on your suggestion, we changed our algorithm name from P-UCT to WU-UCT, in order to avoid potential confusion with the existing PUCT algorithm.\\n\\n(ii) All experiments (both in Section 5.1 and 5.2) were run for a fixed number of simulations. Specifically, for the \\u201cJoy City\\u201d game experiments, a total of 500 simulations were performed, and for the Atari experiments, 128 simulations were performed. We have clarified this in the revised manuscript (in both Sections 5.1 and 5.2).\\n\\n(iii) Based on your suggestion, we have changed the architecture name to \\u201cmaster-worker\\u201d in the revised paper.\\n\\n(iv) In Figure 7 (c-d) (which is Figure 4(c-d) in the revised version), game steps refer to the number of steps taken to pass the level and has been clarified in Section 5.1. We have added explanations of the term in the main text. We used game steps instead of pass-rate as the performance indicator because it is a more fine-grained performance metric than the pass-rate. Pass-rate can only indicate whether the agent uses less than a predefined number of steps. For example, if a level is given 20 steps and one agent used on average 10 steps and the other used 15 steps (assume all with low variance), then it will be hard to judge the performance difference between the two agents by using pass-rate alone. In contrast, examining the average game step provides a clear view that the first agent is better.\\n\\n(v) All p-values for t-tests are based on the pairwise comparison (i.e., WU-UCT vs. LeafP, WU-UCT vs. TreeP, and WU-UCT vs. RootP). 
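The reviewer's adjustment can be checked with a small computation (a sketch; the 45 comes from 3 pairwise baseline comparisons across 15 Atari games, as stated above):

```python
def bonferroni_threshold(alpha: float, n_comparisons: int) -> float:
    """Per-test significance threshold under the Bonferroni correction:
    reject a single null hypothesis only if p < alpha / n_comparisons,
    which controls the family-wise Type I error rate at alpha."""
    return alpha / n_comparisons

# 3 baselines (TreeP, LeafP, RootP) x 15 Atari games = 45 comparisons
threshold = bonferroni_threshold(0.05, 45)
print(f"{threshold:.4f}")  # 0.0011
```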
Therefore, we did not include multiple comparisons, and the p-values are not adjusted for multiple comparisons. We used the term \\u201cpaired t-test\\u201d in Section 5.2 to clarify this.\\n\\n(vi) We have changed the 3D bar charts into heatmaps, and it looks nicer. Thank you!\\n\\n(vii) Based on your suggestion, we have formally defined \\u201cuser pass-rate\\u201d in the revised paper.\\n\\n(viii) Based on your feedback, we have modified the first paragraph of the introduction to make it more accessible for readers less familiar with MCTS.\\n\\n(ix) We have changed the notation of \\u201cnode\\u201d from n to s, which improves the paper\\u2019s clarity.\", \"response_to_the_additional_comments\": \"(i) Thank you for the suggestion. Initially, we wanted to use the user-pass-rate prediction system as an important motivating application for our WU-UCT algorithm. But we totally agree that the most important experiments in Section 5.1 are the speedup and performance tests across 1, 2, 4, 8, and 16 expansion and simulation workers. Therefore, following your advice, we moved the details about the user-pass rate prediction system and the corresponding performance to the supplementary material. \\n\\n(ii) Thanks again for sharing the interesting work of [3], and we have added discussions on the paper in our revised manuscript. Specifically, it shows how we can capture human behavior and preference using tree search algorithms. By using a board game as a testbed, it captures human preference using a learnable heuristic function and then performs MCTS using the policy specified by the heuristic function. Interestingly, they showed that the MCTS policy well-mimics the human player\\u2019s policy and made an important attempt to bridge the gap between human decision-making and computer game playing. \\n\\n(iii) We have corrected the citations of the AlphaGo Zero paper.\\n\\n[1] Silver, D., Huang, A., Maddison, C. J., Guez, A., Sifre, L., Van Den Driessche, G., ... 
& Dieleman, S. (2016). Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587), 484.\\n[2] Silver, D., Schrittwieser, J., Simonyan, K., Antonoglou, I., Huang, A., Guez, A., ... & Chen, Y. (2017). Mastering the game of go without human knowledge. Nature, 550(7676), 354.\\n\\n[3] van Opheusden, B., Bnaya, Z., Galbiati, G., & Ma, W. J. (2016, June). Do people think like computers?. In International conference on computers and games (pp. 212-224). Springer, Cham.\"}", "{\"title\": \"Response to reviewer #4 (part 1 of 2)\", \"comment\": \"Thanks a lot for your many pieces of constructive feedback, which greatly improve our paper.\\n\\nFirst, we would like to clarify the difference between our proposed method P-UCT (renamed as WU-UCT based on your suggestion) and Tree Parallelization (TreeP). First of all, all MCTS methods, regardless of being parallel or sequential, would update both the visit counts and the values of the traversed nodes. The key difference is whether they are updated before or after the simulation step is completed. In sequential MCTS, both visit counts and the values are updated AFTER the simulation step is done. In our WU-UCT, the visit counts are updated BEFORE the simulation step completes. To the best of our knowledge, NONE of the existing TreeP algorithms (or any existing parallel MCTS algorithm) updates the visit counts BEFORE the simulation step finishes. TreeP only updates the values ahead of time using virtual loss. This is also the case for the work [1] and [2]. (Of course, after the simulation step completes, the visit counts in TreeP would be updated in the backpropagation step, just as in the sequential MCTS.) For this reason, we do not compare to the variant where both the visit counts and the values are updated ahead of time, since no such variant of TreeP methods exists. As shown in Sections 4-5, our approach (updating counts ahead of time) is better than TreeP (updating values ahead of time by virtual loss). 
Nevertheless, updating both the values (by virtual loss) and the visit counts BEFORE the simulation step finishes is an interesting case that has not yet been explored. We would like to consider it as a future work. Also, to clarify the algorithm details, we have added the pseudo-codes of our baselines TreeP, LeafP, and RootP in Algorithms 4-6 in Appendix B with detailed descriptions.\\n\\nSecond, we provide additional experiment results for the sequential UCT. The performance of the sequential UCT in the \\u201cjoy city\\u201d game has already been reported in Figure 7, which corresponds to the 1 expansion worker and 1 simulation worker case. For the Atari games, the results of sequential UCT are added as a new column in Table 2. Also, we have added statements in Section 5.2 of the revised paper to show the intention of including the results of the sequential UCT: the performance of sequential UCT is the best we can expect from any parallel UCT algorithm, so we regard it as an upper bound performance of the parallelized algorithms (WU-UCT, TreeP, LeafP, and RootP). For your convenience, we also copy the results of sequential UCT and WU-UCT below. We have completed 12 out of 15 Atari games; the other 3 on-going experiments are more time-consuming (significantly slower than WU-UCT) and we will report them once they are done. 
On average, WU-UCT has only 16% relative performance loss, which is much smaller than other baselines (TreeP: 26%, LeafP: 36%, RootP: 32%), which supports our analysis in Section 4 that WU-UCT has the closest performance to the sequential UCT.\\n\\n+=============+==========+===========+===========+=========+\\n+ Environment + Alien + Boxing + Breakout + Freeway +\\n+=============+==========+===========+===========+=========+\\n+ UCT + 6820 + 100 + 462 + 32 +\\n+=============+==========+===========+===========+=========+\\n+ WU-UCT + 6538 + 100 + 413 + 32 +\\n+=============+==========+===========+===========+=========+\\n+=============+==========+===========+===========+=========+\\n+ Environment + Gravitar + MsPacman + RoadRunner + Qbert +\\n+=============+==========+===========+===========+=========+\\n+ UCT + 4900 + 23021 + 52300 + 17250 +\\n+=============+==========+===========+===========+=========+\\n+ WU-UCT + 5060 + 19804 + 46720 + 17953 +\\n+=============+==========+===========+===========+=========+\\n+=============+=============+=========+==========+=========+\\n+ Environment + SpaceInvaders + Tennis + TimePilot + Zaxxon +\\n+=============+=============+=========+==========+=========+\\n+ UCT + 3535 + 5 + 52600 + 46800 +\\n+=============+=============+=========+==========+=========+\\n+ WU-UCT + 3000 + 4 + 48390 + 39085 +\\n+=============+=============+=========+==========+=========+\"}", "{\"title\": \"Response to reviewer #3\", \"comment\": \"Thank you for your valuable comments.\\n\\nFirst, we followed your advice to reduce the paper length to 8 pages in the revised version after the following adjustments. (i) We move the experimental results of the \\u201cJoy City\\u201d that are less relevant to our main point (demonstrating the effectiveness and efficiency of P-UCT) to the supplementary material. This includes descriptions of the \\u201cuser pass rate prediction system\\u201d and relative figures and tables. 
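The counts-before vs values-before distinction discussed in this response can be made concrete with a minimal bookkeeping sketch. The TreeP class below follows the value-only virtual-loss scheme the authors describe for their baseline, and the WU-UCT class tracks an incomplete-visit count instead; class and attribute names are illustrative, not from the paper's code.

```python
class TreePEdge:
    """TreeP-style bookkeeping (illustrative): a fixed virtual loss r_vl
    is subtracted from the value sum before a simulation finishes, then
    reverted at backpropagation; the visit count is only updated after."""
    def __init__(self, r_vl=1.0):
        self.n = 0        # visit count, updated only at backpropagation
        self.w = 0.0      # sum of returns
        self.r_vl = r_vl

    def start_simulation(self):
        self.w -= self.r_vl          # discourage concurrent selection

    def finish_simulation(self, ret):
        self.w += self.r_vl + ret    # revert virtual loss, add real return
        self.n += 1


class WUEdge:
    """WU-UCT bookkeeping (illustrative): an incomplete-visit count O is
    incremented before the simulation finishes; the value sum stays
    untouched until the true return is observed."""
    def __init__(self):
        self.n = 0        # completed visits
        self.o = 0        # on-going (unobserved) visits
        self.w = 0.0      # sum of completed returns

    def start_simulation(self):
        self.o += 1       # watch the unobserved sample

    def finish_simulation(self, ret):
        self.o -= 1
        self.n += 1
        self.w += ret
```

In WU-UCT the selection step would then use `n + o` in the exploration term while computing the mean value from `w / n`, so pending simulations reduce exploration pressure without biasing the value estimate.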
(ii) We did some minor adjustments such as changing the layout of certain figures to save more space. Together, they make a more compact structure for our 8-page paper.\\n\\n(note: according to the suggestion by Reviewer #4, we have changed the algorithm name to WU-UCT to avoid confusion with an existing name PUCT, though in the response we still use P-UCT for your convenience.)\\n\\nSecond, we would like to emphasize the main contribution of this paper: it proposes a simple but effective method for parallelizing Monte Carlo Tree Search. As you pointed out, simplicity is not a disadvantage. Although the proposed approach is simple, the idea behind it is non-trivial. As analyzed in Section 4, by keeping track of the unobserved samples, P-UCT manages to avoid common failure modes (e.g. collapse of exploration and exploitation failure, as detailed in Section 4) of other parallel MCTS algorithms. Moreover, with an in-depth empirical analysis with the (unrealistic) ideal parallel algorithm (Figure 1(b)), we show that P-UCT best mimics the sequential algorithm\\u2019s behavior compared to other parallelization approaches.\\n\\nFinally, we think our paper is a good fit for ICLR for the following reasons. First, MCTS is an important component of model-based reinforcement learning and is often combined with learning approaches to achieve better performance (e.g. [1]). Moreover, MCTS has been combined with reinforcement learning methods to learn better policies (e.g. [2]), which indicates that MCTS has been used as a crucial component in learning algorithms. Therefore, though we only evaluated P-UCT under the planning setting, it can be used as part of a learning algorithm. Additionally, as stated by Reviewer #4, \\u201cwhile significant effort has been made by the RL community to scale up distributed model-free algorithms, less effort has been made for model-based algorithms\\u201d. 
P-UCT provides another attempt at scaling up MCTS, an important component of model-based reinforcement learning algorithms, so we think it should be a good fit for ICLR audiences.\\n\\n[1] Silver, D., Huang, A., Maddison, C. J., Guez, A., Sifre, L., Van Den Driessche, G., ... & Dieleman, S. (2016). Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587), 484.\\n[2] Silver, D., Schrittwieser, J., Simonyan, K., Antonoglou, I., Huang, A., Guez, A., ... & Chen, Y. (2017). Mastering the game of go without human knowledge. Nature, 550(7676), 354.\"}", "{\"title\": \"Response to reviewer #2\", \"comment\": \"Thank you for the valuable suggestions.\\n\\nFirst, to improve the statistical significance tests, we launched new experiments to perform 10 runs for each environment and each model in the Atari game task. So far, we have completed 5 Atari games (out of 15 games) and report the results below (and in the revised paper). We will post the results for all other on-going experiments once they are completed. We will also update Figures 5 and 10 (in the revised manuscript) after we finish the experiments.\", \"table\": \"Additional experimental results on 5 Atari games with 10 independent runs. (note: based on the suggestion by Reviewer #4, we have changed our algorithm name from P-UCT to WU-UCT to avoid potential confusion with an existing algorithm named PUCT. 
However, in the response below, we will still use P-UCT for your convenience.)\\n+=============+===========+===========+===========+==========+\\n+ Environments + P-UCT + TreeP + LeafP + RootP +\\n+=============+===========+===========+===========+==========+\\n+ Freeway + 32\\u00b10 + 32\\u00b10 + 31\\u00b11 + 32\\u00b10 +\\n+=============+===========+===========+===========+==========+\\n+ Gravitar + 5060\\u00b1568 + 4880\\u00b11162 + 3385\\u00b1155 + 4160\\u00b11811 +\\n+=============+===========+===========+===========+==========+\\n+ MsPacman + 19804\\u00b12232 + 14000\\u00b12807 + 5378\\u00b1685 + 7156\\u00b1583 +\\n+=============+===========+===========+===========+==========+\\n+ RoadRunner + 46720\\u00b11359 + 24680\\u00b13316 + 25452\\u00b12977 +38300\\u00b11191+\\n+=============+===========+===========+===========+==========+\\n+ Zaxxon + 39579\\u00b13942 + 38839\\u00b14128 + 12300\\u00b1821 + 13380\\u00b1769 +\\n+=============+===========+===========+===========+==========+\\n\\nSecond, we would like to clarify that the purpose of including PPO in Table 2 for the Atari experiments (Section 5.2) is to use it as the performance lower bound for all MCTS algorithms. Recall that we used distilled PPO policies (with network distillation) as the roll-out policy for all MCTS algorithms (i.e., P-UCT, TreeP, LeafP, and RootP), which is briefly described in Section 5.2 and detailed in Appendix D. Therefore, the performance of PPO is added here as a reference, which serves as a lower expected bound of UCT algorithms (both sequential and parallelized) since we expect them to perform significantly better than their roll-out policy. Our main focus for Table 2 is the relative performance between different parallel MCTS algorithms, including P-UCT. To avoid confusion, we revised the first paragraph as well as Table 2 to clarify the intention of including PPO. 
\\n\\nThird, in addition to the \\u201cperformance lower bound\\u201d given by PPO, we also included the results of the sequential UCT in the revised manuscript (suggested by Reviewer #4), and use it as the performance upper bound for all parallel UCT algorithms. This is because, in general, we do not expect any parallel algorithm to outperform its sequential counterpart. These results empirically demonstrate the performance degradation caused by parallelizing UCT. They show that our P-UCT has much smaller performance degradation compared to other methods.\\n\\nFinally, based on your feedback, we have revised the corresponding statement describing PPO as \\u201ca state-of-the-art baseline\\u201d. We use PPO as our roll-out policy because it is actually a competitive model-free RL algorithm, which achieves reasonably good performance on the Atari benchmark.\"}", "{\"title\": \"General response to all reviewers\", \"comment\": \"We thank all the reviewers for their useful feedback. As suggested by Reviewers #3 and #4, we moved the paragraphs that are related to the \\u201cuser pass-rate prediction system\\u201d (in Section 5.1) to the supplementary material. In addition, we also made minor adjustments to the layout of some figures. Together, these changes reduce the paper to 8 pages, as suggested by Reviewer #3, which provides a more compact structure for the paper. For the detailed responses to your comments, please refer to our reply posted under each review comment.\\n\\nFurthermore, as suggested by Reviewer #4, we also changed our algorithm name from P-UCT to WU-UCT, to differentiate it from the existing PUCT algorithm in [1] and [2].\\n\\n[1] Silver, D., Huang, A., Maddison, C. J., Guez, A., Sifre, L., Van Den Driessche, G., ... & Dieleman, S. (2016). 
Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587), 484.\\n[2] Silver, D., Schrittwieser, J., Simonyan, K., Antonoglou, I., Huang, A., Guez, A., ... & Chen, Y. (2017). Mastering the game of Go without human knowledge. Nature, 550(7676), 354.\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #4\", \"review\": [\"This paper introduces a new algorithm for parallelizing Monte-Carlo Tree Search (MCTS). Specifically, when expanding a new node in the search tree, the algorithm updates the parent nodes\\u2019 visit-count statistics but not their values; it is only when the expansion and simulation steps are complete that the values are updated as well. This has the effect of shrinking the UCT exploration term, and making other workers less likely to explore that part of the tree even before the simulation is complete. This algorithm is evaluated in two domains, a mobile game called \\u201cJoy City\\u201d as well as on Atari. The proposed algorithm results in large speedups compared to serial MCTS with seemingly little impact on performance, and also results in higher scores on Atari than existing parallelization methods.\", \"Scaling up algorithms like MCTS is an important aspect of machine learning research. While significant effort has been made by the RL community to scale up distributed model-free algorithms, less effort has been made for model-based algorithms, so it is exciting to see that emphasis here. Overall I thought the main ideas in the paper were clear, the proposed method for how to effectively parallelize MCTS was compelling, and the experimental results were impressive. Thus, I tend to lean towards accept. However, there were three aspects of the paper that I thought could be improved. (1) It was unclear to me how much the parallelization method differs from previous approaches (called \\u201cTreeP\\u201d in the paper) which adjust both the visit counts and the value estimate. 
(2) The paper is missing experiments showing the decrease in performance compared to a serial version of the algorithm. (3) The paper did not always provide enough detail and in some cases used confusing terminology. If these three things can be addressed then I would be willing to increase my score.\", \"Note that while I am quite familiar with MCTS, I am less familiar with methods for parallelizing it, though based on a cursory Google Scholar search it seems that the paper is thorough in discussing related approaches.\", \"1. When performing TreeP, does the traversed node also get an increased visit count (in addition to the loss which is added to the value estimate)? In particular, [1] and [2] adjust both the visit counts and the values, which makes them quite similar to the present method (which just adjusts visit counts). It\\u2019s not clear from the appendix whether TreeP means that just the values are adjusted, or both the values and the visit counts. If it is the former, then I would like to see experiments done where TreeP adjusts the visit counts as well, to be more consistent with prior work. (Relatedly, I thought the baselines could be described in significantly more detail than they currently are \\u2014 pseudocode in the appendix would be great!)\", \"2. I appreciate the discussion in Section 4 of how much one would expect the proposed parallelization method to suffer compared to perfect parallelization. However, this argument would be much more convincing if there were experiments to back it up: I want to know empirically how much worse the parallel version of MCTS does in comparison to the serial version of MCTS, controlling for the same number of simulations.\", \"3. While the main ideas in the paper were clear, I thought certain descriptions/terminology were confusing and that some details were missing. 
Here are some specifics that I would like to see addressed, roughly in order of importance:\", \"I strongly recommend that the authors choose a different name for their algorithm than P-UCT, which is almost identical (and pronounced the same) as PUCT, which is a frequently used MCTS exploration strategy that incorporates prior knowledge (see e.g. [1] and [2]). P-UCT is also not that descriptive, given that there are other existing algorithms for parallelizing MCTS.\", \"Generally speaking, it was not clear to me for all the experiments whether they were run for a fixed amount of wallclock time or a fixed number of simulations, and what the fixed values were in either of those cases. The fact that these details were missing made it somewhat more difficult for me to evaluate the experiments. I would appreciate if this could be clarified in the main text for all the experiments.\", \"The \\u201cmaster-slave\\u201d phrasing is a bit jarring due to the association with slavery. I\\u2019d recommend using a more inclusive set of terms like \\u201cmaster-worker\\u201d or \\u201cmanager-worker\\u201d instead (this shouldn\\u2019t be too much to change, since \\u201cworker\\u201d is actually used in several places throughout the paper already).\", \"Figure 7c-d: What are game steps? Is this the number of steps taken to pass the level? Why not indicate pass rate instead, which seems to be the main quantity of interest?\", \"Page 9: are these p-values adjusted for multiple comparisons? If not, please perform this adjustment and update the results in the text. Either way, please also report in the text what adjustment method is used.\", \"Figure 7: 3D bar charts tend to be hard to interpret (and in some cases can be visually misleading). 
I\\u2019d recommend turning these into heatmaps with a visually uniform colormap instead.\", \"Page 1, bottom: the first time I read through the paper I did not know what a \\u201cuser pass-rate\\u201d was (until I got to the experiments part of the paper which actually explained this term). I would recommend phrasing this in a clearer way, such as \\u201cestimating the rate at which users pass levels of the mobile game\\u2026\\u201d\", \"One suggestion just to improve the readability of the paper for readers who are not as familiar with MCTS is to reduce the number of technical terms in the first paragraph of the introduction. Readers unfamiliar with MCTS may not know what the expansion/simulation/rollout steps are, or why it\\u2019s necessary to keep the correct statistics of the search tree. I would recommend explaining the problem with parallelizing MCTS without using these specific terms, until they are later introduced when MCTS is explained.\", \"Page 2: states correspond to nodes, so why introduce additional notation (n) to refer to nodes? It would be easier to follow if the same variable (s) was used for both.\"], \"some_additional_comments\": \"- Section 5: I\\u2019m not sure it\\u2019s necessary to explain so much of the detail of the user-pass rate prediction system in the main text. It\\u2019s neat that comparing the results of different search budgets of MCTS allows predicting user behavior, but this seems to be a secondary point besides the main point of the paper (which is demonstrating that the proposed parallelization method is effective). I think the right part of Figure 5, as well as Table 1 and Figure 6, could probably go in the supplemental material. As someone with a background in cognitive modeling, I think these results are interesting, but that they are not the main focus of the paper. 
I was actually confused during my first read through as it was unclear to me initially why the focus had shifted from demonstrating that parallel MCTS works to modeling user behavior.\\n\\n- The authors may be interested in [3], which also uses a form of tree search to model human decisions in a game.\\n\\n- Page 9: the citation to [2] does not seem appropriate here since AlphaGo Zero did not use a pretrained search policy; I think [1] would be correct instead.\\n\\n[1] Silver, D., Huang, A., Maddison, C. J., Guez, A., Sifre, L., Van Den Driessche, G., ... & Dieleman, S. (2016). Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587), 484.\\n[2] Silver, D., Schrittwieser, J., Simonyan, K., Antonoglou, I., Huang, A., Guez, A., ... & Chen, Y. (2017). Mastering the game of Go without human knowledge. Nature, 550(7676), 354.\\n[3] van Opheusden, B., Bnaya, Z., Galbiati, G., & Ma, W. J. (2016, June). Do people think like computers? In International conference on computers and games (pp. 212-224). Springer, Cham.\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper introduces a new algorithm for parallelizing Monte Carlo Tree Search (MCTS). MCTS is hard to parallelize as we have to keep track of the statistics of the nodes of the tree, which are typically not up-to-date in a parallel execution. The paper introduces a new algorithm that updates the visitation counts before evaluating the rollout (which takes a long time), and therefore allows other workers to explore different parts of the tree as the exploration bonus is decreased for this node. 
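The count-before-value update summarized above (increment visitation counts when a simulation is dispatched; fold in the return only when it completes) can be sketched in a few lines. This is a hypothetical minimal illustration, not the paper's implementation; the `Node` fields and function names are our own naming:

```python
import math
from dataclasses import dataclass, field

@dataclass
class Node:
    total_return: float = 0.0   # sum of returns from completed simulations
    completed: int = 0          # visits whose rollout has finished
    ongoing: int = 0            # dispatched simulations still running on workers
    children: list = field(default_factory=list)

def ucb(child: Node, parent_visits: int, c: float = 1.414) -> float:
    # The effective count includes on-going simulations, so the exploration
    # bonus of a busy subtree shrinks before any rollout result returns.
    n_eff = child.completed + child.ongoing
    if n_eff == 0:
        return float("inf")     # expand unvisited children first
    # The value estimate uses only completed simulations.
    value = child.total_return / max(child.completed, 1)
    return value + c * math.sqrt(math.log(parent_visits) / n_eff)

def select_child(parent: Node) -> Node:
    n_parent = max(parent.completed + parent.ongoing, 1)
    return max(parent.children, key=lambda ch: ucb(ch, n_parent))
```

With equal value estimates, a child that already has on-going simulations receives a smaller exploration bonus, so a second worker is steered toward a different child; once a rollout completes, `ongoing` would be decremented and `completed`/`total_return` updated as in serial UCT.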
The algorithm is evaluated on the Atari games as well as on a proprietary game and compared to other parallelized MCTS variants.\\n\\nThe method intuitively makes a lot of sense, albeit it is very simple and it is a surprise that this has not been tried yet. Anyhow, simplicity is not a disadvantage. The algorithm seems to be effective, the evaluations are promising, and the paper is also well written. I have only 2 main concerns with the paper:\\n\\n- The paper is very long (10 pages), and given that, we reviewers should use stricter reviewing rules. As the introduced algorithm is very simple, I do not think that 10 pages are justified. The paper should be considerably shortened (e.g. The \\\"user pass rate prediction system\\\" does not add much to the paper and could be skipped. Moreover, the exact architecture is maybe also not that important).\\n\\n- The focus of the paper is planning, not learning. Planning conferences such as ICAPS would maybe be a better fit than ICLR. \\n\\nGiven the stricter reviewing guidelines, I am leaning more towards reject as the algorithmic contribution is small and I do not think 10 pages are justified.\"}", "{\"rating\": \"8: Accept\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper introduces a novel approach to parallelizing Monte Carlo Tree Search\\nwhich achieves speedups roughly linear in the number of parallel workers while \\navoiding significant loss in performance. The key idea of the\\napproach is to keep additional statistics about the number of \\non-going simulations from each of the nodes in the tree. The approach is \\nevaluated in terms of speed and performance on the Atari benchmark and in a \\nuser pass-rate prediction task in a mobile game.\\n\\nI recommend that this paper be accepted. 
The approach is well motivated and clearly \\nexplained, and is supported by the experimental results. The experiments are reasonably thorough and \\ndemonstrate the claims made in the paper. The paper itself is very well-written, and all-around \\nfelt very polished. Overall I am enthusiastic about the paper and have only a few concerns, detailed below.\\n\\n- I suggest doing more runs of the Atari experiment. Three runs of the experiment do not \\nseem large enough to make valid claims about statistical significance. This is especially \\nconcerning because claims of statistical significance are made via t-testing, which assumes \\nthat the data is normally distributed. Three runs is simply too few to be making conclusions \\nabout statistical significance using t-testing. I think that this is a fair request to make and \\ncould reasonably be done before the camera-ready deadline, if the paper is accepted.\\n\\n- The experiments in Atari compare against a model-free Reinforcement Learning baseline, PPO. \\nWas there a set clock time that all methods had to adhere to? Or alternatively, was it verified that \\nPPO and the MCTS methods are afforded approximately equal computation time? If not, it seems \\nlike the MCTS methods could have an unfair advantage against PPO, especially if they are \\nallowed to take as long as necessary to complete their rollouts. This computational bias \\ncould potentially be remedied by allowing PPO to use sufficiently complex function \\napproximators, or by setting the number of simulations used by the MCTS methods \\nsuch that their computation time is roughly equal to that of PPO.\\n\\n- I would be careful about stating that PPO is a state-of-the-art baseline. State-of-the-art is a big claim, and I'm not quite sure that it's true for PPO. 
PPO's performance is typically only compared to other policy-based RL methods; it's hard to say that it's a state-of-the-art method when there's a lack of published work comparing it against the well-known value-based approaches, like Rainbow. I suggest softening the language there unless you're confident that PPO truly is considered a state-of-the-art baseline.\"}" ] }
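The significance testing discussed in the reviews above (means and standard deviations over independent runs, compared via t-testing) can be illustrated with Welch's two-sample t statistic, which drops the equal-variance assumption. The sketch below is a minimal illustration; the per-run scores are invented for demonstration and are not the paper's data:

```python
import math
from statistics import mean, variance

def welch_t(a, b):
    """Welch's two-sample t statistic and Welch-Satterthwaite degrees of freedom."""
    m1, m2 = mean(a), mean(b)
    v1, v2 = variance(a), variance(b)   # sample variances (n - 1 denominator)
    n1, n2 = len(a), len(b)
    se1, se2 = v1 / n1, v2 / n2
    t = (m1 - m2) / math.sqrt(se1 + se2)
    df = (se1 + se2) ** 2 / (se1 ** 2 / (n1 - 1) + se2 ** 2 / (n2 - 1))
    return t, df

# Hypothetical per-run scores for two parallel MCTS variants (10 runs each):
runs_a = [46720, 45900, 47300, 46100, 48050, 46500, 47800, 45600, 46900, 47200]
runs_b = [24680, 26100, 23900, 25300, 24100, 25800, 23500, 26400, 24900, 25200]
t, df = welch_t(runs_a, runs_b)   # a large |t| means the gap exceeds run noise
```

The statistic is then compared against the t distribution with `df` degrees of freedom (e.g. via `scipy.stats.t.sf`). As Reviewer #2 notes, with only three runs both the normality assumption and the variance estimates are fragile, which is why additional runs were requested.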
BklmtJBKDB
Conditional Flow Variational Autoencoders for Structured Sequence Prediction
[ "Apratim Bhattacharyya", "Michael Hanselmann", "Mario Fritz", "Bernt Schiele", "Christoph-Nikolas Straehle" ]
Prediction of future states of the environment and interacting agents is a key competence required for autonomous agents to operate successfully in the real world. Prior work for structured sequence prediction based on latent variable models imposes a uni-modal standard Gaussian prior on the latent variables. This induces a strong model bias which makes it challenging to fully capture the multi-modality of the distribution of the future states. In this work, we introduce Conditional Flow Variational Autoencoders (CF-VAE) using our novel conditional normalizing flow based prior to capture complex multi-modal conditional distributions for effective structured sequence prediction. Moreover, we propose two novel regularization schemes which stabilize training, deal with posterior collapse, and lead to a better match to the data distribution. Our experiments on three multi-modal structured sequence prediction datasets -- MNIST Sequences, Stanford Drone and HighD -- show that the proposed method obtains state-of-the-art results across different evaluation metrics.
[ "Variational Inference", "Normalizing Flows", "Trajectories" ]
Reject
https://openreview.net/pdf?id=BklmtJBKDB
https://openreview.net/forum?id=BklmtJBKDB
ICLR.cc/2020/Conference
2020
{ "note_id": [ "BdYG9tZBDR", "H1xASy1Ujr", "rklY0RRroB", "ryx8qR0Bir", "Bkl4EC0HiH", "BygWcVYatB", "B1gBElBTtr", "SyxVJJ0iKH" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798733643, 1573412678265, 1573412560837, 1573412494219, 1573412396354, 1571816584886, 1571799085223, 1571704539581 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1834/Authors" ], [ "ICLR.cc/2020/Conference/Paper1834/Authors" ], [ "ICLR.cc/2020/Conference/Paper1834/Authors" ], [ "ICLR.cc/2020/Conference/Paper1834/Authors" ], [ "ICLR.cc/2020/Conference/Paper1834/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper1834/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper1834/AnonReviewer3" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"The novelty of the proposed work is a very weak factor, the idea has been explored in various forms in previous work.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Response to Review #3 - 1/1\", \"comment\": [\"We thank the reviewer for the comments and address them here in detail.\", \"\\u2018 the paper is well written and clear\\u2019 - Thank you.\", \"\\u2018While none of the ideas, in isolation, are significantly new, the combination can be useful to this particular problem\\u2019 - We thank you for recognizing the significance of our approach. Please also note that we propose the first conditional normalizing flow based model for structured sequence prediction, the first conditional non-linear flows along with two new regularization schemes.\", \"\\u2018estimating variance in pR\\u2019 - As intuition would suggest, reasonably small values of C work well -- as they allow for good data reconstruction and also makes it easy for our conditional flow to fit the marginal posterior. 
As shown in Table 5 and Figure 10 of Appendix E, our model is robust across a large range of C=[0.05,0.25]. Furthermore, we observe that these values are robust across all three datasets - MNIST Sequences, Stanford Drone and HighD. Therefore, we did not face any challenges in estimating the variance of pR.\", \"\\u2018\\\"dominant mode\\\" detector in cR \\u2019 - Please note that we do not have to directly detect the dominant mode for our condition regularization scheme. The dominant mode is the mode which dominates (explains a large part of) the data log-likelihood term. E.g. in the case of HighD we observe that ~90% of the data log-likelihood is dominated by the main mode, which is the case of the vehicle moving straight on the highway without a lane change. Posterior collapse occurs in the case of conditional latent variable models because the model focuses on this dominant mode and chooses to ignore the latent variables (and thus the other minor modes) because this leads to an easier-to-encode latent distribution. Our condition regularization scheme encourages the model to focus on all modes by ensuring that the latent variables cannot be ignored.\", \"\\u2018prior distribution over the variance\\u2019 - This is an interesting idea. However, it is not straightforward to implement. This is because it would require us to enforce a prior over the Jacobian of our conditional flow prior (as the variance of the posterior is dependent on the Jacobian). It would be challenging to enforce such a prior without affecting the expressivity of our conditional flows. In contrast, our posterior regularization scheme is straightforward to implement, robust and leads to state-of-the-art results.\", \"\\u2018demonstrating the stability and robustness of the method\\u2019 - We have added additional experiments in Table 5 and Figure 10 of Appendix E to further illustrate the stability and robustness of the method. 
Please note, the results in the main paper are the mean of five independent runs with random initializations. We additionally report the standard deviation of 5 runs in Table 5, across all baselines. We observe low standard deviation across runs, demonstrating the stability of our method. Furthermore, we also observe stable performance across a large range of values of the posterior regularization hyper-parameter C=[0.05,0.25].\", \"\\u2018Data shuffles and overfitting\\u2019 - We report results on the standard MNIST Sequence and HighD test sets as in prior work. Furthermore, we report 5-fold cross validation results on Stanford Drone in Table 2 (following prior work). These results demonstrate that our method is effective across data shuffles and does not suffer from overfitting.\", \"\\u2018How to decide whether to use cR\\u2019 - We find that in practice this can be decided on the basis of the training data. E.g. the training sequences can be clustered (with k-means) to determine if there is a dominant mode. E.g. in the case of HighD, k-means clustering reveals that the dominant mode is the case where the car continues travelling straight along the highway.\", \"Finally, we thank the reviewer for voicing her/his concerns and helping us improve our work. We would be happy to answer any remaining questions.\"]}", "{\"title\": \"Response to Review #1 - 1/1\", \"comment\": [\"We thank the reviewer for the comments and address them here in detail.\", \"\\u2018The paper is clearly motivated and easy to follow.\\u2019 - Thank you.\", \"\\u2018Experiment results on MNIST, Stanford Drone and HighD datasets show that the proposed model achieves better results than previous state-of-the-art models by significant margins.\\u2019 - Thank you.\", \"\\u2018Volume-preserving flows\\u2019 - We have added results with the volume preserving NICE flows in Table 5 in Appendix E. 
We observe that even without our posterior regularization scheme (pR), the volume-preserving NICE flows (Dinh et al., 2015) perform well -- because of the constant Jacobian term. However, our conditional non-linear flows with posterior regularization still perform significantly better (78.9 vs 74.9 -CLL). This is because of the additional expressive power of our conditional non-linear flows combined with the stability offered by our posterior regularization scheme. Please note that the results with Affine flows in Table 1 already include posterior regularization. We apologize for not pointing this out in the manuscript. We have updated our manuscript to reflect this.\", \"\\u2018comparing the CF-VAE models with and without regularizations\\u2019 - We have added additional results in Figure 10 of Appendix E illustrating the effect of our posterior regularization scheme on each of the four terms of our objective, 1. The data log-likelihood, 2. The entropy of the posterior, 3. The log-likelihood under the base Gaussian distribution of the conditional flow prior, 4. The log-determinant of the Jacobian. First, we see that with our posterior regularization scheme, our CF-VAE focuses on explaining the data well -- the data log-likelihood is best with our posterior regularization (pR) scheme, with C=0.2 having an advantage over C={0.05,0.1}. Furthermore, we see that the Jacobian term of our conditional flow dominates while the entropy term decreases -- the contraction of the base density is favoured. We also experimented with re-weighting these terms (although it's no longer a valid lower bound on the true data log-likelihood). This led to the opposite behaviour -- the entropy term dominates over the Jacobian term at the cost of the data log-likelihood. 
On the other hand, we observe that all terms of our objective are stable with our posterior regularization scheme, illustrating its advantage.\", \"Finally, we thank the reviewer for voicing her/his concerns and helping us improve our work. We would be happy to answer any remaining questions.\"]}", "{\"title\": \"Response to Review #2 - 1/2\", \"comment\": [\"We thank the reviewer for the comments and address them here in detail.\", \"\\u2018In general I like the idea, and the presentation seems solid to a large degree.\\u2019 - Thank you.\", \"\\u2018the statements p(y|x) = p(y|x, z) p(z | x) and p(y|x) = p(y|z) p(z|x)\\u2019 - We thank you for pointing out these typos. To clarify, these statements are missing the integral over z, e.g. p(y|x) = \\\\int p(y|x, z) p(z | x) dz. Additionally, regarding the second statement, please note that we assume a strong conditional normalising flow based prior that can encode conditioning information in the latent space such that p(y|x,z) = p(y|z). We have updated the text to reflect this.\", \"\\u2018prior work [...] imposes a uni-modal standard Gaussian prior\\u2019 - We apologize for this lack of clarity and we thank you for pointing out [1,2]. We have updated the manuscript (including the abstract) and included these references. However, there seems to be a misunderstanding here. First, we have included extensive references to prior work on expressive priors in the introduction and related work section. Secondly, please note that [1,2] use sequential latent variables - a latent variable is sampled at every time-step. Our CF-VAE (following prior work e.g. Lee et al., 2017; Bhattacharyya et al., 2018) samples a global latent variable for prediction of the entire future sequence. The references [1,2] do impose uni-modal Gaussian priors at each time-step. Please refer to page 4 of [1] which states \\u201cIn this work, we restrict ourselves to a standard Normal prior\\u201d. 
Similarly, Equation (5) of [2] states the same. Therefore, we believe that there are significant differences between [1,2] and our work.\", \"\\u2018recently published work [3]\\u2019 - Please note, this work [3] was submitted to arXiv on 23rd August, 2019 (https://arxiv.org/abs/1908.08750). Furthermore, the proceedings were published (to the best of our knowledge) in September 2019 (https://e-nns.org/icann2019/) -- and the content is behind a paywall. Our work was submitted to arXiv on the 24th of August, 2019 (this can be independently verified; please also note that ICLR does allow submission to arXiv). Also please note, ICLR (https://iclr.cc/Conferences/2019/Reviewer_Guidelines - there is no updated version for 2020) has the policy - \\u201cno paper will be considered prior work if it appeared on arxiv, or another online venue, less than 30 days prior to the ICLR deadline.\\u201d Therefore, following the ICLR policy, we consider this parallel work.\", \"However, we found the work [3] very interesting. We have added [3] as a reference in our manuscript and added a discussion. We believe that the main difference between [3] and our condition regularization scheme is that we employ this regularization to deal with posterior collapse only in the case of distributions with dominant modes. We do not always need this regularization to learn rich latent spaces, e.g. in the case of the MNIST Sequences and Stanford Drone datasets. We also found the proposed CDV prior in [3] very interesting. Therefore, we include additional experiments with the proposed CDV prior of [3] in Appendix E.\", \"Please also consider that the condition regularization scheme is not our main contribution. We propose the first conditional normalizing flow based model for structured sequence prediction, the first conditional non-linear flows along with the posterior regularization scheme. 
Therefore, we believe that our work is significantly distinct from [3].\"]}", "{\"title\": \"Response to Review #2 - 2/2\", \"comment\": [\"\\u2018the expressivity of the model is not reduced\\u2019 - We apologize for the lack of clarity. Here, by expressivity we refer to whether the marginal posterior distribution of latents q_{\\\\phi}(z|x) is expressive enough to explain the data and whether the prior can match the posterior -- demonstrated by sample quality at test time. We have updated the text. To better analyze this we provide two additional sets of experiments in Table 5 and Figure 10 in Appendix E: 1. We show the data log-likelihood during training for various values of C and without pR; 2. We additionally report the test log-likelihood for the corresponding values of C. Note that, with pR (fixed C) the ELBO would always be less than or equal to the ELBO without pR. However (irrespective of the total value of the ELBO), we see that we consistently obtain better data log-likelihoods during training with pR in Figure 10 (a). As mentioned in the manuscript, a constant value of C encourages our model to concentrate on explaining the data. The objective without posterior regularization is dominated by the Jacobian at the cost of the data log-likelihood (Figure 10 (b) vs Figure 10 (d)) -- while the likelihood under the base distribution is identical. This is further illustrated by our test log-likelihoods which show that our conditional flow prior can scale to deal with different values of fixed C=[0.05,0.30] and thus leads to better sample quality at test time (also see Figure 3).\", \"We apologize for our writing style. We tried to clearly present and highlight our contributions. We have tried to improve our writing style in the current draft.\", \"Finally, we thank the reviewer for voicing her/his concerns and helping us improve our work. 
We would be happy to answer any remaining questions.\"]}", "{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper demonstrates how normalising flows can be conditioned. The method is then demonstrated on a set of sequential experiments which show improvements over the considered baselines.\\n\\nI recommend rejection of the paper, but I can see myself changing that assessment if certain improvements are made. The central points are:\\n- the paper has errors,\\n- the paper does not respect some related work and has been published previously in parts,\\n- the paper has a claim that is unsupported in my view,\\n- the paper is overcrowded with annoying marketing language; the word \\\"novel\\\" appears 16 times according to my pdf viewer.\\n\\nIn general I like the idea, and the presentation seems solid to a large degree. However, the above points are a show stopper for me personally.\\n\\nFor one, the statements \\n\\n- p(y|x) = p(y|x, z) p(z | x) and\\n- p(y|x) = p(y|z) p(z|x),\\n\\nare problematic. I would like the authors to clarify how they arrive at these.\\n\\nThe paper starts with the claim that \\\"prior work [...] imposes a uni-modal standard Gaussian prior on the latent variables\\\". This is just wrong. The whole literature of stochastic recurrent models does not do this. See [1, 2] for starting points. Since the authors place their work in the setup of sequential prediction, this is what has to be respected.\\n\\nFurther, the authors do not seem to be aware of a recently published work [3] that addresses *exactly* this problem. 
To quote from their abstract: \\\"To this end, we modify the latent variable model by defining the likelihood as a function of the latent variable only and [sic] introduce an expressive multimodal prior to enable the model for capturing semantically meaningful features of the data.\\\"\\n\\nI have two more questions with respect to the proposed regularisations.\\n\\nFirst, I would ask the authors to comment on the relationship of cR and the method proposed in [3]. To me, it appears as if cR is not novel, but has instead been proposed in [3] previously.\\n\\nSecond, pR fixes the variance of q. The authors claim that the normalising flow of the conditional can undo this fixing by adequately scaling the prior. Hence, so the claim, the expressivity of the model is not reduced.\\n\\nThis prohibits the posteriors of two distinct data points from sharing the same mean but not the same variance. \\n\\nI request the authors to make a more formal analysis of this, as I am not convinced how the expressivity of the model is maintained and what influence this has on the ELBO.\\n\\nReferences\\n[1] Bayer, Justin, and Christian Osendorfer. \\\"Learning stochastic recurrent networks.\\\" arXiv preprint arXiv:1411.7610 (2014).\\n[2] Chung, Junyoung, et al. \\\"A recurrent latent variable model for sequential data.\\\" Advances in neural information processing systems. 2015.\\n[3] Klushyn, Alexej, et al. \\\"Increasing the Generalisation Capacity of Conditional VAEs.\\\" International Conference on Artificial Neural Networks. Springer, Cham, 2019.\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #1\", \"review\": \"The work proposes a method to improve conditional VAE with a learnable prior distribution using normalizing flow. 
The authors also design two regularization methods for the CF-VAE to improve training stability and avoid posterior collapse. The paper is clearly motivated and easy to follow. Experiment results on MNIST, Stanford Drone and HighD datasets show that the proposed model achieves better results than previous state-of-the-art models by significant margins.\\n\\nHowever, the reviewer has the following comments on improving the paper:\\n\\nThe motivation of the conditional normalizing flow design could be made more clear. The posterior regularization originates from the problem that the log Jacobian term encourages contraction of the base distribution. The log Jacobian term would be zero and would not encourage the contraction of the base distribution if the normalizing flow was volume-preserving, like NICE (http://proceedings.mlr.press/v37/rezende15.pdf, https://arxiv.org/pdf/1410.8516.pdf), which could be converted into a conditional normalizing flow. On the MNIST results, the CF-VAE model with the proposed conditional normalizing flow even has worse performance than the affine flow model without the regularization. Therefore, clarifying the motivation behind this design choice is important.\\n\\nThe work claims the two regularization methods are used to avoid a low-entropy prior and posterior collapse. But the claims are not fully substantiated in the experimental results. 
It would be better if the paper explicitly compares the CF-VAE models with and without regularizations in terms of the entropy of the prior distribution and KL divergence.\"}", "{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper proposes a combination of conditional VAE with normalising flow priors and posterior regularisation strategies to capture the diversity of multi-modal trajectories of complex motion patterns. The paper argues that more flexible priors over the latent space can provide posteriors that more closely resemble the trajectories observed in the training data. To this end, the paper presents a derivation of the evidence lower bound for VAEs with normalising flows and discusses the effect of fixing the variance of the posterior to reduce instability during training. Additionally, it shows that conditioning the regularisation on whether or not the dataset contains a dominating mode leads to more diversity and captures minor modes more effectively. Experiments are reported on sequence datasets of handwritten digits, and two datasets with trajectories of vehicles in traffic.\\n\\nA central point the paper makes is the importance of prior distributions for the latent space in VAEs such that it can capture diverse modes of trajectories. It is well known that more flexible priors such as MoG lead to better generative power as shown in Tomczak and Welling, 2018. The paper focuses on an extension of the work by Ziegler and Rush, 2019 which proposes normalising flows as priors to capture sequences, conditioned on the initial part of the trajectory. This extension is relatively simple, but does address the specifics of the problem well. \\n\\nIn general, the paper is well written and clear. 
The main innovation, in my opinion, is the combination of several ideas applied to the problem of sequence prediction. While none of the ideas, in isolation, are significantly new, the combination can be useful to this particular problem. However, I would like feedback from the authors on the following two main points below which are the main weaknesses of the paper:\\n\\n1) Posterior regularisation: The posterior regularisation strategies, while intuitive, are very ad-hoc and somewhat contrary to the Bayesian framework. It is difficult to see how the variance in pR and the \\\"dominant mode\\\" detector in cR can be estimated automatically. Within a Bayesian framework it would be much more natural to place a prior distribution over the variance and marginalise it out within the variational inference procedure. For the other regularisation (cR), how is the dominant mode detected? \\n\\n2) Experiments: A major concern reported throughout the paper is the instability of training and the risk for overfitting. I do not think the experiments demonstrate how stable and robust the method is to different initialisations, seeds, training data shuffles, etc. I strongly suggest that the authors run cross-validation experiments and report the mean and standard deviation for all methods being compared. Also, how sensitive are the results to different values of C? How to decide whether to use cR or not when we don't have access to the ground truth?\"}" ] }
HkeMYJHYvS
High-Frequency guided Curriculum Learning for Class-specific Object Boundary Detection
[ "VSR Veeravasarapu", "Deepak Mittal", "Abhishek Goel", "Maneesh Singh" ]
This work addresses class-specific object boundary extraction, i.e., retrieving boundary pixels that belong to a class of objects in the given image. Although recent ConvNet-based approaches demonstrate impressive results, we notice that they produce several false alarms and misdetections when used in real-world applications. We hypothesize that although boundary detection is simple at some pixels that are rooted in identifiable high-frequency locations, other pixels pose a higher level of difficulty, for instance, region pixels with an appearance similar to the boundaries; or boundary pixels with insignificant edge strengths. Therefore, the training process needs to account for different levels of learning complexity in different regions to overcome false alarms. In this work, we devise a curriculum-learning-based training process for object boundary detection. This multi-stage training process first trains the network at simpler pixels (with sufficient edge strengths) and then at harder pixels in the later stages of the curriculum. We also propose a novel system for object boundary detection that relies on a fully convolutional neural network (FCN) and wavelet decomposition of image frequencies. This system uses high-frequency bands from the wavelet pyramid and augments them to conv features from different layers of FCN. Our ablation studies with the contourMNIST dataset, a set of digit contours simulated from MNIST, demonstrate that this explicit high-frequency augmentation helps the model to converge faster. Our model trained by the proposed curriculum scheme outperforms a state-of-the-art object boundary detection method by a significant margin on a challenging aerial image dataset.
[ "Computer Vision", "Object Contour Detection", "Curriculum Learning", "Wavelets", "Aerial Imagery" ]
Reject
https://openreview.net/pdf?id=HkeMYJHYvS
https://openreview.net/forum?id=HkeMYJHYvS
ICLR.cc/2020/Conference
2020
{ "note_id": [ "UAfCAtXDUk", "r1grXbxL5H", "Bkx3Oy1VqB", "rkgtgQXRYB" ], "note_type": [ "decision", "official_review", "official_review", "official_review" ], "note_created": [ 1576798733614, 1572368669003, 1572233076355, 1571857137149 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1833/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper1833/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper1833/AnonReviewer2" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"This paper received all negative reviewers, and the scores were kept after the rebuttal. The authors are encouraged to submit their work to a computer vision conference where this kind of work may be more appreciated. Furthermore, including stronger baselines such as Acuna et al is recommended.\", \"title\": \"Paper Decision\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper shows the efficiency of curriculum learning and using problem specific features for contour detection.\\n\\nThe authors consider a network trained for class-specific edge detection (e.g. outlining edges of roads in an image). They propose two problem domain tricks to improve the performance:\\n- use curriculum learning by training the network to first detect the \\\"easy\\\" edges, i.e. 
edges found also using the Canny edge detector\\n- add high frequency wavelet coefficients as additional feature maps to the convnet.\", \"the_two_techniques_prove_important_on_two_tasks\": \"- modified MNIST edge detection\\n- road boundary detection in aerial imagery.\\n\\nMaybe the most important aspect of the paper is that it shows that with little data (the real world aerial imagery dataset had only 11 labeled tiles) manual feature engineering and smart cost function selection are still relevant. \\nSince this is a common pattern in many application domains, such as specialized medical image processing where labeled data is scarce, the paper is important. However, it is not clear if the proposed changes are needed when more labeled data is available and how much they overfit to the small test set.\\n\\nThe paper could be strengthened by analysing the impact of the proposed problem-dependent CL and features versus the amount of available training data. Are they still relevant with 100 labeled images?\\nThese experiments could even be run on the artificial MNIST set.\\n\\nMoreover, some analysis of result significance is needed. On the real-world dataset there is only 1 test case!! How much was the network tuned to properly work on it? Maybe the authors can run a cross-validation to show that the results don't overfit to this one test image?\", \"minor_remarks\": \"You refer to Stephane Mallat's book as Stephane 1999; this is wrong, his first name is Stephane and his last name is Mallat, please fix the bibliography and use Mallat 1999.\"}", "{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The main idea of the paper is adding a curriculum learning-based extension to CASENet, a boundary detection method from 2017. 
In the first phase, the loss emphasizes easier examples with high gradient in the image, and in the second phase, the method is trained on all boundary pixels. This change seems to improve edge detection performance on a toy MNIST and an aerial dataset.\\n\\nA second innovation claimed by the authors is adding Wavelet decomposition-based processing into the net. Unfortunately, this mostly only speeds up learning, as the ablation does not show meaningful improvements relative to the error bounds in later stages of training. Furthermore, the paper lacks a discussion of related work on incorporating wavelet ideas into neural networks. For example: \\n-- Generic Deep Networks with Wavelet Scattering, by Oyallon et al. \\n-- Invariant scattering convolution networks, by Bruna et al \\nand multiple more recent ones. Without either clear performance gains or more in-depth discussion of this novelty, it is not clear how to take it into account. \\n\\nWhen reading the paper, it appears that \\\"boundary detection\\\" for the cases that the authors are exploring is very directly related to 2-class semantic segmentation (road / non-road), the only difference being that the edge boundaries are weighted much higher in the cross-entropy loss. As such, there is a lot more recent net architecture work for semantic segmentation that should be directly applicable, and should perform much better than CASENet when adapted to the task. As a result, the experiments and the significance of this paper are rather marginal. \\n\\nIn experimental results, the authors threshold prediction with 0.5, which is suboptimal. The resulting metric, which is just \\\"accuracy\\\", is incorrectly called \\\"average precision\\\". Instead, the true definition of average precision should be used, which depends not on a potentially suboptimal fixed threshold but on the area under the precision-recall curve. 
Finally, it would be helpful to do ablation and confidence bounds also on the main aerial road results, as the 15% gain is significantly more than the gain that appears in the toy dataset.\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"Summary: The authors suggest two improvements to boundary detection models: (1) a curriculum learning approach, and (2) augmenting CNNs with features derived from a wavelet transform. For (1), they train half of the epochs with a target boundary that is the intersection between a Canny edge filter and the dilated groundtruth. The second half of epochs is with the normal groundtruth. For (2), they compute multiscale wavelet transforms, and combine them with each scale of CNN features. They find on a toy MNIST example that the wavelet transform doesn\\u2019t impact results very much and curriculum learning seems to provide some gains. On the Aerial Road Contours dataset, they find an improvement of ~15% mAP over the prior baseline (CASENet).\", \"i_have_several_concerns_with_this_work\": [\"The idea of using wavelet transforms to augment CNNs has been more thoroughly explored in prior work (e.g., see [1]).\", \"No comparison to existing SOTA segmentation models (e.g., [2]). These semantic / instance segmentation models can easily be adapted to the task of boundary detection. I suspect the baseline here is weak.\", \"Section 6 is severely unfinished. The explanation is sparse and there are no quantitative results -- just the output of the model overlaid on one example.\", \"The choice of curriculum learning task is arbitrary, and there are no ablations explaining why this is a reasonable task. For example, what about random subsets of pixels? 
At the moment, it offers no insight for practitioners.\", \"There are no ablations for the Aerial Road Contours experiments. This seems necessary because it is the only realistic dataset evaluated in this work. The MNIST experimental results appear qualitatively different from the Contours experiment. For example, they show that wavelet features do not make much of a difference, but does it make a difference for Contours?\", \"Altogether, this work unfortunately offers few insights to vision practitioners, let alone general practitioners. Substantial work needs to be devoted to expanding experimental coverage.\", \"[1] Wavelet Convolutional Neural Networks. Shin Fujieda, Kohei Takayama, Toshiya Hachisuka\", \"[2] TensorMask: A Foundation for Dense Object Segmentation. Xinlei Chen, Ross Girshick, Kaiming He, Piotr Doll\\u00e1r\"]}" ] }
SJxzFySKwH
On the Equivalence between Positional Node Embeddings and Structural Graph Representations
[ "Balasubramaniam Srinivasan", "Bruno Ribeiro" ]
This work provides the first unifying theoretical framework for node (positional) embeddings and structural graph representations, bridging methods like matrix factorization and graph neural networks. Using invariant theory, we show that the relationship between structural representations and node embeddings is analogous to that of a distribution and its samples. We prove that all tasks that can be performed by node embeddings can also be performed by structural representations and vice-versa. We also show that the concept of transductive and inductive learning is unrelated to node embeddings and graph representations, clearing another source of confusion in the literature. Finally, we introduce new practical guidelines to generating and using node embeddings, which further augments standard operating procedures used today.
[ "Graph Neural Networks", "Structural Graph Representations", "Node Embeddings", "Relational Learning", "Invariant Theory", "Theory", "Deep Learning", "Representational Power", "Graph Isomorphism" ]
Accept (Poster)
https://openreview.net/pdf?id=SJxzFySKwH
https://openreview.net/forum?id=SJxzFySKwH
ICLR.cc/2020/Conference
2020
{ "note_id": [ "MgBVQnhadpa", "wRgevPLlN9", "j5lRRct8qm", "cXM73QFK0z", "He-sSfmoat", "sNbwqRtqb1", "r1ePLWdhsH", "SJeRzb_njH", "H1gU0lOnoH", "SkgQ7VmcjB", "r1g1qS8VjH", "r1gcBHIVjr", "B1eQ7B8NjS", "SkgiTN8EjS", "SkgAYSx2qr", "HyeXHFGS9B", "S1ewogbg5r" ], "note_type": [ "official_comment", "official_comment", "comment", "official_comment", "comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1600737290536, 1581105814826, 1580090609617, 1578573388548, 1578469605954, 1576798733583, 1573843279163, 1573843221686, 1573843150339, 1573692442909, 1573311879366, 1573311809537, 1573311770905, 1573311683288, 1572763013639, 1572313403326, 1571979422839 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Paper1832/Authors" ], [ "ICLR.cc/2020/Conference/Paper1832/Authors" ], [ "~Oh-Hyun_Kwon1" ], [ "ICLR.cc/2020/Conference/Paper1832/Authors" ], [ "~Ziwei_Zhang1" ], [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1832/Authors" ], [ "ICLR.cc/2020/Conference/Paper1832/Authors" ], [ "ICLR.cc/2020/Conference/Paper1832/Authors" ], [ "ICLR.cc/2020/Conference/Paper1832/Authors" ], [ "ICLR.cc/2020/Conference/Paper1832/Authors" ], [ "ICLR.cc/2020/Conference/Paper1832/Authors" ], [ "ICLR.cc/2020/Conference/Paper1832/Authors" ], [ "ICLR.cc/2020/Conference/Paper1832/Authors" ], [ "ICLR.cc/2020/Conference/Paper1832/AnonReviewer4" ], [ "ICLR.cc/2020/Conference/Paper1832/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper1832/AnonReviewer2" ] ], "structured_content_str": [ "{\"title\": \"Paper Revised\", \"comment\": \"This version corrects some typos in the definition of $\\\\Sigma$, it should be $\\\\Sigma_n$\\nArxiv Link -> https://arxiv.org/abs/1910.00452\\nCode -> https://github.com/PurdueMINDS/Equivalence\"}", "{\"title\": \"Reply to comment\", \"comment\": 
\"Thank you very much for the comment and the interest in our work. We are working to get a complete repository with all the models, which are easily reusable in other settings as well.\"}", "{\"title\": \"Where can I find the source code?\", \"comment\": \"Amazing work!\\nI am looking for the source code for this paper (esp. CGNN and MC-SVD).\\nThe Appendix mentioned that the code is provided but I am not sure where I can find it.\\nCould you let me know where can I find it?\\nThanks!\"}", "{\"title\": \"Response to Question Regarding Experiments in Section 4.3\", \"comment\": \"Thank you for your comment.\\nWith regard to your comment, we use multiple SVD samples (not run until convergence - just one step of optimization) for the results with just MC-SVD (This is also the case for the food web example). SVD is run until convergence only when we specify MC-SVD $^\\\\dagger$ (with the dagger superscript) in Table 1. To address your other question, SVD run until convergence is unique only when the graph is devoid of isomorphic nodes, otherwise even this would require multiple samples.\\nWe will update the manuscript to make this more explict.\\nThanks,\\nAuthors\"}", "{\"title\": \"Regarding Experiments in Section 4.3\", \"comment\": \"Thank you for this insightful work!\\nHowever, I do have a question regarding \\\"Structural node representations from node embeddings\\\" in Section 4.3. The authors mention running multiple SVD (until convergence) \\\"with the sources of randomness being due to a random permutation of the adjacency matrix given as input to the SVD method and the random seed it uses\\\". However, from linear algebra we know the results of SVD are deterministic (up to a sign, which can be easily dealt with). For the food web example, the graph contains two set of eigenvectors (with the same eigenvalue), each set having all-zero elements for a connected component. As a result, the 'randomness' seems to be purely from how to arrange these eigenvectors? 
(In other words, for a graph with distinct eigenvalues, the results should be exactly the same). In any case, I would suggest that a normal matrix factorization using gradient descent would be a better example, since the randomness is clearer in MF than in SVD.\"}", "{\"decision\": \"Accept (Poster)\", \"comment\": \"The paper shows the relationship between node embeddings and structural graph representations. By careful definition of what structural node representation means, and what node embedding means, using the permutation group, the authors show in Theorem 2 that node embeddings cannot represent any extra information that is not already in the structural representation. The paper then provides empirical experiments on three tasks, and shows in a fourth task an illustration of the theoretical results.\\n\\nThe reviewers of the paper scored the paper highly, but with low confidence. I read the paper myself (unfortunately not with a lot of time), with the aim of increasing the confidence of the resulting decision. The main gap in the paper is between the phrases \\\"structural node representation\\\" and \\\"node embedding\\\", and their theoretical definitions. The analogy of distribution and its samples follows unsurprisingly from the definitions (8 and 12), but the interpretation of those definitions as the corresponding English phrases is not obvious by only looking at the definitions. There also seems to be a sleight of hand going on with the most expressive representations (Definitions 9 and 11), which is used to make the conditional independence statement of Theorem 2. 
The authors should clarify in the final version whether the existence of such a representation can be shown, or, even better, a constructive way to get it from data.\\n\\nGiven the significance of the theoretical results, the authors should improve the introduction of the two main concepts by:\\n- relating them to prior work (one way is to move Section 5 towards the front)\\n- explaining in greater detail why Definitions 8 and 12 correspond to the two concepts. For example, expanding the part of the proof of Corollary 1 about SVD, to make clear what Definition 12 means.\\n- a corresponding simple example of Definition 8 to relate to a classical method.\\n\\nThe paper provides a nice connection between two disparate concepts. Unfortunately, the connection uses graph invariance and equivariance, which is unfamiliar to many of the ICLR audience. On balance, I believe that the authors can improve the presentation such that a reader can understand the implications of the connection without being an expert in graph isomorphism. As such, I am recommending an accept.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Any further questions or comments?\", \"comment\": \"We believe we have addressed all your concerns. Please let us know if you have any further questions or comments.\"}", "{\"title\": \"Any further questions or comments?\", \"comment\": \"Please let us know if you have any further questions or comments.\"}", "{\"title\": \"Any further questions or comments?\", \"comment\": \"We believe we have addressed all your concerns. Please let us know if you have any further questions or comments.\"}", "{\"title\": \"Paper Revised\", \"comment\": \"On further reviewing the proof of Theorem 2, we have made the proof easier to follow by fixing a bug in the first paragraph of the proof. The condition is easier to understand since it clearly follows from Theorem 1. 
Moreover, in Theorem 1 we further emphasized that Y is defined over a \\\"set\\\" S (A set is invariant to its ordering). It was obvious since isomorphic sets must have the same distribution over Y, but we feel this may further help future readers.\"}", "{\"title\": \"Response to Official Blind Review #2\", \"comment\": \"Thank you for the comments.\\nTackling this open problem, which brings matrix factorization (and other node embedding techniques) and graph neural networks under the same umbrella, redefines node embeddings (a 115 year-old concept), and fixes a decade of misunderstandings, requires the mathematical machinery of abstract algebra and modern probability theory, i.e., group and measure theories, respectively. We now connect some of the measure theory tools to causality, to help readers familiar with causal models. We found that balancing rigour, intuition, experiments, and page limit was extremely challenging for this paper. In a way, there may not be a perfect combination.\\n\\nHowever, we strive to make the paper an easier read to a general audience, since we believe this to be a fundamental contribution that will survive the test of time. In this regard, we have now added a new section in the appendix with a visual aid; it is worth checking. The added illustrations and discussion further showcase the kind of representations/ embeddings which these techniques learn, while emphasizing why the prevalent understanding of inductive and transductive learning is a misconception.\\n\\nWe hope that, with our clarifications, the reviewer will see our fundamental contributions, and change to \\u201cAccept\\u201d (a fairer score given the significance of the work).\"}", "{\"title\": \"Response to Official Blind Review #3\", \"comment\": \"Thank you for the positive comments. 
And we would like to stress your importance in the discussion phase, as the only reviewer who has published papers in the area (the other two reviewers selected \\u201cI do not know much about this area\\u201d).\\n\\nWe would like to emphasize that our work also has tremendous practical implications. Through it, we now understand why standard GNNs fail to predict links, and why methods that fix the issue (RGNN, SEAL, \\u2026) will propose randomized methods (because they rely on node embeddings to learn a joint two-node representation). Moreover, all these years we have been using Monte Carlo methods (matrix factorization, variational GNNs) without treating them as Monte Carlo: e.g., multiple matrix factorizations (MF\\u2019s) should be better than just one, and non-unique factorizations (variance), previously perceived as a foe in numerical methods, is now a potential friend. More precisely, we prove any graph task *must rely* on transforming the Monte Carlo samples (node embeddings) back into a structural representation. More samples => better representations. Hence, a method with more samples (where high-variance samples are not very informative) [e.g., MC-SVD] may be more accurate than a method with fewer samples (where samples are more informative) [e.g., SVD]. To the best of our knowledge, the research community was unaware of this fundamental trade-off. \\n\\nIn addition, we also prove that all tasks that can be performed by node embeddings can also be performed by structural representations and vice-versa - which allow us to do tasks with GNNs and matrix factorization that were perceived to be beyond the reach of these methods, respectively. We also expect our results will spark a number of hybrid MF-GNN papers.\"}", "{\"title\": \"Response to Official Blind Review #4\", \"comment\": \"Thank you for the comments. 
Indeed, we also believe our work is foundational and will inspire new hybrid (matrix factorization - graph neural network) applied research in the future, besides more theoretical work. As you point out - our theory and results show that a single node embedding sample might be insufficient, and to this end we introduce new practical guidelines to generating and using node embeddings (in the form of MC-SVD and CGNN\\u2019s), empowering them to be used for tasks which were previously perceived to be beyond the reach of factorization methods.\\n\\nRegarding the question about graphs with node/ edge features - the datasets we consider in our quantitative experiments are endowed with node features, whereas, to simplify the exposition, those in the qualitative experiments do not possess node or edge features. Our theory is designed to hold for any type of node and edge attributes (the empirical results showcase this with node attributes).\", \"introduction\": \"We have now added a pointer to a new section in the appendix with a visual aid to help explain the concepts of structural representations and positional node embeddings. The added illustrations and discussion further showcase the kind of representations/ embeddings which these techniques learn, while emphasizing why the prevalent understanding of inductive and transductive learning is a misconception. Overall, we found that balancing rigour, intuition, experiments, and page limit was extremely challenging for this paper. In a way, there may not be a perfect combination. 
However, we strive to make the paper an easier read to a general audience and we welcome suggestions, since we believe this to be a fundamental contribution that will survive the test of time.\\n\\nWe hope that, with our clarifications, the reviewer will see our fundamental contributions, and change to \\u201cAccept\\u201d (a fairer score given the significance of the work).\"}", "{\"title\": \"Paper Revised\", \"comment\": \"We have added a pointer in the introduction to a new section in the appendix that helps visually illustrate the concepts of structural representations and positional node embeddings. A new lemma (relating causality and noise outsourcing) has been added to help readers familiar with causality understand the implications of Theorem 2 (it also applies to causal models).\"}", "{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #4\", \"review\": \"This paper tries to clarify some confounding concepts around graph/node representation learning by providing a unifying theoretical framework based on invariant theory. The authors define node embedding and structural graph representation based on a minimal number of requirements, which is, in turn, used to derive some useful properties and unify two seemingly different concepts. Based on these, a new graph neural network framework is proposed. The experiments on multiple tasks with multiple datasets validate the main claim that a single node embedding is insufficient to capture the structural representation of a graph.\\n\\nOverall, this paper tries to suggest a solution to a somewhat confusing and controversial problem that everyone recognises but for which it is hard to figure out a clear resolution. In that sense, this paper would serve as a good starting point by providing some theoretical baselines. 
I would like to see more discussion on this topic in the future.\\n\\nIn many graph problems where each node is endowed with a certain feature set, we often observe the case where two subsets of nodes are isomorphic in terms of link structure but with different edge or node features. It would be good if there were some discussion of these cases.\\n\\nHere's a minor comment on the presentation of the paper. Although the concepts of node embedding and structural representations get clearer as one reads, the introduction, where I couldn't find any reference to these, seems unlikely to clarify the difference between these two concepts. The familiar representation in Section 3 might be the first part where the readers could get some intuition about their differences.\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposes a unified theoretical framework for node embeddings and structural graph representations, which bridges methods like matrix factorization and graph neural networks.\\nThe theoretical analysis is sufficient and experimental results are good. 
The theory also shows that the concept of transductive and inductive learning is unrelated to node embeddings and graph representations, which clears another source of confusion in the literature.\\nIn my opinion, this is a theory paper and the proofs are sufficient.\"}", "{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The authors present mostly theoretical analysis indicating the equivalence of embeddings and structural graph representations. The authors argue that while most of the earlier work considers these to be different, they are actually the same, and give theory and empirical results to back up this claim.\\n\\nThis is not an easy paper to read, as the authors immediately jump into heavy notation without much intuition or visual aid. It would be much better to include some figures to help the readers appreciate the work.\\nThis continues throughout the experiments as well, where the authors are not very gentle when it comes to presentation.\\n\\nI gave a weak accept as I do not want my (unfortunately) weak review (due to the paper topic not being my strong point) to have a great effect on the final decision. However, it is clear that the paper can and should be better written, and the paper's ideas brought closer to the readers. At the moment this is definitely not the case.\\n\\n\\n======= AFTER THE REBUTTAL ===========\\n\\nThank you for working on making the paper more accessible. I am perfectly fine with seeing this paper at the conference, and will change the vote to Accept simply to not block on my vote and to ensure the SPC makes sure to thoroughly consider this paper (in case they do some sort of ranking of paper reviews for deciding who gets accepted). 
However please note that my confidence remains as low as previously.\\nIt is unfortunate that ICLR did not do any paper-reviewer matching like other conferences and that you are stuck with my weak review. But that is a different story that the organizers should make sure to address asap.\"}" ] }
rkgbYyHtwB
Disagreement-Regularized Imitation Learning
[ "Kiante Brantley", "Wen Sun", "Mikael Henaff" ]
We present a simple and effective algorithm designed to address the covariate shift problem in imitation learning. It operates by training an ensemble of policies on the expert demonstration data, and using the variance of their predictions as a cost which is minimized with RL together with a supervised behavioral cloning cost. Unlike adversarial imitation methods, it uses a fixed reward function which is easy to optimize. We prove a regret bound for the algorithm which is linear in the time horizon multiplied by a coefficient which we show to be low for certain problems in which behavioral cloning fails. We evaluate our algorithm empirically across multiple pixel-based Atari environments and continuous control tasks, and show that it matches or significantly outperforms behavioral cloning and generative adversarial imitation learning.
[ "imitation learning", "reinforcement learning", "uncertainty" ]
Accept (Spotlight)
https://openreview.net/pdf?id=rkgbYyHtwB
https://openreview.net/forum?id=rkgbYyHtwB
ICLR.cc/2020/Conference
2020
{ "note_id": [ "v0g0bcWdAYd", "ET7lzkMjg9", "X6vMEhBtg", "ryxdF6c2sS", "rygcE4chiS", "S1lhsQchsB", "rkeRVQ52ir", "SJgLU6tycB", "S1gya8oTKB", "Syx2DuzwFr", "HJg9IX6v_B", "S1lv4r5qvS" ], "note_type": [ "official_comment", "comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review", "official_comment", "comment" ], "note_created": [ 1588054385457, 1583345257135, 1576798733548, 1573854591628, 1573852210056, 1573852067686, 1573851957979, 1571949902087, 1571825335104, 1571395684450, 1570390865642, 1569527086589 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Paper1831/Area_Chair1" ], [ "~Akshay_Krishnamurthy1" ], [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1831/Authors" ], [ "ICLR.cc/2020/Conference/Paper1831/Authors" ], [ "ICLR.cc/2020/Conference/Paper1831/Authors" ], [ "ICLR.cc/2020/Conference/Paper1831/Authors" ], [ "ICLR.cc/2020/Conference/Paper1831/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper1831/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper1831/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper1831/Authors" ], [ "~Siddharth_Reddy1" ] ], "structured_content_str": [ "{\"title\": \"Response\", \"comment\": \"To clarify, it is interactive imitation learning in the sense that the algorithm can collect additional data in the environment. This is in contrast to supervised behavior cloning algorithms that only use demonstrations and no additional environment roll-outs.\"}", "{\"title\": \"Minor comment\", \"comment\": \"Hi -- I just want to point out that this paper is _not_ studying interactive imitation learning. It is considering the non-interactive setting, where we cannot query the expert, but we do see the expert's actions.\"}", "{\"decision\": \"Accept (Spotlight)\", \"comment\": \"This paper presents an approach for interactive imitation learning while avoiding an adversarial optimization by using ensembles. 
The reviewers agreed that the contributions were significant and the results were compelling. Hence, the paper should be accepted.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Updates\", \"comment\": \"We have made a number of updates to the paper in response to the comments, please see our answers below. We have also changed the colors of the plots to be more color-blind friendly.\"}", "{\"title\": \"Response to Reviewer #3\", \"comment\": [\"Thank you for the review. To address the questions/comments:\", \"Major questions/comments:\", \"It is true that we should not expect the policies in the ensemble to perform better than BC, since they are trained on the same limited data. However, the motivation is that even though they may make errors, the errors they make are likely to be different from each other. For example, if we look at several functions sampled from a Gaussian process posterior, these will tend to agree on the training data, but can look very different (both from the true function and each other) outside of the training data. Therefore we do not care so much about the quality of the ensemble policies (measured by how they would perform in the environment), but rather whether they exhibit low variance on the training data and higher variance off of it.\", \"We have updated the text to mention that other methods for posterior approximation are also possible (Bayes by Backprop, MC-dropout), and added additional experiments comparing the ensemble approach to MC-dropout in Appendix D2. It turns out that MC-dropout also works well, similarly to the ensemble method. This shows that our approach is not specific to the ensemble method which we use in most of our experiments.\", \"We have added the number of environment steps to the curves in Figure 2b, which shows the sample complexity. Note that since we use A2C as an RL optimizer in our experiments, we are not particularly sample efficient in terms of environment steps. 
Our general method is agnostic to the RL optimizer though, so more sample-efficient RL methods (such as model-based methods or others which reuse data more efficiently) could in principle be used as well.\"], \"minor_questions\": [\"Minibatch 4 was a typo, thanks for catching that. We use 16 parallel environments for A2C and have added this to the experiment details.\", \"We initially did not include GAIL in the continuous control experiments because there was not much headroom for improvement over BC. We will add these experiments for the next update.\"]}", "{\"title\": \"Response to Reviewer #1\", \"comment\": \"Thank you for the encouraging comments and we are glad you enjoyed reading the paper. Regarding the GAIL hyperparameters: this was a formatting issue and we have changed the text to refer to Table 2 where the GAIL hyperparameters are listed. We have also added the chain MDP example to the appendix and added references discussed in the previous comment thread.\"}", "{\"title\": \"Response to Reviewer #2\", \"comment\": \"Thank you for the detailed review and suggestions for improving the paper. We have made the following changes in response:\\n\\n- We have added references to the two suggested related works (Menda 2018 and Venkatraman 2015).\\n\\n1. We have clarified that Step 9 of the algorithm optimizes the expected clipped cost under the current policy. In our experiments we use A2C, which estimates the expected cost using rollouts from multiple parallel actors all sharing the same policy (16 in our case, we have added this to the experiment details in Appendix C).\\n\\n2-3. We have added ablation experiments in Appendix D1 showing the effect of the different choices for the cost clipping (negative vs. 0, not clipping at all). Having the range of the cost (or reward) include negative and positive values has a large impact on performance. 
We believe the reason is that if the cost is always positive (or reward is always negative), then an easy way to minimize the cost (or maximize reward) is for the agent to terminate the episode early. Some environments such as Mountain Car are in fact designed this way: all rewards are negative, and the optimal policy is to reach the goal (and thus terminate the episode) as soon as possible. In other environments however, terminating early is highly suboptimal (i.e. the agent dies and cannot collect any more reward). Including both positive and negative costs helps to avoid these issues. \\n\\n4. We train the different models in the ensemble starting from different initializations and using different bootstrap samples of the demonstration data (we have made this more clear in the text). While it is true that the degenerate case of all models converging to the same solution could potentially occur, our experiments and other works which successfully use ensembles for posterior approximation (mentioned in related work) suggest that this is rare in practice. We have also added experiments in Appendix D2 comparing ensembles to MC-dropout for posterior approximation, and found that dropout also works well - this shows that our approach is not specifically tied to the ensemble method. \\n\\n5. We have changed notation to use \\\\kappa^* for the optimum.\\n\\n6. We have specified the agent's start state, and changed the notation to be consistent with the original work.\\n\\n7. Our goal is to show that \\\\kappa^* is upper bounded by a constant independent of T, which translates into a better regret bound than BC when T becomes large. Since \\\\kappa^* is the minimum of \\\\kappa(U) for all subsets U of S, showing that \\\\kappa(U) is upper bounded by a constant for some U means that \\\\kappa^* is also. We have clarified this in the example.\\n\\n8. 
In Example 1, we have specified that we are using a Beta distribution to represent the posterior, whose parameters are determined by the state-action counts in the demonstration data (Beta/Dirichlets are standard choices for binomial/categorical distributions). For the state s_2 which is never visited, the Beta distribution becomes equivalent to a uniform distribution, which is where we get our value of the variance from. \\n\\n9. Most of the derivations do carry over to the continuous setting, but there are two steps in the last part of the proof of Lemma 1 that use properties of discrete states/actions: that \\\\alpha(U) >= 1, and that \\\\pi(a | s) \\\\leq 1 (note that for continuous actions, densities can become arbitrarily peaked so the last bound, which was used to bound \\\\beta(U), does not hold). We are currently working on the continuous case but our current results are for the tabular case.\", \"additional_feedback\": \"\", \"we_have_made_a_number_of_additional_changes\": \"fixing the citation, changing notation in the example, changing density to mass, added mention of Pinsker's inequality and changed the wording regarding the q threshold.\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"Summary of what the paper claims and contributes\\n---\\nThis paper proposes a new interactive imitation learning algorithm to address the covariate shift problem in imitation learning. It explicitly seeks to avoid settings interactive expert feedback (e.g. DAgger). The method is straightforward: 1. First, learn an ensemble of policies via KL-based Behavior Cloning 2. 
Then, learn a new policy via a new objective that combines the original Behavior Cloning objective with a \\\"disagreement\\\" loss, formed by computing the expected variance of the ensemble evaluated on state-action trajectories under the new policy. The intuition for the method is that by learning an ensemble, it will have low variance on in-distribution demonstration data, and high variance on out-of-distribution other data; by encouraging the policy to seek regions of low variance, it should result in a policy that more closely matches the demonstrator's state visitation distribution than Behavior-Cloning alone. Analysis in the discrete finite case shows that the algorithm achieves regret linear in \\\\kappa*T, where \\\\kappa is an environment- and expert-dependent constant. The analysis is instantiated for a simple MDP, and experiments comparing their algorithm on this restricted environment provide some evidence that the bound is achievable in practice.\\n\\nFurther experiments on a variety of Atari environments and continuous-control tasks from OpenAI Gym also 1) demonstrates that their algorithm outperforms Behavior Cloning in these settings 2) usually approaches expert performance with a small number of demonstrations, and 3) also shows that the uncertainty cost improves over time, indicating the final policy learns to visit states where the ensemble agrees, and that while doing so, improves performance on the underlying task.\\n\\nEvaluation\\n---\\n>Originality:\\nAre the tasks or methods new?\\nThe method is new.\\n\\nIs the work a novel combination of well-known techniques?\\nYes.\\n\\nIs it clear how this work differs from previous contributions?\\nYes.\\n\\nIs related work adequately cited?\", \"there_is_some_missing_discussion_of_related_works\": \"1. 
EnsembleDAgger (Menda 2018) also uses the variance of ensembles in Imitation Learning, but instead of using it to regularize on-policy learning, it uses it as an improved decision criterion by which to query an expert demonstrator.\\n2. Data as Demonstrator (Venkatraman 2015) uses on-policy learning to create \\\"corrections\\\" of time-series models (See their Fig 1), which is similar to this paper's intuition of seeking to push the learner back to places that are in-distribution of the expert demonstrations. That paper also achieves a linear regret bound under some assumptions.\\n\\n>Quality:\\nIs the submission technically sound?\\nMostly, although there are some issues:\\n1. Step 9 of the algorithm is ambiguous. What is the distribution of on-policy data that is fed into the cost? E.g. how many rollouts from the policy are collected?\\n2. Why is the clipped cost negative, as opposed to 0?\\n3. Why was a clipped cost used at all? This cost is different from that used in the theoretical analysis. Some justification and discussion is needed for why the new cost was used, and whether the analysis still applies when it's used.\\n4. Throughout most of the paper, p(\\pi | \\mathcal D) represents the model ensemble. However, no discussion was dedicated to what we should expect this distribution to look like in theory and in practice. It depends on how the ensemble is constructed / learned. A degenerate case would be if all models in the ensemble converged to the same local optimum, in which case they would agree everywhere, nullifying the cost penalty. Discussion of what properties this distribution must satisfy is missing. It probably needs full support over the space of policies such that the optimal policy is nearly realizable (within \\epsilon)?\\n5. \\kappa is overloaded: A. it's used as a function B. it's used as the optimal value of that same function. Consider using different notation for one of them, e.g. 
\\\\kappa^* for the optimum, or \\\\gamma for the function. Furthermore, it might help to make \\\\kappa's dependencies clearer, which would help illustrate its independence of T.\\n6. Example 1: the fact that the policy always starts at s_1 is missing from the description (at least, an equivalent assumption is made in Ross 2010)\\n7. Example 1: it's not clear that setting \\\\mathcal U = \\\\{s_1, s_2\\\\} achieves the optimum of \\\\kappa(\\\\mathcal U). Discussion of this aspect is needed.\\n8. Example 1: The statement that the variance is equivalent to the variance of the uniform distribution seems to be a strong assumption about p(\\\\pi | \\\\mathcal D). This missing assumption is related to point 4. I mentioned above^\\n9. The paper is missing discussion for why the analysis would not immediately extend to continuous state and action spaces.\\n\\nAre claims well supported by theoretical analysis or experimental results?\\nYes, although the experimental results would be made stronger if related approaches were considered, e.g. Reddy 2019. 
Right now, there's just a single method of comparison -- BC.\\n\\nIs this a complete piece of work or work in progress?\\nSeems complete.\\n\\nAre the authors careful and honest about evaluating both the strengths and weaknesses of their work?\\nI believe so -- noting that BC ended up performing similar in environments where there is less drift was a good addition.\\n\\n>Clarity:\\nIs the submission clearly written?\\nYes.\\n\\nIs it well organized?\\nYes.\\n\\nDoes it adequately inform the reader?\\nYes.\\n\\n>Significance:\\nAre the results important?\\nYes.\\n\\nAre others (researchers or practitioners) likely to use the ideas or build on them?\\nYes.\\n\\nDoes the submission address a difficult task in a better way than previous work?\\nYes.\\n\\nDoes it advance the state of the art in a demonstrable way?\\nYes.\\n\\nDoes it provide unique data, unique conclusions about existing data, or a unique theoretical or experimental approach?\\nUnique theoretical approach.\\n\\nAdditional feedback\\n---\", \"sec_3\": \"\\\"The threshold q defines a normal range of uncertainty based on the demonstration data, and values outside of this range incur a negative cost\\\". The logic of this statement is confusing. 1. It's not clear what \\\"outside\\\" means from the sentence alone (i.e. it should be \\\"above\\\"). 2. A single value doesn't define a range (i.e. 
state the lower value is 0).\\n\\nSec 4.1: \\\"high density\\\" -> \\\"high mass\\\"\\n\\nIt would help to have a diagram of \\mathcal U, \\mathcal S - \\mathcal U, \\alpha, \\beta, \\kappa.\\n\\nIt would be clearer if set notation was used for the complement of \\mathcal U, rather than \\beta's definition of s\\notin \\mathcal U.", "example_1": "citation should be Ross 2010, not Ross 2011.\\n\\nExample 1 has different notation than in Ross 2010 (consider changing to match)\\n\\nIt's possible that copying a model from the ensemble and fine-tuning it with the loss would yield a faster Algorithm (1). Would this work? What do the training curves (i.e. like the plots in Fig 3b) look like in that case?\\n\\nWhy does the breakout DRIL agent outperform the expert?\\n\\nMention that Pinsker's inequality yields the KL bound on total variation.\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper proposes an imitation learning algorithm that combines behavioral cloning with a regularizer that encourages the agent to visit states similar to the demonstrated states. The key idea is to use ensemble disagreement to approximate uncertainty, and use RL to train the imitation agent to visit states in which an ensemble of cloned imitation policies is least uncertain about which action the expert would take. Experiments on image-based Atari games show that the proposed method significantly outperforms BC and GAIL baselines in three games, and performs comparably or slightly better than the baselines in the remaining three games.\\n\\nOverall, I enjoyed reading this paper. 
It proposes a relatively simple imitation method with compelling empirical results.\", \"one_minor_comment\": \"on page 15, the sentence \\\"We initially performed a hyperparameter search on Breakout with 10 demonstrations over the following values: \\\" ends in a blank space, without actually providing any hyperparameter values. It would be nice if you could actually include those values, or at least how many different values were searched.\\n\\nThank you for addressing the comments about related work in an earlier thread (https://openreview.net/forum?id=rkgbYyHtwB&noteId=S1lv4r5qvS). Two follow-ups:\\n - The chain MDP example clearly illustrates why including the BC cost is important, and how DRIL differs from support estimation methods like RED. Thank you for the clarification.\\n - The focus of Sasaki et al. is on reducing the number of environment interactions, but their proposed method also addresses covariate shift: it fits a Q function that classifies whether the demonstration states are reachable from the current state, and thus encourages the agent to return to demonstrated states.\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": [\"Summary:\", \"The paper aims to address the covariate shift issue of behavior cloning (BC). The main idea of the paper is to learn a policy by minimizing a BC loss and an uncertainty loss. This uncertainty loss is defined as a variance of a policy posterior given by demonstration. To approximate this posterior, the paper uses an ensemble approach, where an ensemble of policies is learned from demonstrations. This approach leads to a method called disagreement-regularized imitation learning (DRIL). 
The paper proves for a tabular setting that DRIL has a linear regret bound in terms of the horizon, which is better than that of BC which has a quadratic regret bound. Empirical evaluation shows that DRIL outperforms BC in both discrete and continuous control tasks, and it outperforms GAIL in discrete control tasks.\", \"General comments:\", \"The paper proposes a simple but effective method to address the important issue of covariate shift. The method performs well empirically and has theoretical support (although only for a tabular setting). While there are some issues (see below), this is a good paper. I vote for acceptance.\", \"Major comments and questions:\", \"Accuracy of posterior approximation via ensemble.\", \"It is unclear whether the posterior approximated from the ensemble is accurate. More specifically, these ensemble policies are trained using BC loss. Under a limited amount of data (where BC fails), these policies would also fail and are inaccurate. Therefore, it should not be expected that a posterior from these inaccurate policies is accurate. Have the authors measured or analyzed the accuracy of these policies or that of the posterior? This important point is not mentioned or analyzed in the paper.\", \"Alternative approaches to posterior approximation and uncertainty computation.\", \"There are other approaches to obtain a posterior besides the ensemble approach, e.g., Bayesian neural networks. Such alternatives were not mentioned in the paper. Also, there are other quantities for measuring uncertainty besides the variance such as the entropy. These approaches and quantities have different pros and cons and they should be discussed in the paper.\", \"Sample complexity in terms of environment interactions.\", \"The sample complexity in terms of environment interactions is an important criterion for IL. 
I suggest that the authors include this criterion in the experiments.\", \"Minor questions:\", \"Why is the minibatch size only 4 in the experiments for all methods? This is clearly too small for a reasonable training of deep networks. Is this a typo?\", \"It is strange not to evaluate GAIL in the continuous control experiments, since GAIL was originally evaluated in these domains. I strongly suggest that the authors evaluate GAIL (and perhaps stronger methods such as VAIL (Peng et al., 2019)) in the continuous control experiments.\", \"---After reading authors' response---\", \"I have read the authors' response and other reviews. The authors addressed my comments in the response and the updated paper. I keep the same rating and recommend acceptance.\"]}", "{\"comment\": \"Thank you for the helpful pointers to the related work. The Random Expert Distillation (RED) method by Wang et al [1] is indeed relevant and we will include a discussion in the updated paper. Both RED and our method use an uncertainty measure derived from the demonstration data as a cost function which is minimized through RL, and which is designed to guide the policy back towards the demonstration data. There are two main differences between RED and our method: i) we use the variance of an ensemble as a measure of uncertainty, rather than random network distillation [3] and ii) we include a supervised behaviour cloning (BC) cost in addition to the uncertainty cost.\\n\\nIncluding the BC cost is actually quite important for our theoretical results. In Lemma 1, J_{exp}(\\pi) is broken up into two terms, one of which is bounded by the BC cost (scaled by alpha(U)) and one of which is bounded by the uncertainty cost (scaled by 1/beta(U)). By minimizing both of these costs, we minimize J_{exp}(\\pi) (scaled by kappa), which in turn translates into a regret bound. 
\\n\\nThe following example shows that minimizing the uncertainty cost alone without the BC cost can lead to highly sub-optimal policies if the demonstration data is generated by a stochastic policy which is only slightly suboptimal. Consider the following deterministic chain MDP:\\n\\ns0 <---> s1 <---> s2 <---> s3 \\n\\nSay the agent always starts in s1, and gets a reward of 1 in s3 and 0 elsewhere, and there are 2 actions: left and right (in s3, going right keeps the agent at s3, in s0 going left keeps the agent at s0).\", \"assume_the_demonstration_data_is_generated_by_a_policy_defined_as_follows\": \"- in s0, go right with probability 1\\n- in s1, go right with probability 1\\n- in s2, go right with probability 0.9, left with probability 0.1\\n- in s3, go right (i.e. stay at s3) with probability 1. \\n\\nIf both transitions (s2, right) and (s2, left) appear in the demonstration data, then (assuming realizability) RED will assign the same cost to both transitions. This means that a policy which cycles forever between s1 and s2 (always going left at s2, and never collecting reward) will have the same cost as a policy which goes right at s2 and then stays at s3 (thus collecting lots of reward). If we include a BC cost however, the policy will learn to assign a higher probability to going right at s2 and end up collecting reward. For both RED and our method, if we are realizable and optimization can be performed exactly, then the uncertainty cost will be set to zero for all transitions appearing in the demonstration data, regardless of their relative frequency. However, this problem can be avoided by combining the BC cost with the uncertainty cost. \\n\\nThe method of Sasaki et. al [2] is interesting and we will include a reference in related work. The focus of their work is somewhat different, i.e. reducing the number of environment interactions rather than addressing covariate shift. 
\\n\\nThank you for the recommendations regarding the description of SQIL, we will include them when we update the paper.\\n\\n[1] http://proceedings.mlr.press/v97/wang19d/wang19d.pdf \\n[2] https://openreview.net/forum?id=BkN5UoAqF7\\n[3] https://arxiv.org/pdf/1810.12894.pdf\", \"title\": \"Addressing Related work comments:\"}", "{\"comment\": \"Thank you for the interesting paper! I have two comments about related work.\\n\\nThere are two prior methods \\u2013 Random Expert Distillation (RED) by Wang et al. [1], and the implicit IRL method in Sasaki et al. [2] \\u2013 that aren\\u2019t mentioned in the paper, but are similar to the proposed method. In particular, the proposed method seems to take a similar approach to RED, except that it uses ensemble disagreement instead of random network distillation for density estimation of the demonstrations. It would be nice to discuss how the proposed method relates to the prior work.\\n\\nThe discussion of one of the prior methods \\u2013 SQIL by Reddy et al. \\u2013 mischaracterizes how SQIL works. The first paragraph on page 6 claims that SQIL requires careful reward decay and does not use a fixed reward function. In fact, SQIL uses a fixed reward function (r=+1 for demonstrations, r=0 for everything else), and does not modify or decay the rewards over time. It would be nice to adjust how SQIL is positioned in the related work. In my opinion, the proposed method differs from SQIL in that it uses a fixed reward function that is potentially less sparse and potentially easier to train the imitation agent with via RL.\\n\\nAgain, thank you for the interesting work. I look forward to seeing how the paper evolves, and hope that others working on imitation learning give it a read.\\n\\n[1] http://proceedings.mlr.press/v97/wang19d/wang19d.pdf\\n[2] https://openreview.net/forum?id=BkN5UoAqF7\", \"title\": \"Related work\"}" ] }
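The disagreement cost at the heart of the DRIL record above (ensemble variance on demonstration-trained policies, clipped to a fixed two-valued cost around the threshold q described in the reviews) can be illustrated with a minimal tabular sketch. This is our own toy version, not the paper's implementation: the function name, threshold value, and example probabilities are all illustrative assumptions.

```python
import numpy as np

# Toy version of DRIL's disagreement cost: the variance of an ensemble's
# action probabilities for a state, clipped to +1 above a threshold q
# (out-of-distribution, penalized) and -1 below it (in-distribution).
def disagreement_cost(ensemble_probs, q):
    """ensemble_probs: (E, A) array of action probabilities from E
    behavior-cloned ensemble members for a single state."""
    total_var = ensemble_probs.var(axis=0).sum()  # variance summed over actions
    return 1.0 if total_var > q else -1.0

# In-distribution state: members trained on the same demos agree closely.
agree = np.array([[0.90, 0.10], [0.88, 0.12], [0.91, 0.09]])
# Out-of-distribution state: members disagree widely.
disagree = np.array([[0.95, 0.05], [0.20, 0.80], [0.50, 0.50]])

q = 0.01  # illustrative threshold
print(disagreement_cost(agree, q))     # -1.0: low variance, no penalty
print(disagreement_cost(disagree, q))  # 1.0: high variance, penalized
```

Minimizing this cost with RL pushes the policy back toward states where the ensemble agrees, i.e. toward the demonstration distribution, which is the intuition the reviews discuss.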
B1eZYkHYPS
Shifted Randomized Singular Value Decomposition
[ "Ali Basirat" ]
We extend the randomized singular value decomposition (SVD) algorithm (Halko et al., 2011) to estimate the SVD of a shifted data matrix without explicitly constructing the matrix in the memory. With no loss in the accuracy of the original algorithm, the extended algorithm provides for a more efficient way of matrix factorization. The algorithm facilitates the low-rank approximation and principal component analysis (PCA) of off-center data matrices. When applied to different types of data matrices, our experimental results confirm the advantages of the extensions made to the original algorithm.
[ "SVD", "PCA", "Randomized Algorithms" ]
Reject
https://openreview.net/pdf?id=B1eZYkHYPS
https://openreview.net/forum?id=B1eZYkHYPS
ICLR.cc/2020/Conference
2020
{ "note_id": [ "zvhfGspnu1", "HylY8zcy5r", "B1eRnOfRYH", "rJlncd0aKS" ], "note_type": [ "decision", "official_review", "official_review", "official_review" ], "note_created": [ 1576798733518, 1571951185486, 1571854518003, 1571838100165 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1830/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper1830/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper1830/AnonReviewer2" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"The proposed algorithm is found to be a straightforward extension of the previous work, which is not sufficient to warrant publication in ICLR2020.\", \"title\": \"Paper Decision\"}", "{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper extends the existing randomized SVD algorithm by Halko et al. to propose a shifted randomized SVD algorithm. The proposed algorithm performs randomized SVD on \\\\bar{X}=X-\\\\mu 1^T with a given constant vector \\\\mu without explicitly constructing \\\\bar{X} from X.\\n\\nThe proposed algorithm seems to be a straightforward modification of the randomized SVD algorithm, so that the contribution of this paper would be incremental. Also, the claimed benefits of the proposed method are better accuracy, less memory usage, and less computational time. On memory usage and computational time, however, I could not find any experimental comparison in Section 5. At least in Section 5.3 where the data matrix is indeed sparse, there should be some comparison in these respects. 
Because of these reasons, I would not be able to recommend acceptance of this paper.\\n\\nIn Algorithm 1, line 6, I think that one has to subtract not \\\\mu 1^T but \\\\mu 1^T \\\\Omega.\\n\\nI guess that equation (12) comes from Theorem 1.2 in Halko et al. (2011). An explicit citation should be needed here to put an appropriate credit to that paper. Also, the conditions under which equation (12) holds should be explicitly stated. In fact, Theorem 1.2 in Halko et al. is about the expected reconstruction error with rank-2k factorization obtained by randomized SVD, which is not consistent with the authors claim in page 3, lines 4-5: The first top unused singular value is not \\\\sigma_{k+1} but \\\\sigma_{2k+1}.\\n\\nPage 1, line 25: that provides (for) the SVD estimation\\nPage 2, line 38: that span(s) the range\\nPage 4, line 6: a s(h)ifted matrix\\nPage 4, line 29: to be use(d) by the\\nPage 5, line 22: uniformly distribut(ion -> ed)\\nPage 5, lines 33, 38: Space is missing after the period.\\nPage 5, line 43: There is an extra comma.\\nPage 5, line 48: suggest(s) that\\nPage 6, line 4: approaches (to) zero\\nPage 6, line 14: 16 \\\\times 1979 should read 64 \\\\times 1979.\\nPage 7, line 2: is validate(d)\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposes a method for performing an SVD on a mean-subtracted matrix without having to actually subtract the mean from the matrix first.\\n\\nWhile the problem of calculating an SVD is fundamental, the paper\\u2019s idea and general problem it covers is a clear mismatch for ICLR in my opinion. The algorithm is an extension of a previous one. 
Since the idea is very simple, one would expect a lot of theory, but the main theoretical result in Section 4 can be copy-pasted from the previous algorithm since they share the same result. Three sources are cited, indicating either a very narrow view of the field, or overconfidence in the fundamental significance of the contribution.\n\nThe experiments show the technique works as advertised, but the importance of the result is low.\"}", "{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper adapts the approach by Halko to get an SVD using\\na low-rank concept for the case where the matrix is implicitly shifted.\\nHonestly - there is nothing wrong with this paper except the level\\nof contribution. I consider this work to be widely irrelevant. You\\ncan report this on arXiv if you like but I do not think it is important in general.\\nThe results show some effect - but not a relevant one. \\nFor ICLR this is much too little. 
And there is not much more to say.\"}" ] }
r1xxKJBKvr
PassNet: Learning pass probability surfaces from single-location labels. An architecture for visually-interpretable soccer analytics
[ "Javier Fernández", "Luke Bornn" ]
We propose a fully convolutional network architecture that is able to estimate a full surface of pass probabilities from single-location labels derived from high frequency spatio-temporal data of professional soccer matches. The network is able to perform remarkably well from low-level inputs by learning a feature hierarchy that produces predictions at different sampling levels that are merged together to preserve both coarse and fine detail. Our approach presents an extreme case of weakly supervised learning where there is just a single pixel correspondence between ground-truth outcomes and the predicted probability map. By providing not just an accurate evaluation of observed events but also a visual interpretation of the results of other potential actions, our approach opens the door for spatio-temporal decision-making analysis, an as-yet little-explored area in sports. Our proposed deep learning architecture can be easily adapted to solve many other related problems in sports analytics; we demonstrate this by extending the network to learn to estimate pass-selection likelihood.
[ "fully convolutional neural networks", "convolutional neural networks", "sports analytics", "interpretable machine learning", "deep learning" ]
Reject
https://openreview.net/pdf?id=r1xxKJBKvr
https://openreview.net/forum?id=r1xxKJBKvr
ICLR.cc/2020/Conference
2020
{ "note_id": [ "O9HqIH3Gc", "HJeEEZUtoS", "BJxH_iEYor", "rJxZUjVFoS", "S1gufiEFjS", "B1eW1o4FiB", "B1eBEXSAtS", "Sye0HBDpFB", "HJeJc8nvtr" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798733489, 1573638443571, 1573632876760, 1573632841239, 1573632784297, 1573632729024, 1571865389101, 1571808582115, 1571436166811 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1829/Area_Chair1" ], [ "ICLR.cc/2020/Conference/Paper1829/Authors" ], [ "ICLR.cc/2020/Conference/Paper1829/Authors" ], [ "ICLR.cc/2020/Conference/Paper1829/Authors" ], [ "ICLR.cc/2020/Conference/Paper1829/Authors" ], [ "ICLR.cc/2020/Conference/Paper1829/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper1829/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper1829/AnonReviewer3" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"The paper proposes PassNet, which is an architecture that produces a 2D map of probability of successful completion of a soccer pass. The architecture has some similarities with UNet and has downsampling and upsampling modules with a set of skip-connections between them.\", \"the_reviewers_raised_several_issues\": [\"Novelty compared to UNET\", \"Lack of ablation studies\", \"Uncertainty about what probabilities mean and issues regarding output interpretation.\", \"The authors have tried to address these concerns in their rebuttal and provided additional experiments. They also argue that the application area (sport analytics) of the paper is novel. Even though the application area is interesting and might lead to new problems, this paper did not get enough support from reviewers to justify its acceptance.\"], \"title\": \"Paper Decision\"}", "{\"title\": \"Thanks for your reviews. 
Please take a look at the rebuttal.\", \"comment\": \"Dear reviewers,\\n\\nThank you very much for your efforts in reviewing this paper.\\n\\nThe authors have provided their rebuttal. It would be great if you take a look at it, and see whether it changes your opinion in any way. If there is still any unclear point or a serious disagreement, please bring it up. Also, if you are hoping to see a specific change or clarification in the paper before you update your score, please mention it.\\n\\nThe authors have only until November 15th to reply.\\n\\nI also encourage you to take a look at each other\\u2019s reviews. There might be a remark in other reviews that changes your opinion.\\n\\nThank you,\\nArea Chair\"}", "{\"title\": \"Authors Response to Reviews: What is the value of PassNet in terms of representation learning\", \"comment\": \"While the use of skip-connections and several other components of this network can be found in the cited previous works on image segmentation, we consider that PassNet provides a set of novel additions to this field. First, this is to our knowledge the first approach of this kind applied in sports analytics, where a full surface prediction is provided from full-resolution tracking data. Also, while these kinds of architectures have proven successful when either a full ground-truth map correspondence is available or a single label related to objects in the input image is provided, we prove that high-level features can be learned from just single-pixel-level ground truth. Also, this paper shows how complex representations can be learned through a large model (in parameter size compared to the baseline models) while also providing rich visual interpretation that can be directly applied in practice. 
Also, the network is not limited to estimating passing probabilities: we show that with few modifications it can be adjusted to learn many other similar problems in this field, allowing as well for a possible application in other sports where tracking data is also available.\"}", "{\"title\": \"Authors Response to Reviews: Evaluating unseen passes and accounting for unlikely passes\", \"comment\": \"One of the reviewers commented that the only way to objectively evaluate probability predictions at locations other than the observed pass destination is to compare with the full probability surface as ground truth, which is impossible in practice. The reviewer suggests using the pass-likelihood model that can be obtained by substituting the sigmoid activation output layer with a softmax, and using that pass likelihood to evaluate different types of situations that might be undersampled in the original data.\nIn order to test that the model performs robustly on less likely passes, so we can provide a higher confidence level in the quality of the predicted surfaces, we performed the following test: for all the passes in the new test dataset we predicted the likelihood of observed passes and we applied K-means with K=5 to obtain five incremental pass-likelihood groups, named according to the likelihood of the passing location within the group: very low, low, medium, high, and very high.\n\nThe table below shows the likelihood ranges (for a 104x68 grid) and the log loss between the predicted probability in each group and the observed outcome for the PassNet architecture.\n\n\n+-----------------+-----------+-----------+------------------+\n| Pass Likelihood | Min Value | Max Value | PassNet log-loss |\n+-----------------+-----------+-----------+------------------+\n| Very low | 1e-10 | 0.007 | 0.543 |\n+-----------------+-----------+-----------+------------------+\n| Low | 0.007 | 0.019 | 0.134 
|\n+-----------------+-----------+-----------+------------------+\n| Medium | 0.019 | 0.033 | 0.083 |\n+-----------------+-----------+-----------+------------------+\n| High | 0.033 | 0.052 | 0.061 |\n+-----------------+-----------+-----------+------------------+\n| Very high | 0.052 | 0.136 | 0.049 |\n+-----------------+-----------+-----------+------------------+\n\nWe can clearly observe that the more likely a passing destination location is, the better the prediction of PassNet. Despite a worse-than-average prediction of very unlikely passing locations, PassNet is able to perform considerably well in the rest of the cases. We expect the network to suffer from the noise introduced by the labels of destination locations, where, in the case of failed passes, it is likely that the data provider tags the interception location of the pass instead of the intended location. In future work we can address this problem by resampling the training set according to the unlikely-pass ratio observed in the data, as well as by developing an intended-receiver prediction model.\"}", "{\"title\": \"Authors Response to Reviews: Testing on an additional hold-out dataset\", \"comment\": \"We would like to thank all the reviewers for their useful feedback and are glad to see that the paper was well received by most.\nBelow we provide answers to the main questions sent by the reviewers.\n\nRecently we have obtained tracking data for an additional full season of the English Premier League (13/14).\nThe dataset includes 237,128 passes from matches that have not previously been fed to PassNet during the development of the paper. Below we present the log-loss and expected calibration error (ECE) for the proposed PassNet architecture and the baseline models. 
We can observe that PassNet is still considerably better than the baseline models in this new test set.\\n\\n+--------------+----------+-------+\\n| Model | Log-loss | ECE |\\n+--------------+----------+-------+\\n| Naive | 0.488 | - |\\n+--------------+----------+-------+\\n| Logistic Net | 0.510 | 0.117 |\\n+--------------+----------+-------+\\n| Dense2 Net | 0.390 | 0.130 |\\n+--------------+----------+-------+\\n| PassNet | 0.316 | 0.063 |\\n+--------------+----------+-------+\"}", "{\"title\": \"Authors Response to Reviews: Ablation test\", \"comment\": \"A common request was to present the results of an ablation test in order to prove the usefulness of the architecture design decisions.\", \"the_components_used_for_the_ablation_test_are\": \"Skip-connection (Yes/No)\\nUpsampling (Yes/No)\\nFusion layer (Yes/No)\\nNon-linear prediction layer\", \"single_level_filters\": \"The number of layers of convolutional filters by sampling level\\n\\n+---------------------+----------------------+-----------------+-------------------+---------------------+----------------------+----------+\\n| Model | Skip-connection (SC) | Upsampling (UP) | Fusion layer (FL) | NL prediction (NLP) | Single level filters | Log-loss |\\n+---------------------+----------------------+-----------------+-------------------+---------------------+----------------------+----------+\\n| PassNet | YES | YES | YES | YES | 2 | 0.316 |\\n+---------------------+----------------------+-----------------+-------------------+---------------------+----------------------+----------+\\n| PassNet-NLP | YES | YES | YES | NO | 2 | 0.353 |\\n+---------------------+----------------------+-----------------+-------------------+---------------------+----------------------+----------+\\n| PassNet-FL | YES | YES | NO | YES | 2 | 0.290 |\\n+---------------------+----------------------+-----------------+-------------------+---------------------+----------------------+----------+\\n| PassNet-FL-NLP | YES | YES | NO | NO | 
2 | 0.421 |\\n+---------------------+----------------------+-----------------+-------------------+---------------------+----------------------+----------+\\n| PassNet-UP | YES | NO | YES | YES | 2 | 0.315 |\\n+---------------------+----------------------+-----------------+-------------------+---------------------+----------------------+----------+\\n| PassNet-UP-FL | YES | NO | NO | YES | 2 | 0.317 |\\n+---------------------+----------------------+-----------------+-------------------+---------------------+----------------------+----------+\\n| PassNet-UP-NLP | YES | NO | YES | NO | 2 | 0.323 |\\n+---------------------+----------------------+-----------------+-------------------+---------------------+----------------------+----------+\\n| PassNet-UP-FL-NLP | YES | NO | NO | NO | 2 | 0.335 |\\n+---------------------+----------------------+-----------------+-------------------+---------------------+----------------------+----------+\\n| Single Layer CNN-D4 | NO | YES | YES | YES | 2 | 0.364 |\\n+---------------------+----------------------+-----------------+-------------------+---------------------+----------------------+----------+\\n| Single Layer CNN-D8 | NO | YES | YES | YES | 4 | 0.320 |\\n+---------------------+----------------------+-----------------+-------------------+---------------------+----------------------+----------+\\nPassNet achieved the best log-loss together with PassNet-FL and PassNet-UP. When performing a visual evaluation of the quality of the generated surfaces we can see that there are a few subtle differences between PasseNet-FL (see image https://ibb.co/WVpDq38 ) and PassNet (see image https://ibb.co/RNjj07s ), but they can be considered largely equivalent. Given that the best loss is achieved by PassNet-FL we could agree that the fusion layer could be removed from the architecture.\\n\\nAdditionally, PassNet-UP-FL performs similarly to PassNet in terms of log-loss. 
However, if we take a closer look at the generated surface (see image https://ibb.co/h1QqpDt ) we can see that the output shows artifacts, while the PassNet surface is smooth and visually pleasing. These smooth surfaces are preferable in practice for better communication with coaches. Given the marginal difference between the two, it is preferable to keep the upsampling layers.\n\nSingle Layer architectures perform similarly to PassNet according to log-loss; however, if we take a look at the generated surface we can see that the net overfits and predicts exclusively 0.5 probabilities near dense opponent areas (see image https://ibb.co/hWHrDFd ).\n\nFrom this analysis we can conclude that all of the proposed components (with the exception of the fusion layer) provide value to the model.\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"Contribution:\nThis paper proposes PassNet, an architecture designed for soccer pass analytics. The PassNet approach is similar to UNet, having downsampling and upsampling modules with a set of skip-connections between the two modules. To train their model, the authors apply the log-loss at the location of the passing event. The authors evaluate their approach on a soccer analytics dataset, where they demonstrate improvement over prior works relying on hand-crafted features.\", \"comment\": \"The contribution of the paper appears a bit incremental to me. It seems that the paper is a direct application of a UNet-type architecture to the specific problem of pass analytics. It would be nice to make explicit what the main contribution of the paper is from the representation learning point of view.\n\nIn addition, some of the design choices of PassNet could be better justified. 
It would be nice to run an ablation study to show the importance of the different architectural components (conv2d prediction layers, backpropagating using only one output, skip-connections...)\n\nThe authors claim that their approach is an \\\"extreme case of weakly supervised learning\\\". I tend to disagree with the assessment. I understand that they only backprop the loss computed at one specific location of the output, but this is a choice on the architectural side and not on the labelling. Training PassNet requires fine-grained labels as it needs to know both the label value and its localization. Training the model without the use of label localization would be more akin to weakly-supervised learning.\n\nThe authors evaluate their approach on only one dataset. It would be nice to extend the empirical results to other datasets/sports to ensure the robustness of the conclusions.\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The aim of the system presented in this paper is to produce a 2D map of probabilities showing the chance of successful completion of a soccer pass to all locations on the field, given coordinate locations of players and the ball sampled over time. A network based on a fully-convolutional-style semantic segmentation network is applied to a 2D, 8-channel game state representation, with a final sigmoid layer to predict a pass success indicator at each location. The system is trained using manually annotated pass success and destinations, which corresponds to a label on the destination point; locations other than the labeled point are not trained (treated as incomplete/\\\"don't-care\\\" training targets). 
Evaluation is performed using both log loss and probability calibration measure, to measure effectiveness of predictions as well as how well they calibrate to correspond to probabilities in the sense defined by the measurement.\\n\\nThis is a fun application and it appears the system is effective at a basic level. However, I think both the theory/explanation and experiments leave a fair bit of uncertainty as to what the probabilities mean and how to interpret them, including conflation of success probability and the model's certainty in its estimate.\\n\\nIn particular, it is unknown to what degree the values output by the model can be interpreted as a predictive probability of pass completion for anywhere on the field. If one location says 0.8 and the other 0.5, does this mean that if the player were to actually pass to the 0.5 location, the chance of success is actually lower? Or does it mean that the model is less confident or that this location was under-trained for this state? The ECE measurement in conjunction with loss error doesn't quite address this: although a good verification of calibration in its own sense, ECE simply confirms that for cases where the model predicts 0.5, there is success 0.5 of the time for locations that *exist in the test set*, which is sampled according to player action. To verify the probability maps at all locations, one would need to be able to measure *any* point in the field, not just those already selected by the players' actions. I think discussion and attempt to measure this is fairly important, as one of the intended applications in the motivation is analyzing what might have happened had a player selected a different destination point.\\n\\nUnfortunately, knowing the true outcomes at arbitrary locations the players didn't pass to is impossible, so addressing this issue is not straightforward, and unclear to me how it might be done. 
A possible suggestion is to use the destination selection predictions that the paper also mentions can be found using softmax instead of sigmoid. Although this does not entirely eliminate the issue (these predictions themselves may conflate model confidence with actual selection probability), these maps would likely provide a good indication of player selections. Thus they might be used to sample or reweight the test set, so that unlikely destination points are sampled more, to try to get a more uniform sample.\n\nEven with this issue, though, I feel that this is an interesting application and system that seems reasonable in its current state, if with important caveats. Thus, I'd lean towards accepting. However, I'd encourage the authors to discuss these differences and issues of output interpretation, and to try to address them if possible.\", \"additional_questions_and_comments\": [\"The train/val/test split appears to be uniform random by pass event. It would be interesting to hold out all events for one or more teams, or to hold out full games, to measure generalizability to these cases.\", \"Are the destinations for unsuccessful passes the intercepted location, or an estimated intended location? Does this difference affect the completion prediction at the intended location (beyond the interception location)?\"]}", "{\"rating\": \"8: Accept\", \"experience_assessment\": \"I do not know much about this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper presents a deep convolutional neural network for estimating, at the given moment, for any given position on the field, the probability that a pass to that position is successful. It also shows results regarding the probability that a pass to a particular position is attempted. The paper is well-written and the figures and videos do a very nice job of communicating the important ideas. 
The related work appears to be extensive and the description of the design choices of the architecture and the training procedure is clear and thorough. Based on the results in Table 1, the proposed network appears to compare very favorably against the baselines.\n\nOne thing I feel would make the paper stronger is the inclusion of more baselines and an ablation study. Both Logistic Net and Dense2 Net appear to be very lightweight. It is not obvious to me that all of the design choices made in the paper translated to gains in performance. Would a large network perform well without them? You mention that you tried a class-weighting with no sampling approach but that it did not perform as well. Why not include these results in the paper?\", \"small_typo\": \"\\u201cwhere achieved by augmentation\\u201d -> \\u201cwere achieved by augmentation\\u201d\n\nFrom the perspective of someone who is not an expert in this area, I think this is a nice paper. It appears to successfully address an interesting problem, it is well-organized, and its solution methodology may have applications to other related problems. My one major criticism is that I feel that there is insufficient information regarding how the design choices affected the performance.\"}" ] }
H1ggKyrYwB
On Incorporating Semantic Prior Knowledge in Deep Learning Through Embedding-Space Constraints
[ "Damien Teney", "Ehsan Abbasnejad", "Anton van den Hengel" ]
The knowledge that humans hold about a problem often extends far beyond a set of training data and output labels. While the success of deep learning mostly relies on supervised training, important properties cannot be inferred efficiently from end-to-end annotations alone, for example causal relations or domain-specific invariances. We present a general technique to supplement supervised training with prior knowledge expressed as relations between training instances. We illustrate the method on the task of visual question answering to exploit various auxiliary annotations, including relations of equivalence and of logical entailment between questions. Existing methods to use these annotations, including auxiliary losses and data augmentation, cannot guarantee the strict inclusion of these relations into the model since they require a careful balancing against the end-to-end objective. Our method uses these relations to shape the embedding space of the model, and treats them as strict constraints on its learned representations. In the context of VQA, this approach brings significant improvements in accuracy and robustness, in particular over the common practice of incorporating the constraints as a soft regularizer. We also show that incorporating this type of prior knowledge with our method brings consistent improvements, independently from the amount of supervised data used. It demonstrates the value of an additional training signal that is otherwise difficult to extract from end-to-end annotations alone.
[ "regularizers", "vision", "language", "vqa", "visual question answering" ]
Reject
https://openreview.net/pdf?id=H1ggKyrYwB
https://openreview.net/forum?id=H1ggKyrYwB
ICLR.cc/2020/Conference
2020
{ "note_id": [ "QGMBIDoC9y", "BJg3HDxnoS", "rJlYRymUsB", "SkxKo1QIjB", "HylXQyX8jB", "SyeasshEor", "rkgQW97pKS", "SyxXGwWjKB", "B1l7jf5rFS" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798733460, 1573812036180, 1573429200582, 1573429152908, 1573429018789, 1573338020988, 1571793402708, 1571653386917, 1571295899508 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1828/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper1828/Authors" ], [ "ICLR.cc/2020/Conference/Paper1828/Authors" ], [ "ICLR.cc/2020/Conference/Paper1828/Authors" ], [ "ICLR.cc/2020/Conference/Paper1828/Authors" ], [ "ICLR.cc/2020/Conference/Paper1828/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper1828/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper1828/AnonReviewer1" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"The paper proposes a technique for incorporating prior knowledge as relations between training instances.\\n\\nThe reviewers had a mixed set of concerns, with one common one being an insufficient comparison with / discussion of related work. Some reviewers also found the clarity lacking, but were satisfied with the revision. One reviewer found the claim of the approach being general but only tested and valid for the VQA dataset problematic.\\n\\nFollowing the discussion, I recommend rejection at this time, but encourage the authors to take the feedback into account and resubmit to another venue.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Response to new version\", \"comment\": \"Thanks for the changes made to the paper, which makes it much more readable, and hence I will be happy to increase my score accordingly.\", \"just_a_few_typos_that_have_been_introduced_that_may_as_well_get_fixed\": \"p.2 typo: hlWe s\\n\\n. 
Our contribution, on the opposite -> . Our contribution, in contrast\n\nthat perfectly fit of\nthe desired constraints. - remove \\\"of\\\"\"}", "{\"title\": \"Authors' response\", \"comment\": \"Hi, thanks a lot for your time and valuable comments. They really helped pinpoint sections that required clarifications. We have significantly revised the paper, as summarized below. We believe it to be much improved as a result (updates are highlighted in the PDF).\n\n1,3> Related works\n- [1] \\\"Harnessing deep neural networks with logic rules\\\"\nThis paper used posterior regularization to improve how a learned model complies with hand-designed rules. In comparison, we use *instance-level* auxiliary annotations. They can be seen as rules that apply to *some* of the training examples. The scope of the two methods is complementary. The major innovation in our method is to enforce *hard* constraints on the learned embeddings, whereas general rules in [1] are softly balanced with learned predictions. [1] uses teacher/student distillation during training, but their best results are obtained with the teacher network, not with the student. In our case, the distillation step is critical to enforce the hard constraints.\n\n- [2] \\\"Constrained Convolutional Neural Networks for Weakly Supervised Segmentation\\\"\nThis one shows how to use image-level tags to learn semantic image segmentation. The tags are turned into linear constraints, which is similar to how we handle program annotations (one of our three use cases). Their contribution is to turn their constrained optimization problem into an objective amenable to SGD that is robust to hard-to-enforce/competing constraints. Our contribution, in contrast, shows how to enforce constraints strictly. 
In our particular applications, it proved superior to soft-regularized objectives.\n\n- [3] \\\"The Neuro-Symbolic Concept Learner: Interpreting Scenes, Words, and Sentences from Natural Supervision\\\"\nThis paper combines symbolic AI with neural networks for VQA. The vocabulary of visual concepts is learned, but the operation of the model otherwise depends on a pre-specified domain-specific language, and on manually-designed modules. They do not require or seek to exploit program annotations for training. Their contributions are essentially orthogonal to ours. Our method could, for example, apply to the representations of questions learned in their semantic parser.\n\n\n2> Distillation process\", \"excellent_question\": \"we realized there was a gap in the presentation of our motivation for the distillation phase.\nStandard training usually leads to overfitting, which is avoided with early stopping. If one uses a soft regularizer, there is no principled way to design the regularizer to converge right before overfitting occurs.\nWith the proposed method, one stops the first training phase once overfitting occurs. Then, the learned regularized embeddings are frozen. We retrain the earlier layers, while the classifier is fixed. The network outputs do not change and there is no further risk of overfitting on the task labels. Two reasons why the second phase succeeds in practice are that (1) we retrain only a handful of layers and (2) we use dense/continuous targets (quite the opposite of training a large-capacity network on sparse labels).\n\n\n4> Time complexity\nThe time complexity (as a function of dataset size) is the same as with standard training.\n- There is a very small fixed overhead, for each mini-batch, to retrieve the data needed to compute the regularizer (e.g. equivalent questions), which is stored alongside the training examples.\n- There is an additional cost in running the second training phase (distillation). 
This only involves retraining a few layers for a handful of epochs. In our experiments with VQA, we retrain only the question embedding (a word embedding and GRU) for about 5 epochs, whereas the first training phase takes on the order of 20 epochs. The added cost is a very small fraction of the total training time.\"}", "{\"title\": \"Authors' response\", \"comment\": \"Hi, thanks for your time and for the excellent quality of the review. We took your comments on-board and did a major revision of the paper, which is now much clearer as a result. Updates are highlighted in the PDF. Thanks a lot for your contribution.\", \"summary_of_updates\": [\"Introduction: better introduction of the technical method, including the distillation step.\", \"Section 3: justification for the 2-phase procedure and for the distillation step; most of the technical description moved from the appendix.\", \"Discussion of constraints in parameter/embedding space. In embedding space, one can constrain how the network represents data. In parameter space, one can guide what the network does with these representations. Both can be useful. The former can be closer semantically to task- or domain-specific knowledge of the data used. Our applications indeed focus on the *linguistic* embedding space in VQA.\", \"Fig. 3: thanks for the feedback, this was indeed confusing in many ways. One operation is \\\"*6\\\" or \\\"+3\\\", for example. The input digit and operation sequence are embedded separately to apply constraints on the embeddings of the latter independently. The examples \\\"...+1,*2...\\\" and \\\"...*2,-2,+4...\\\" are marked as equivalent because (x+1)*2 = ((x*2)-2)+4.\", \"The annotations of equivalent sequences are the assumed \\\"prior knowledge\\\". We made it clearer that the objective was to verify that these annotations bring a useful training signal, complementary to training examples where these sequences are applied on specific x's.\", \"All typos fixed. 
LaTeX had shifted all refs to figures and tables.\", \"Also added a mention of other possible applications in the conclusions (embedding of graph- and tree-structured data).\"]}", "{\"title\": \"Revision uploaded\", \"comment\": \"We have revised the paper taking into account all points raised in the reviews. We believe that the new version is now much clearer as a result (updates highlighted in the PDF). Thanks again for your contribution !\"}", "{\"title\": \"Authors' response\", \"comment\": \"Hi, thanks for the thorough review.\\n\\n1> Cycle-consistent learning\\nShah et al. essentially learned a generative model of the question conditioned on the answer, for data augmentation while ensuring that the generated rephrasings lead to the same answer. The only connection with our method is to have multiple examples of a same question, which is really just one of three use cases that we demonstrate. Is there another common aspect that we missed ? I feel that their work and ours are pushing in different directions (they focus on *learning* rephrasings from the data).\\n\\n2> Constraints in parameter/embedding space\\nIn embedding space, one can constrain how the network represents data. In parameter space, one can guide what the network does with these representations. Both can be useful. The former can be closer semantically to a domain expert's knowledge of the data used.\\n\\n3> Natural language not compositional\\nExcellent point. That is why we did not seek to use the annotations as full program trees, but simply the fact that some questions have some operations in common (e.g. a counting operation, using a 'color' attribute, referring to a 'dog' in the image, etc.). These seem realistic to identify in real questions. Fully agree with the correction \\\"defined\\\"->\\\"translated\\\".\\n\\n4> Other methods on GQA\\nThere are even a few others with higher absolute performance on GQA, but their contributions seem orthogonal to ours. 
No published method has shown how to benefit from additional annotations as we did. There's nothing in principle that precludes those methods from being combined with our technique. If accepted, we'll certainly include up-to-date references to the state-of-the-art at the time of publication.\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposes the incorporation of \\u201cprior knowledge\\u201d which enters in the form of the relations between training instances in neural network training. The proposed method is tested on the VQA problem, bringing improvements upon the popular soft regularizer.\\nThe authors claim that their method is a general technique but in fact, the constraints are drawn from specific tasks (VQA for example). So, I believe the contribution is rather domain-dependent and not general. Can you explain more how this method can be applied to general problems?\\n\\nOther than that, I have some concerns:\\n1. Although the authors claim that they are the first to bring these annotations to VQA, I see their training procedure is closely related to cycle-consistent learning. Recent work in VQA also applied cycle consistency as an online data-augmentation technique (See Shah et al. 2019).\\n\\u201cShah, M., Chen, X., Rohrbach, M., & Parikh, D. (2019). Cycle-consistency for robust visual question answering.\\u201d\\n2. In Section 2, the authors say \\u201cconstraints on the parameter space of a model are often non-intuitive\\u201d. How are they \\\"non-intuitive\\\" and why is the proposed method more intuitive in terms of theory? Please clarify this.\\n3. Each question in Hud et al. is associated with a functional program, therefore, questions are compositional. 
However, arbitrary questions don\\u2019t need to strictly follow this constraint. Natural language is not exactly suited to functional programming, I think. I have doubts about the claim in Section 4 \\u201cOur method can use partial annotations and should more easily extend to other datasets and human-produced annotations\\u201d. Also, the definition \\u201cA question is defined as a set of operations\\u201d does not seem correct. A question can be translated into a program that is composed of a set of operations.\\n4. Experimental results are not strong enough for such strong claims, I believe. Regarding the GQA dataset, the authors should compare the proposed method with more works; for example, Hu et al. 2019 and Hudson et al. 2019 achieve much more favorable performance than MAC.\\n\\\"Hu, R., Rohrbach, A., Darrell, T., & Saenko, K. (2019). Language-Conditioned Graph Networks for Relational Reasoning.\\\" \\n\\\"Hudson, D. A., & Manning, C. D. (2019). Learning by abstraction: The neural state machine.\\\"\", \"minor_comments\": \"The paper is not really well written. I even found a wrong reference (Section 3).\"}", "{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper argues for encoding external knowledge in the (linguistic) embedding layer of a multimodal neural network, as a set of hard constraints. The domain that the method is applied to is VQA, with various relations on the questions translated into hard constraints on the embedding space. A technique which involves distillation is used to satisfy those constraints during learning.\\n\\nThe question of how to encode external knowledge in neural networks is a crucial one, and the point about the limitations of end-to-end learning with supervised data is well-made. 
Overall I feel that this is a potentially interesting paper, addressing an important question in a novel way, but I found the current version a highly-frustrating read (and I read the paper carefully a number of times); in fact, so frustrating that it is hard for me to recommend acceptance in its current form. More detailed comments below.\\n\\nMajor comments\\n--\\nThe main problem I have with the paper lies with the first part of section 3, which is a key section describing the main method by which the constraints are satisfied during learning. This is very confusing. The need for the two-step procedure, in particular, and the importance of distillation need much more explanation, and should not be relegated to the Appendix (which reviewers are not required to read - see call for papers). I'm not suggesting that the whole of the appendix needs moving to the body of the paper, but I would suggest perhaps 1/2 a page.\\n\\nA related comment is the use of the distillation technique. This looks crucial, but I don't believe distillation is mentioned at all until the end of the related work section, and even there it comes as a bit of a surprise since there's no mention anywhere of this technique in the introduction.\\n\\nI would say a little more about the distinction between the embedding space and parameter space, since you say that the external knowledge is encoded in the former and not the latter, and this is important to the overall method. Since embeddings are typically learned (or at least fine-tuned), it's not clear where the boundary is here. Another comment is that embedding space in this paper means the linguistic embedding space. Since this is ICLR and not, eg, ACL, I would make clear what you mean by embedding space.\\n\\nI don't understand the diagram in Fig. 3 of the architecture, nor the explanation. What's an operation here? Is it *, or *6? I don't get why 3 is embedded by itself in the diagram, and then combined with the remainder using the MLP. 
Why not just run the RNN over the sequence?\\n\\nWhy are the training instances {3,+1...} and {4,*2,...} equivalent. I stared at this a while, and still have no idea. Also, how are these \\\"known to be equivalent\\\" - what's the procedure?\\n\\nMinor comments including typos etc.\\n--\\nThe paper has the potential to be really nicely written and well-presented. Currently it reads like it was thrown together just before the deadline (which only adds to the overall frustration as a reader).\\n\\nIn fig. 1 the second equivalent question example is interesting, since strictly speaking \\\"box\\\" and \\\"rectangular container\\\" are not synonyms (e.g. boxes can be round). Since strict synonymy is hard to find, does that matter? (I realise the dataset already exists and was presented elsewhere, but this might be worth a footnote).\\n\\nmissing (additional) right bracket after Herbert (2016)\\n\\nNot sure footnote 1 needs to be a footnote. It's already been said, I think, but if it does need repeating it probably deserves to be in the body of the text.\\n\\nbetween pairs questions\\n\\nsee Fig.3 -> figure 2?\\n\\nsee Fig.1 -> Tab. 1? (on p.5)\\n\\nfootnote 1 missing a right bracket\\n\\nusually involve -> involves\\n\\n+9]) - extraneous bracket\\n\\nFig. 4.1 -> Fig. 3? (p.6)\\n\\np.7 wastes a lot of space. In order to bring some of the appendix into the main body, I would do away with the very large bulleted list. (I don't mean lose the content - just present it more efficiently)\\n\\nRemember than\\n\\nFinally in Fig. 
4.2 - some other figure\\n\\ndue of the long chains\"}", "{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #1\", \"review\": \"The authors propose a framework to incorporate additional semantic prior knowledge into the traditional training of deep learning models, such that the additional knowledge acts as both soft and hard constraints to regularize the embedding space instead of the parameter space. To illustrate the idea, the authors use 3 different kinds of annotated knowledge that are already available in a public dataset containing equivalent statements, entailed statements, as well as functional programs, and show that the final performance indeed increases.\\n\\nIn general, the paper is well-written and easy to follow. The motivation is clear, i.e., to boost the performance of supervised learning tasks with additional knowledge constraints in a hard way. Compared with the existing models that treat the constraints as soft regularizers, the authors propose to additionally distill the knowledge using a teacher-student framework. This paper thus contributes a novel way to incorporate the constraints with both soft and hard training strategies. However, there are several considerations which limit the contribution of this paper:\\n\\n1. Regarding teacher-student distillation frameworks, there are several papers using a posterior regularizer with hard constraints, e.g., \\\"Harnessing deep neural networks with logic rules\\\", \\\"Constrained Convolutional Neural Networks for Weakly Supervised Segmentation\\\". More discussion of and comparison with these models should be included, and even experimental comparisons if possible, since they also use knowledge distillation to convey the knowledge expressed in the constraints.\\n\\n2. 
The proposed model differs from other soft-regularization-based methods in terms of an additional distillation process. The authors state that the combination of the task loss with soft regularization leads to over-fitting. From my point of view, the distillation step actually has a similar effect to the case of optimizing only the regularizer without the task loss. Hence, I am wondering what the performance would be of first using the combined loss and then fixing the subsequent layers to optimize the embedding layers using only the regularization loss. This could demonstrate the difference between the distillation process and the regularization process.\\n\\n3. Many recent models for VQA have been proposed, e.g., \\\"The Neuro-Symbolic Concept Learner: Interpreting Scenes, Words, and Sentences from Natural Supervision\\\", which also incorporates extra knowledge via symbolic reasoning. The authors should also compare with such models.\\n\\n4. It seems the model needs to sample a pair of data each time at training to compute the regularizer and also to conduct the distillation process. In this case, the time cost should be non-trivial because the distillation process requires optimizing the distance between the current embedding and the hard constraint. Then the question arises: what is the time complexity of the model? What's the convergence speed?\"}" ] }
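The two-phase procedure debated in this thread (first train with the task loss plus the soft regularizer until overfitting appears, then freeze the classifier and distill the frozen regularized embeddings back into the earlier layers) can be sketched with a toy linear model. This is only an illustration under simplifying assumptions (linear layers, made-up dimensions, data, and learning rate), not the authors' actual code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the pipeline: input x -> embedding e = W1 x (earlier
# layers) -> logits = W2 e (classifier). Dimensions are illustrative.
d_in, d_emb, n = 8, 4, 64
X = rng.normal(size=(n, d_in))

# Phase 1 (elided here): the embedding and the classifier are trained with
# the task loss plus the soft regularizer, stopping once overfitting
# appears. The regularized embeddings at that point become frozen targets.
W1_phase1 = rng.normal(scale=0.1, size=(d_emb, d_in))
E_target = X @ W1_phase1.T  # stand-in for the frozen, regularized embeddings

# Phase 2 (distillation): the classifier stays fixed and untouched; only
# the earlier layers (here a fresh W1) are retrained to reproduce the
# frozen targets. The targets are dense and continuous, so there is no
# further risk of overfitting on sparse task labels.
W1 = rng.normal(scale=0.1, size=(d_emb, d_in))
lr = 0.05
for _ in range(500):
    E = X @ W1.T
    W1 -= lr * (2.0 / n) * (E - E_target).T @ X  # gradient of the MSE
```

Because the distillation targets are a fixed, dense regression signal, the second phase is an ordinary least-squares fit here; the classifier output is unchanged by construction once the embeddings match the targets.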
SygeY1SYvr
Are Few-shot Learning Benchmarks Too Simple ?
[ "Gabriel Huang", "Hugo Larochelle", "Simon Lacoste-Julien" ]
We argue that the widely used Omniglot and miniImageNet benchmarks are too simple because their class semantics do not vary across episodes, which defeats their intended purpose of evaluating few-shot classification methods. The class semantics of Omniglot is invariably “characters” and the class semantics of miniImageNet, “object category”. Because the class semantics are so similar, we propose a new method called Centroid Networks which can achieve surprisingly high accuracies on Omniglot and miniImageNet without using any labels at metaevaluation time. Our results suggest that those benchmarks are not adapted for supervised few-shot classification since the supervision itself is not necessary during meta-evaluation. The Meta-Dataset, a collection of 10 datasets, was recently proposed as a harder few-shot classification benchmark. Using our method, we derive a new metric, the Class Semantics Consistency Criterion, and use it to quantify the difficulty of Meta-Dataset. Finally, under some restrictive assumptions, we show that Centroid Networks is faster and more accurate than a state-of-the-art learning-to-cluster method (Hsu et al., 2018).
[ "few-shot", "classification", "meta-learning", "benchmark", "omniglot", "miniimagenet", "meta-dataset", "learning to cluster", "learning", "cluster", "unsupervised" ]
Reject
https://openreview.net/pdf?id=SygeY1SYvr
https://openreview.net/forum?id=SygeY1SYvr
ICLR.cc/2020/Conference
2020
{ "note_id": [ "JZBoDYXEnV", "HJlHfPO5or", "r1g4kUu5or", "HJlkjPX5iS", "rJlFdCZFor", "S1gSkRWKiS", "rkeGKT-toS", "rklIxTWFiS", "HklF3-A6tS", "SyxI703nYr", "rkeIbjh3tr" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798733428, 1573713677134, 1573713371718, 1573693334629, 1573621361165, 1573621213187, 1573621114442, 1573620974189, 1571836337384, 1571765790224, 1571764990175 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1827/Authors" ], [ "ICLR.cc/2020/Conference/Paper1827/Authors" ], [ "ICLR.cc/2020/Conference/Paper1827/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper1827/Authors" ], [ "ICLR.cc/2020/Conference/Paper1827/Authors" ], [ "ICLR.cc/2020/Conference/Paper1827/Authors" ], [ "ICLR.cc/2020/Conference/Paper1827/Authors" ], [ "ICLR.cc/2020/Conference/Paper1827/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper1827/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper1827/AnonReviewer2" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"The paper is interested in assessing the difficulty of popular few-shot classification benchmarks (Omniglot and miniImageNet). A clustering-based meta-learning method is proposed (called Centroid Network), on which a metric is built (gap between the performance of Prototypical Networks and Centroid Networks). 
As noted by several reviewers, the proposed metric (critical for the paper) is however not motivated enough, nor convincing enough - after discussion, the logic in the metric reasoning seems to remain flawed.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Mathematical Definitions of Episodic Bayes Accuracies\", \"comment\": \"The reviewer asked for a mathematical definition of unsupervised Bayes accuracy.\\n\\nPlease note that the unsupervised and supervised Bayes accuracies we use for CSCC are defined in the episodic setting, which makes them different from the usual Bayes accuracy defined in the batch setting (usual training).\\n\\n-> Batch Supervised Bayes Accuracy (1 - Bayes error)\\n\\n$$BatchSBA = \\\\mathbf E_{x,y\\\\sim p} [ \\\\mathbf 1\\\\lbrace y = \\\\arg\\\\max_{y'} p(y'|x) \\\\rbrace ]$$\\n\\nThis can be rewritten as a sup over all functions $h:X\\\\to Y$:\\n\\n$$BatchSBA = \\\\sup_{h:X\\\\to Y} \\\\mathbf E_{x,y\\\\sim p} \\\\mathbf 1\\\\lbrace y = h(x) \\\\rbrace $$\\n\\nJust like the Bayes accuracy is the best possible accuracy on a classification task, the episodic unsupervised (resp. supervised) Bayes accuracies are the best possible unsupervised (resp. supervised) accuracies on an unsupervised (resp. supervised) few-shot classification task. They are all well defined concepts, but we do provide (cumbersome) mathematical definitions below. \\n\\n-> Episodic Supervised Bayes Accuracy\\n\\nDenote $T$ the task, $S\\\\in\\\\mathcal S$ a labeled support set, $(x,y)$ a labeled query point. 
The episodic supervised Bayes accuracy can be written as a sup over all functions $h:\\\\mathcal S\\\\times X\\\\to Y$:\\n\\n$$EpisodicSBA = \\\\sup_{h:\\\\mathcal S\\\\times X\\\\to Y} \\\\mathbf E_{T\\\\sim p(T)} \\\\mathbf E_{S,x,y\\\\sim p(S,x,y|T)} \\\\mathbf 1\\\\lbrace y = h(x, S) \\\\rbrace $$\\n\\n-> Episodic unsupervised Bayes accuracy\\n\\nThe episodic unsupervised Bayes accuracy can also be written as a sup over all functions $h:\\\\mathcal S_x\\\\times X\\\\to \\\\mathcal S_y\\\\times Y$ which take as input an unlabeled support set $S_x$ and a query point $x$, and predict cluster indices $\\\\widehat S_y$ for the support set and a cluster index $\\\\widehat y$ for the query. We define an intermediate operator $F_\\\\sigma: \\\\mathcal S_y\\\\times \\\\mathcal S_y \\\\to \\\\Sigma$ which returns the optimal permutation between predicted clusters and ground truth classes (it's the argmax of the resulting accuracy).\\n\\nThen $EpisodicUBA$ is the solution of the following optimization problem:\\n\\nMaximize w.r.t. 
$h:\\\\mathcal S_x\\\\times X\\\\to \\\\mathcal S_y\\\\times Y$:\\n\\n$$\\\\mathbf E_{T\\\\sim p(T)} \\\\mathbf E_{S,x,y\\\\sim p(S_x,S_y,x,y|T)} \\\\mathbf 1\\\\lbrace y = \\\\sigma(\\\\widehat y) \\\\rbrace $$\\n\\nSubject to $(\\\\widehat S_y,\\\\widehat y) = h(S_x, x)$ and $\\\\sigma = F_\\\\sigma(S_y, \\\\widehat S_y)$\"}", "{\"title\": \"Answer to \\u201cthere are much more factors that can affect the performance of the meta-learning methods [...]\\u201d\", \"comment\": \"\", \"reviewer\": \"\\\"there are much more factors that can affect the performance of the meta-learning methods [...]\\u201d\", \"we_agree_that_the_approximate_cscc_is_not_invariant_to_the_backbone_architecture\": [\"Nevertheless, we think it is reasonable to say that the unsupervised accuracies for CentroidNets are surprisingly high for Omniglot and quite decent for miniImageNet.\", \"Moreover, in order to limit the factors that can affect the performance of the models, we have taken the following precautions:\", \"We have used ProtoNets, one of the top methods for the Conv-4 architecture on Omniglot and miniImageNet. This ensures that the estimated unsupervised Bayes accuracy is not too low.\", \"We have run CentroidNets on exactly the representation learned by ProtoNets, with the sole difference that we use center loss during training (we do not use it for ProtoNet because it degrades performance very slightly). This ensures that CentroidNets has no unfair advantage over ProtoNets.\", \"Finally, we emphasize the fact that the CSCC metric itself is well defined. The issue lies only in its practical approximation. There are other examples of well defined quantities in statistics for which approximators are highly dependent on the backbone architecture:\", \"for instance, mutual information is approximated by neural networks in very high dimensions \\u201cMINE: Mutual Information Neural Estimation\\u201d (Belghazi 2018). 
However, the approximation is completely dependent on the neural network considered.\", \"the Wasserstein distance is approximated by neural networks in high dimensions \\u201cWasserstein GAN\\u201d (Arjovsky 2017), but the approximation also depends on the architecture of the neural network.\"]}", "{\"title\": \"Thank you for your response\", \"comment\": [\"Thank the authors for the response and additional experiments.\", \"\\\"CentroidNets uses a variety of tricks to improve performance, therefore it is unfair to not also consider tricks (such as R1) to improve Protonet performance\\\"\", \"What is unsupervised Bayes accuracy?\", \"Does CentroidNet really work without labels during meta-validation?\", \"Why does end-to-end training not work?\", \"The responses to the above questions almost make sense to me. Minor concerns are:\", \"It would be better to show a mathematical definition of unsupervised Bayes accuracy.\", \"It would be better to show which part is meta-training/meta-validation/meta-test in Fig. 1 and 2.\", \"\\\"CSCC attempts to quantify the importance of the supervision information, which is not directly related to the difficulty of few-shot learning problem\\\"\", \"My biggest concern is related to this question. The authors' main claim in this paper seems to be \\\"since CentroidNet works better than ProtoNet, this dataset has too high class-semantic consistency.\\\" However, this is not sufficiently convincing to me, because there are much more factors that can affect the performance of the meta-learning methods as the authors mentioned. To validate this claim, there is a most straightforward way: using the same dataset and meta-learning method, we can change only how episodes are generated. For example, in the harder dataset shown in section 1, we can compare \\\"using all semantics to generate episodes\\\" and \\\"using only a single semantic to generate episodes\\\", similarly to the example shown in the authors' response. 
The comparison between CentroidNet and ProtoNet does not directly support the claim.\"]}", "{\"title\": \"Answer to Reviewer 2\", \"comment\": \"We thank the reviewer for their positive review and constructive feedback. We have added explanations relating to the advantages of Sinkhorn K-Means to the Appendix.\\n\\n\\n* Relabeling the query set\\n\\nAny clustering is permutation invariant. Therefore, there are many equally correct ways to label the support set and the query set, which is why we find the optimal permutation which matches the ground truth cluster indices with the predicted cluster indices.\", \"specifically_for_figure_2\": [\"cluster indices are predicted for the query set shapes during step 3.\", \"{yellow square} gets assigned to cluster A, {red square, green triangle, yellow triangle} get assigned to cluster B.\", \"the optimal permutation shows that cluster A matches Class 2, and cluster B matches Class 1\", \"therefore, {yellow square} gets relabeled to Class 2, {red square, green triangle, yellow triangle} get relabeled to class 1\", \"the unsupervised accuracy can be computed by comparing the predicted and ground truth classes 1, 2.\", \"Why is Sinkhorn K-means expected to improve performance ?\"], \"there_are_mainly_two_reasons_why_sinkhorn_k_means_improves_performance_compared_to_k_means\": [\"Sinkhorn K-Means is particularly well adapted to the few-shot clustering and unsupervised few-shot classification problems because it strictly enforces the fact that the classes have to follow a given distribution (e.g. balanced), whereas K-Means does not.\", \"Sinkhorn K-Means is likely to converge better than K-means due to the regularization factor of the Sinkhorn distance.\", \"To illustrate the second point, consider the limit case where the regularization factor of Sinkhorn distance goes to infinity. 
Then, the assignments in Sinkhorn K-Means become uniform (each cluster is assigned equally to all points), and all the centroids converge to the average of all the points, which in this case is a global minimum. This is by no means a proof, but this example suggests that for large enough regularization, Sinkhorn K-Means will converge better.\"]}", "{\"title\": \"Answer to Reviewer 1\", \"comment\": \"We thank the reviewer for taking the time to review our paper. We address the reviewer\\u2019s concerns below.\\n\\n\\n* It\\u2019s not immediately clear that the other approaches from the literature they compare their method to were conceived for the setting considered here, or indeed optimized for it.\", \"supervised_few_shot_classification_literature\": [\"Our main contribution is to compare CentroidNets on unsupervised few-shot classification vs. ProtoNets on supervised few-shot classification.\", \"ProtoNets were conceived and optimized precisely for the supervised few-shot classification problem (often just called few-shot learning).\", \"Our comparison is fair because unsupervised few-shot classification is strictly harder than supervised few-shot classification.\"], \"few_shot_clustering_literature\": [\"Our main contribution doesn't lie in our comparison with CCN (Hsu et al. 
2017).\", \"The comparison is here mostly to confirm that our approach is reasonable (to have a point of comparison).\", \"We are honest and open about the fact that our method is less flexible than CCN.\", \"See page 8 \\u201cHowever, we wish to point out that Centroid Networks are less flexible than CCNs, as they require specifying the number of clusters and making an assumption on the sizes of the clusters[...]\\u201d\", \"The results on miniImageNet are less clear-cut\", \"Indeed, the gap between supervised and unsupervised accuracies is bigger on miniImageNet.\", \"This can be due to the higher visual difficulty of miniImageNet.\", \"This can also be due to the lower class semantic consistency.\", \"We do point out that the unsupervised accuracy of our method (55.3%) is still impressive, considering that it is almost equal to the performance of earlier supervised few-shot classification methods with the same architecture (56.2% for MatchingNets without fine-tuning).\", \"The results of the evaluation of the meta-dataset appear to depend on the specific setting considered\", \"Just like the usual supervised accuracy in few-shot learning, the unsupervised accuracy and CSCC are dependent on the task distribution considered. Therefore, it is perfectly normal that the numbers are different for Meta-Dataset in the [Train on ILSVRC] vs. [Train on All datasets] setting, because they define different task distributions (and are therefore different benchmarks).\", \"This makes it unclear to what extent the proposed metric is general and predictive.\", \"In order to better address the reviewer\\u2019s concern, we ask the reviewer to clarify what they mean exactly by \\u201cgeneral\\u201d and \\u201cpredictive\\u201d. 
Maybe with a concrete example illustrating what these properties are ?\"]}", "{\"title\": \"Answer to Reviewer 3, Part 2/2\", \"comment\": \"* \\\"CSCC attempts to quantify the importance of the supervision information, which is not directly related to the difficulty of few-shot learning problem\\\"\\n\\nIndeed, the reviewer is absolutely right that the difficulty of few-shot learning problems can come from many aspects, including but not limited to :\\n- visual difficulty (~how hard is it to train a classifier on all the classes at the same time)\\n- class semantic consistency (~how much do the class semantics vary)\\n\\nIf the goal is to design meaningful benchmarks for supervised few-shot classification methods, it is important to understand which aspects make those benchmarks difficult. For instance, consider the limit case of a supervised few-shot classification task in which the same 5 classes are sampled over and over again. The visual difficulty might be extremely high (e.g. very fine-grained classification), which might lead people to believe that it is a good benchmark (because it is hard and all methods achieve low accuracies). However, because there is no variability at all in the class semantic consistency, such a benchmark does not evaluate at all the capacity of few-shot methods of adapting to new tasks.\\n\\nOur intent is not to introduce CSCC as a proxy for task difficulty (supervised accuracy of SOTA models might be fine for that purpose). Rather, we introduce the CSCC as an attempt to decouple the different axes of difficulty. 
Dividing the unsupervised Bayes accuracy by the supervised Bayes accuracy is a rough way of normalizing away the visual difficulty (which affects both supervised and unsupervised accuracies) and focusing on the supervision information only.\\n\\n\\n* Why does end-to-end training not work?\\n[Rephrased from Reviewer: \\\"the most intuitive way\\\" should work, because it can learn the common semantics via meta-training]\\n\\nIt is true that for a fixed image distribution, the clustering task in general gets easier as class semantics become more similar. But here, we mean specifically that the issue with meta-training with an end-to-end loss is an *optimization* issue. \\n\\nIndeed, to solve few-shot clustering end-to-end, the first step would be to define a differentiable clustering loss that we can optimize with gradient descent:\\n\\nLoss(\\\\theta) = ClusteringLoss( ClusteringAlgorithm( h_\\\\theta( support_images ) ) )\\n\\nThe first challenge is to ensure that ClusteringLoss and ClusteringAlgorithm (here, Sinkhorn K-Means) are differentiable with respect to their inputs. This is not trivial but doable (we can give more details if needed). \\n\\nHowever, making ClusteringAlgorithm and ClusteringLoss differentiable does not guarantee that they are smooth enough, and in fact they might be highly nonlinear. For instance, a small perturbation on \\\\theta could lead to completely different final centroids. Because all gradient descent proofs require some sort of regularity on functions, lack of smoothness might be the reason why end-to-end training does not work with CentroidNets.\\n\\nMore generally, the idea of replacing or combining the end-to-end loss with an auxiliary loss is not new and has been proposed and successfully implemented many times in the literature. See for instance:\\n- \\\"Hierarchical Graph Representation Learning with Differentiable Pooling\\\" (Section 3.3 of Ying et al 2018). 
They recognize that \\\"it can be difficult to train the pooling GNN using only [the end-to-end loss]\\\" and propose to use an \\\"an auxiliary link prediction objective\\\".\\n- \\\"Learning Longer-term Dependencies in RNNs with Auxiliary Losses\\\" (Trinh et al. 2018) which proposes auxiliary losses to alleviate the usual problems of BPTT (e.g. gradient vanishing) when only minimizing the end-to-end sequence-to-sequence loss. \\n- More generally, \\\"Limits of End-to-End Learning\\\" (Glasmachers 2017) state that \\\"end-to-end learning can be very inefficient for training neural network models composed of multiple non-trivial modules\\\", and characterize pros and cons of gradient-based end-to-end learning.\\n\\n\\n* Does CentroidNet really work without labels during ``meta-validation ?\\n\\nThis seems to be a terminology issue. The existing terminology is ambiguous (for instance \\u201ctesting/evaluation/validation\\u201d could either refer to [make predictions on new data] or [make predictions on new data + compute evaluation metrics]).\\nOur point is that after the meta-training phase, our method does not need any labels to cluster new support/query sets, as opposed to Prototypical Networks which does require the labels of the support set.\\nAny evaluation generally requires some notion of ground truth. The reviewer is correct that the labels are required to compute accuracies. This is standard in the learning to cluster literature (see for instance Hsu et al 2018).\"}", "{\"title\": \"Answer to Reviewer 3, Part 1/2\", \"comment\": \"We thank the reviewer for their thorough review and raising several valid points. We have done our best to answer them and we will improve the main paper accordingly (we have added some points to the appendix already). 
We hope that the reviewer will reconsider their score if we have addressed their concerns.\\n\\n\\n* \\\"CentroidNets uses a variety of tricks to improve performance, therefore it is unfair to not also consider tricks (such as R1) to improve Protonet performance\\\"\\n[Rephrased from Reviewer: \\u201cThe high performance of CentroidNet does not support the claim on the insufficient variety of the class semantics [...] ]\\n\\nIndeed, Sinkhorn K-Means is a key component in the performance of CentroidNets. However, it is not obvious that using Sinkhorn K-Means would be an unfair advantage compared to Prototypical Networks, for two reasons :\\n- In CentroidNets, we use Sinkhorn k-Means to attempt to recover the hidden class labels, i.e. to infer the ground-truth labels. In contrast, ProtoNets has direct access to the ground-truth labels (which incidentally turn out to be hard assignments and lead to unweighted averages).\\n- In CentroidNets, we run Sinkhorn k-Means on representations which were learned with the ProtoNet loss, i.e., they were by construction designed to be averaged without weights.\\n\\nHowever, in order to best address the reviewer\\u2019s concern, we go further and run new experiments on miniImageNet. This time we constrain the centroids to be unweighted averages of the data points. To do so, starting from the soft weights, we reassign each point only to its closest centroid, and compute the unweighted averages. The comparison between ProtoNets and CentroidNets is now fair in the sense that both prototypes and centroids use unweighted averages.\\n- Unsupervised Accuracy on miniImageNet is 0.5508 +/- 0.0072 for weighted average and 0.5497 +/- 0.0072 for unweighted average. The difference is not significant.\\n- Clustering Accuracy on miniImageNet is 0.6421 +/- 0.0069 for weighted average and 0.6417 +/- 0.0069 for unweighted average. 
The difference is also not significant.\\nWe\\u2019ll be happy to add these results to the paper if the reviewer thinks them valuable.\\n\\nTherefore, the new experiment suggests that using weighted averages does not bring an unfair advantage, and therefore does not invalidate our comparison. More generally, instead of trying to tune ProtoNets and CentroidNets as well as possible, we try to use comparable models for ProtoNets and CentroidNets (same architecture, nearly same representation).\\n\\n\\n* What is unsupervised Bayes accuracy?\\n\\nWe define the unsupervised Bayes accuracy of an unsupervised few-shot classification task distribution as the highest achievable unsupervised accuracy. Just like the usual Bayes error is limited by label noise, the unsupervised Bayes accuracy is limited by the cluster-semantic noise of a task.\\n\\nFor illustration, consider the following unsupervised few-shot classification task distribution:\\n- Uniformly sample a random dimension 1 <= j <= D (hidden to the algorithm)\\n- Sample (iid, probability=\\u00bd) random binary vectors (x_i) of dimension D (shown to the algorithm) and split them between support and query set.\\n- Assign binary labels y = x_j to each vector (x_i) (hidden to the algorithm).\\n- The goal is to cluster the support set and associate query set points with the support clusters.\\n\\nBecause the algorithm does not know which dimension j was sampled (i.e. the class semantic), it does not know how to cluster the support set. Therefore, it is just as good to make random predictions on the query set. Hence the unsupervised Bayes accuracy is 0.5.\\n\\nNow, consider the same task distribution, except the dimension index j is always fixed to 1. After meta-training, the algorithm can learn a representation mapping each vector to the value of its first dimension only. The support set can be clustered by grouping all 1s together, and all 0s together. 
Each query point can then be unambiguously assigned to one of the clusters. The resulting unsupervised Bayes accuracy is 1.\\n\\nBoth task distributions would become equivalent if the algorithm had access to the class semantics j. Therefore, the two unsupervised few-shot tasks differ in difficulty only because of the uncertainty/variability on class semantics, and this is reflected in the difference in unsupervised Bayes accuracy.\\n\\nIf this example is deemed helpful, we\\u2019ll be happy to add it to the paper.\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": [\"<Paper summary>\", \"The authors argue that the popular benchmark datasets, Omniglot and miniImageNet, are too simple to evaluate supervised few-shot classification methods due to their insufficient variety of class semantics. To validate this, the authors proposed clustering-based meta-learning method, called Centroid Network. Although it does not utilize supervision information during meta-evaluation, it can achieve high accuracies on Omniglot and miniImageNet. The authors also proposed a new metric to quantify the difficulty of meta-learning for few-shot classification.\", \"<Review summary>\", \"Although the main claim of this paper seems correct, it is not sufficiently supported by theory or experiments. My score is actually on the border line, but I currently vote for ``weak reject, because some points in the paper are ambiguous yet. Given clarifications in an author response, I would be willing to increase the score.\", \"<Details>\", \"Strength\", \"The paper is well-organized. 
In particular, the examples shown in the introduction greatly help in understanding what the authors argue in this paper.\", \"A novel study on quantifying the difficulty of meta-learning.\", \"The proposed CentroidNet performs well in the experiments.\", \"Weakness and concerns\", \"Does CentroidNet really work without labels during ``meta-validation\\\"? As far as I understand, ground truth clusters of the support set defined by the labels are required to compute the accuracies. Therefore, the labels seem to be required to validate the performance of the model. I think it should be ``meta-test.\\\"\", \"The authors state ``The most intuitive way to train ..., we did not have much success with this approach\\\" in 5.3, but it is counter-intuitive. If the class semantics are similar among episodes, ``the most intuitive way\\\" should work, because it can learn the common semantics via meta-training. Further discussion about why it does not work is required.\", \"The high performance of CentroidNet does not support the claim on the insufficient variety of the class semantics. According to the ablation study, adopting Sinkhorn K-means is the most important factor in improving the performance. It means that adopting a weighted average like in [R1] could also improve the performance of ProtoNet, which could remove the substantial difference in performance between ProtoNet and CentroidNet and thus deny the claim.\", \"The definition of CSCC is not convincing. First, I could not get the meaning of ``unsupervised Bayes accuracy\\\" (supervised Bayes accuracy means 1 - Bayes error rate, right?). Second, CSCC seems to mainly quantify the importance of the supervision information during meta-learning, which is not directly related to the difficulty of the few-shot learning problem. Intuitively, difficult few-shot learning problems should lead to lower supervised Bayes accuracy, which does not necessarily decrease CSCC. Third, what we can deduce by comparing CSCC values is not clarified in theory. 
The discussion in 6.3 is too subjective and specific to the case of training with ILSVRC/all datasets.\", \"This paper does not cite some closely related works [R1, R2].\", \"[R1] ``Infinite Mixture Prototypes for Few-Shot Learning,\\\" ICML2019\", \"[R2] ``A Closer Look at Few-shot Classification,\\\" ICLR2019\", \"Minor concerns that do not have an impact on the score\", \"Another arXiv paper related to this work: ``Rapid Learning or Feature Reuse? Towards Understanding the Effectiveness of MAML\\\"\"]}", "{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper is concerned with few-shot classification, both its benchmarks and the methods used to tackle it. The scope of the few-shot classification problem can be set relatively widely, depending on what data is available at what stage. In general, few-shot classification is an important ability of intelligent systems and arguably an area in which biological systems outperform current AI systems the most.\\n\\nThe paper makes a number of contributions. (1) It suggests an approach to do a specific type of clustering and compares it favorably to the existing literature. In a specific sense the approach does not use supervised labels (\\u201cwithout labels at meta-evaluation time\\u201d). (2) It applies that approach to currently existing datasets and achieves \\u201csurprisingly high accuracies\\u201d in that setting, with the implication that this shows a weakness in these datasets when used for benchmarking (\\u201ctoo easy\\u201d). (3) It further suggests a metric, dubbed \\u201cclass semantics consistency criterion\\u201d, that aims to quantify this shortcoming of current benchmarks on these datasets. 
(4) It assesses a specific meta-dataset using that metric, confirming it is harder in this sense, at least in specific settings.\\n\\nMy assessment of the paper is mildly negative; however this is an assessment with low confidence given that I am no expert on few-shot classification or related areas.\\n\\nWhile the authors first example (the \\u201cMongolian\\u201d alphabet of the Omniglot dataset and geometric shapes falling into different categories) illustrates the problem space well and is indeed quite intuitive, the same cannot be said about either the specific setting they consider nor the metric they propose. It\\u2019s not immediately clear that the other approaches from the literature they compare their method to were conceived for the setting considered here, or indeed optimized for it. The authors do show good accuracy on clustering Omniglot characters without using labels and thus indeed demonstrate a high amount of class semantics consistency for that dataset. The results on miniImageNet are less clear-cut, and the results of the evaluation of the meta-dataset appear to depend on the specific setting considered. This makes it unclear to what extent the proposed metric is general and predictive. To their credit, the authors state that in future work they are looking to make their metric \\u201cmore interpretable and less dependent on the backbone architectures\\u201d.\\n\\nI believe the paper might benefit from being given additional attention. 
A streamlined and more accessible version might well be an important contribution in the future.\"}", "{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper introduces a new method for learning to cluster without labels at meta-evaluation time and show that this method does as well as supervised methods on benchmarks with consistent class semantics. The authors propose a new metric for measuring the simplicity of a few-shot learning benchmark and demonstrate that it is possible to achieve high performance on Omniglot and miniImageNet with their unsupervised method, resulting in a high value of this criterion, whereas the Meta-Dataset is much more difficult.\\n\\nThe paper is well written and generally very clear. I appreciate that the authors have highlighted the limitations of both their clustering method (that it requires more assumptions than CCNs) and their benchmark. The centroid method itself seems to draw heavily on pre-existing work, but uses a new similarity metric that improves performance beyond the current state-of-the-art on few-shot clustering tasks.\\n\\nThe authors acknowledge that the approximate CSCC metric they define is not consistent across architectures and hyperparameters. It is also a fairly simple metric, but nonetheless represents a novel contribution.\\n\\nOverall I feel that the paper introduces a well-defined problem and makes a step toward quantifying and resolving it. The experiments do a thorough job supporting the arguments of the authors.\", \"i_have_only_minor_issues_that_could_help_with_the_clarity_of_this_paper\": \"I wasn\\u2019t sure what was meant by \\u201crelabeling\\u201d the query set predictions in the text below Figure 2. 
\\n\\nI would have appreciated some discussion as to why Sinkhorn distance might be expected to improve performance .\"}" ] }
H1lkYkrKDB
UNIVERSAL MODAL EMBEDDING OF DYNAMICS IN VIDEOS AND ITS APPLICATIONS
[ "Israr Ul Haq", "Yoshinobu Kawahara" ]
Extracting underlying dynamics of objects in image sequences is one of the challenging problems in computer vision. On the other hand, dynamic mode decomposition (DMD) has recently attracted attention as a way of obtaining modal representations of nonlinear dynamics from (general multivariate time-series) data without explicit prior knowledge about the dynamics. In this paper, we propose a convolutional autoencoder based DMD (CAE-DMD) that is an extended DMD (EDMD) approach, to extract underlying dynamics in videos. To this end, we develop a modified CAE model by incorporating DMD on the encoder, which gives a more meaningful compressed representation of input image sequences. On the reconstruction side, a decoder is used to minimize the reconstruction error after applying the DMD, which in result gives an accurate reconstruction of inputs. We empirically investigated the performance of CAE-DMD in two applications: background/foreground extraction and video classification, on publicly available datasets.
[ "Non-linear dynamics", "Convolutional Autoencoder", "Foreground modeling", "Video classification", "Dynamic mode decomposition" ]
Reject
https://openreview.net/pdf?id=H1lkYkrKDB
https://openreview.net/forum?id=H1lkYkrKDB
ICLR.cc/2020/Conference
2020
{ "note_id": [ "J5FabpB6SM", "SyeTbYISqH", "rJxRE-91cS", "Syl2HTO2tB" ], "note_type": [ "decision", "official_review", "official_review", "official_review" ], "note_created": [ 1576798733396, 1572329732999, 1571950901919, 1571749188095 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1826/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper1826/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper1826/AnonReviewer1" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"The paper focuses on extracting the underlying dynamics of objects in video frames, for background/foreground extraction and video classification. Generally speaking, the presentation of the paper should be improved. Novelty should be clarified, contrasting the proposed approach with the existing literature. All reviewers also agree that the experimental section is too weak in its current form.\", \"title\": \"Paper Decision\"}", "{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper presents an application of convolutional autoencoder networks and a nonlinear dynamical systems analysis method known as extended dynamic mode decomposition (EDMD) to a data-driven analysis of multivariate time series.\\nThe DMD method appears to be well-known in the physics community but is outside my area of expertise, and unfortunately I have limited time to make a quick study of it. However, from what I gather, it involves empirical approximation of a nonlinear dynamical system as a high-dimensional linear dynamical system, which in turn enables analysis in terms of an eigendecomposition of the resulting linear operator, revealing basic modes of the dynamics. 
In the proposed method, DMD is used in the latent representations of a convolutional autoencoder for image sequences. The DMD objective is incorporated into the autoencoder training loss to minimize its reconstruction error. The DMD is also used, by conditioning on the eigenvalues, to split the reconstruction into high-frequency (quickly varying) foreground modes and low-frequency (slowly varying) background modes. Although end-to-end training is mentioned, it is not made clear how the derivatives of the DMD are implemented, especially considering that the DMD involves an SVD, which can have unstable/singular derivatives when two or more singular values are close to or exactly the same. The resulting methods are applied to foreground extraction and classification tasks, and compared with numerous baselines. It is not clear to me what the state of the art is on these tasks, but the proposed methods compare favorably to the reported baselines, and the images of the results look convincing. However, the experimental results seem a little thin and I would expect a more thorough study. Overall the method looks very interesting.\", \"some_complaints\": [\"the tables are a bit sloppy and should be formatted to fit in the document with normal-sized fonts,\", \"the images are too small to see well.\"]}
My suggestion would be to explain how the model will be applied first (identify the required properties) to motivate the need for the learned basis and then present the DMD as a method for providing a basis that meets the properties required. I acknowledge that different communities have different styles of presentation so apologies if this is just me.\\n\\nFirst I would just like to check that I have understood correctly so please could the authors point out if I have missed something or misunderstood in the following?\\n\\nOur goal is to establish a basis invariant to the video dynamics that can then be used, for example, to partition the video into parts with differing dynamics - e.g. foreground/background. To do this we need to identify such a basis from a specific video - we will use the collection of pairs of neighboring frames.\\n\\nThe Koopman operator acts on a differential system to identify a function space invariant to the dynamics. If we instantiate this with a finite number of dimensions we can essentially establish the invariance as an eigenvalue problem. From this and our pairs of successive videos we can establish a vector basis for the space and then project the video into this basis. 
The spectral properties of the coefficients of the projection will determine whether something is static (omega = 0) or transitory in the scene, and these can be used to identify foreground and background.\\n\\nNext, there is the issue that this method operates in a linear domain with something like Gaussian noise, which is not a good fit for image-space videos, so the authors propose to identify the dynamics in a linear latent space determined by an autoencoder to handle the non-linear mapping to image space.\\n\\nI hope I have understood the main points?\\n\\nIf this is the case, I think that much more needs to be said about the second part, which is the essential novelty of the paper, with a discussion of the merits of different approaches and full details - at the moment there is just one small paragraph at the end of 4.2 which contains the majority of the contribution.\\n\\nMy main concern about the paper is that I find it very difficult to appreciate the efficacy of the method given the current presentation of the results. There are no error bars to ascertain significance for any of the results, and the summarization of multiple experiments to a single percentage gives very little insight into where this method works and where it doesn't. There are a number of ways that a dynamic prior could be added to a latent space, and it is unclear why we would expect this approach to be preferred given the evidence presented in the paper.\", \"other_notes\": \"I found that the notation is not always consistent and sometimes could be simplified - it is unclear whether some operators are convolutions or multiplications (vector or scalar). To me the asterisk does not represent straightforward multiplication, but it might be being used for this?\\n\\nCould Table 1 be placed in the experiments section rather than in the middle of page 5?\\n\\nDo the authors mean half the number of pixels or half the edge size (e.g. 
a quarter of the area) in terms of the latent space?\\n\\nPlease can all equations be numbered so that they can be referred to - there are no equation numbers in all of section 2.\"}", "{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have read many papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper at least twice and used my best judgement in assessing the paper.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper considers the problem of extracting the underlying dynamics of objects in video frames. The paper focuses on two major applications: background/foreground extraction and video classification. The paper proposes a method that first obtains latent vectors from a video sequence by training a neural network and then applies dynamic mode decomposition (by Schmid, 2010).\\n\\nThe paper is not well written, and even after reading it a second time I, unfortunately, have difficulties understanding the exact contributions and experiments.\", \"here_are_concrete_examples_that_lead_to_this_critique\": [\"Section 2: Letters are not defined (e.g., \\\\mathcal F is not defined when first used), and sentences are not finished and/or do not make sense.\", \"Sec. 5.1: not clear whether the experiment is only done for one sequence or for many. If done on only one sequence, this is not sufficient to demonstrate that the method works well; if done on more than one, then the results are not reported.\", \"Conclusion: States that ``this method can be applied to any multivariate-time series data to extract complex and non-linear dynamics''. That statement sounds overly general given the experimental evaluation.\"]}" ] }
rJgCOySYwH
Function Feature Learning of Neural Networks
[ "Guangcong Wang", "Jianhuang Lai", "Guangrun Wang", "Wenqi Liang" ]
We present a Function Feature Learning (FFL) method that can measure the similarity of non-convex neural networks. The function feature representation provides crucial insights into the understanding of the relations between different local solutions of identical neural networks. Unlike existing methods that use neuron activation vectors over a given dataset as neural network representation, FFL aligns weights of neural networks and projects them into a common function feature space by introducing a chain alignment rule. We investigate the function feature representation on Multi-Layer Perceptron (MLP), Convolutional Neural Network (CNN), and Recurrent Neural Network (RNN), finding that identical neural networks trained with different random initializations on different learning tasks by the Stochastic Gradient Descent (SGD) algorithm can be projected into different fixed points. This finding demonstrates the strong connection between different local solutions of identical neural networks and the equivalence of projected local solutions. With FFL, we also find that the semantics are often presented in a bottom-up way. Besides, FFL provides more insights into the structure of local solutions. Experiments on CIFAR-100, NameData, and tiny ImageNet datasets validate the effectiveness of the proposed method.
[ "neural networks", "ffl", "identical neural networks", "function feature learning", "function feature representation", "different local solutions", "different", "feature learning", "similarity", "crucial insights" ]
Reject
https://openreview.net/pdf?id=rJgCOySYwH
https://openreview.net/forum?id=rJgCOySYwH
ICLR.cc/2020/Conference
2020
{ "note_id": [ "a_0cc8A99b", "k0yT7dKj2H", "HyxIbqgriS", "rkezXwxHiS", "S1x3MHlHiH", "SJgHjqzIqr", "HygT-omAtB" ], "note_type": [ "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_review", "official_review" ], "note_created": [ 1577068589235, 1576798733365, 1573353982233, 1573353241529, 1573352724049, 1572379292940, 1571859204993 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Paper1824/Authors" ], [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1824/Authors" ], [ "ICLR.cc/2020/Conference/Paper1824/Authors" ], [ "ICLR.cc/2020/Conference/Paper1824/Authors" ], [ "ICLR.cc/2020/Conference/Paper1824/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper1824/AnonReviewer1" ] ], "structured_content_str": [ "{\"title\": \"Updated code and Further studies\", \"comment\": \"Updated code and further studies, please see:\", \"https\": \"//github.com/Wanggcong/SolutionSimilarityLearning\\n\\nThis is one of my favorite works! Suggestions are welcomed!\"}", "{\"decision\": \"Reject\", \"comment\": \"This paper tackles an important problem: understanding if different NN solutions are similar or different. In the current form, however, the main motivation for the approach, and what the empirical results tell us, remains unclear. I read the paper after the updates and after reading reviews and author responses, and still had difficulty understanding the goals and outcomes of the experiments (such as what exactly is being reported as test accuracy and what is meant by: \\\"High test accuracy means that assumptions are reasonable.\\\"). 
We highly recommend that the authors revisit the description of the motivation and approach based on comments from reviewers; further explain what is reported as test accuracy in the experiments; and more clearly highlight the insights obtain from the experiments.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Response to all reviewers\", \"comment\": [\"We thank all the reviewers for their helpful comments (while we still have a missing review from R3). We have revised the paper as suggested by the reviewers, and summarize the major changes as follows:\", \"Clearer explanations about several terminologies required by Reviewer2 are updated in Section 3.1.\", \"Clearer goals of the experiments and local solution/retrieval required by Reviewer2 are added in Section 4.\", \"Clearer explanations about the findings/insights of this paper required by Reviewer1 and Reviewer2 are updated in Section 3.1 and Section 4.\", \"Discussions about the usefulness of the proposed method required by Reviewer1 are added in Section 5.\", \"Discussions about the soundness of the proposed method required by Reviewer1.\", \"We would like to ask for the reviewers\\u2019 suggestions if it is allowed to have one more extra page to include more details and make the paper clearer. We targeted at 8 pages in the initial submission, but according to the reviewers\\u2019 comments, it will be helpful to have more details in the main text.\", \"We also try to eliminate each reviewer\\u2019s concerns one by one, which can be seen in the corresponding response.\"]}", "{\"title\": \"Response to Reviewer #1\", \"comment\": \"We thank the reviewer for the comments and would like to answer the reviewer\\u2019s questions as follows.\", \"q1\": \"I\\u2019m not 100% sure about the usefulness of the method. More insights into neural networks?\", \"a1\": \"We use the function feature representation together with the \\u201cLearning tells the truth\\u201d principle to validate some assumptions. 
We have polished Section 3.1 and Section 4 to make it clear.\\n\\nIn Section 4.1, 4.2 and 4.3, we validate that local solutions of the same learning tasks share a highly similar solution structure even though neural networks are non-convex functions. In Section 4.4, we found 1) local solutions of different network depths (PlainNet-5 and PlainNet-6) to a learning task share a similar structure. 2) Different network structures (plain and residual) partially share similar local solutions to a learning task. 3) Different activation functions (ReLU and LeakyReLU) lead to similar local solutions to a learning task. 4) SGD and Adam optimizers do not share similar local solutions to a learning task.\\n\\nDue to the non-convexity of the neural network, one could hardly know the properties of local solutions and even do not know if two given solutions are trained from the same learning task. This paper could take one small step towards this goal. Furthermore, one could use the chain aligned and projected solution to validate other useful assumptions. \\n\\nBesides, we would like to highlight the importance of weight similarity metric that could be under-explored. Weight similarity metrics could provide many potential benefits for machine learning:\\n1)Transfer learning. Transfer learning aims to find similar tasks that contain overlapping knowledge to help the learning of the target task. A good weight similarity metric can tell if two learning tasks are transferable. The low similarity of two tasks leads to negative transfer while high similarity brings positive transfer.\\n2)Ensemble learning. Ensemble learning aims to find a set of diverse learners to further boost performance. A good weight similarity metric can measure the similarity of a set of learners and select diverse ones for ensemble learning.\\n3)Non-convex optimization. 
A good weight similarity metric could be used to remove redundant local solutions and discover the underlying relations between different local solutions, which provides better gradient descent directions.\n\nWe have added these as future work in the conclusion section.\", \"q2\": \"The authors used a neural network - a black box model - to provide insights for other neural networks, also black box models?\", \"a2\": \"We analyze the similarity of neural networks by using the chain alignment rule and a linear function (FC layer), but not a black-box model. One can use other traditional linear classifiers instead. We are not sure if we have understood the reviewer\\u2019s point correctly. If not, please correct us.\", \"q3\": \"One assumption from this paper that networks trained with different initializations for the same subtasks produce the same local solution is wrong. Therefore, I\\u2019m not 100% sure whether the results produced from all the experiments are trustable.\", \"a3\": \"Assumptions could be wrong under some conditions (e.g., the Adam optimizer) while reasonable under others (e.g., SGD). We use a \\u201clearning tells the truth\\u201d principle to validate assumptions. High accuracy means the assumptions are reasonable. To eliminate the reviewer\\u2019s concerns, we remove that assumption and use the retrieval setting. We directly use aligned function features (without the FC layer for projection, no learning process, normalized) and cosine similarity to compute the similarity of aligned solutions. The solutions are generated as in Sections 4.1 and 4.2. We found that without that assumption, the results are still promising.
We report these results as follows.\", \"tinyimagenet\": \"cmc rank 1,5,10: 97.3,1,1 (using Chain 1)\ncmc rank 1,5,10: 98.8,99.8,1 (Chain 2)\", \"cifar100\": \"cmc rank 1,5,10: 95.7,99.3,99.8 (Chain 1)\ncmc rank 1,5,10: 97.3,99.6,99.7 (Chain 2)\n\nThese show that without the learning process, the similarity between aligned local solutions to a learning task is naturally higher than that of different learning tasks, leading to high retrieval results. We only compute the first two chains because the weight space is too large (without projecting into the low-dimensional space).\n\nWe would like to share our code and confirm that all of the authors\\u2019 information has been removed. Some related absolute paths could be invalid. The code can take a long time to create a solution set (e.g., 5000 trained models). As a simple example, we suggest the reviewers focus on these files:\nA. Solution set generation:\n--train.py\n--model/mlp.py\nB. Solution classification/retrieval, which includes the chain alignment rule and linear projection:\n--train_sup.py\n--model/meta_model_mlp.py\n\nThe result is easy to reproduce since it is naturally a simple classification/retrieval problem. Anonymous code at: https://anonymous.4open.science/r/74e46ebe-4023-4a85-86b6-19ee20c5070a/\"}", "{\"title\": \"Response to Reviewer #2\", \"comment\": \"We thank the reviewer for the kind review and suggestions. We have revised the paper according to the suggestions and would like to answer the reviewer\\u2019s questions as follows.\", \"q1\": \"What are the findings of the paper? The authors make the assumption that weights across the same layer (say layer #2) are somehow always going to learn similar values?\", \"a1\": \"We assume that weights across the same chain (say Layer-#1-#2-#3) could be affected by the permutation problem in different runs. A chain is defined as a sequence of layers of a neural network that begins with the first layer. Please find the definition on Page 4 of the previous manuscript.
That means we compare two nets chain by chain:\nNet1(Layer#1) <---> Net2(Layer#1)\nNet1(Layer#1-2) <---> Net2(Layer#1-2)\nNet1(Layer#1-2-3) <---> Net2(Layer#1-2-3)\n...\", \"we_also_set_a_baseline_that_compares_two_nets_layer_by_layer\": \"Net1(Layer#1) <---> Net2(Layer#1)\nNet1(Layer#2) <---> Net2(Layer#2)\nNet1(Layer#3) <---> Net2(Layer#3)\n...\nFigs. 2, 3 and 4 (Sections 4.1, 4.2 and 4.3) show that the chain alignment rule of the proposed method works. Fig. 1 (Section 2.2) shows the motivation of the chain alignment rule.\n\nWhat are the findings of the paper? In Section 3.1, we introduce a \\u201clearning tells the truth\\u201d principle. Given a specific assumption, we label a dataset based on the assumption. The dataset is split into a training set and a test set. If a trained model achieves high accuracy on the test set, we say the assumption is reasonable, and vice\\u00a0versa. The findings are listed as follows:\n1) In Sections 4.1, 4.2 and 4.3,\n--Assumption: the local solutions (weights) of the same learning task share a highly similar solution structure even though neural networks are non-convex functions.\n--Setting: based on this assumption, for each task (different runs by SGD), we assign a label to these local solutions. The solution set is split into a training set and a test set.\n--Learning and validation: using weight alignment and projection achieves high accuracy (98~99%) on the test set.\", \"conclusion\": \"the assumption holds and the function feature learning works.\n\n2) In Section 4.4,\n--Assumption: local solutions of different network depths (PlainNet-5 and PlainNet-6) of a learning task share a similar structure. (Learning and validation: yes, high classification/retrieval accuracy)\n--Assumption: different network structures (plain and residual) share similar local solutions of a learning task.
(Learning and validation: partially correct, residual nets affect the weights to some extent, moderate classification accuracy)\n--Assumption: different activation functions (ReLU vs. LeakyReLU) lead to similar local solutions of a learning task. (Learning and validation: yes, high accuracy).\n--Assumption: SGD and Adam optimizers lead to similar local solutions of a learning task. (Learning and validation: no, low accuracy).\n\nBesides the findings listed above, some other potential benefits are given in the response to Reviewer #1.\n\nQ2. What the term solution class means, or what the authors want the reviewer to believe it means. Also solution label?\", \"a2\": \"A solution class or solution label is assigned based on the assumption that local solutions of a learning task (weights in different runs) share the same class, while local solutions of different learning tasks have different class labels. Local solutions (trained weights of neural networks) and their assumed labels can be regarded as data points, which are used for function feature learning. We have provided a clearer definition of solution label and class in Section 3.1. Sections 4.1~4.3 show the details of the generation of solution sets.\n\nQ3. Can you elaborate more on the goals of the experiments?\", \"a3\": \"We have added these in the introduction of Section 4. For each experiment, we have added some words to clarify the goals of the experiments. Specifically, Sections 4.1~4.3 assume that under the SGD optimizer condition, local solutions of each task (different runs) are highly similar. In Section 4.4, we change one factor of the baseline to form one new setting each time. Under a new condition, we investigate if the assumption in Sections 4.1~4.3 still holds.
We also study if the proposed function feature representation that is trained under the baseline condition remains applicable under another condition.\", \"q4\": \"Can you elaborate on the goal of local solution classification?\", \"a4\": \"We have added these in the introduction of Section 4. Local solution classification is to validate that, under the label assumption, the rule/knowledge learned from a training set also holds on the test set. Local solution retrieval aims to validate if the function feature representation can be used for unseen solution classes, because solution classes between a training set and a test set are non-overlapping in the retrieval setting. These protocols follow image classification and retrieval and thus have the same motivation.\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"I first wanted to thank the authors for their proposed approach in this paper. The paper discusses an interesting idea for quantifying the similarity of neural networks, based on weight similarity. However, overall this assumption is based on the fact that similar layers within a particular architecture will learn similar semantics across different runs (and surprisingly the authors add tasks to this as well, which I don't understand why this is the case).\n\nUnfortunately, the paper is hard to read. It is not easy to understand the research questions and experimental setup, partly due to overloaded terms, such as \\u201csolution\\u201d being used for describing multiple different concepts across the paper (solution class, local solution classification, local solution retrieval, none of them particularly well defined in the paper). I highly recommend that the authors describe their findings in a more concrete manner.
I did not see any attempts at giving *insights*, mostly numerical comparison.\", \"my_concerns_with_the_methodology_of_the_paper_are_as_follows\": \"1. What are the findings of the paper? Permutation of the neural networks is certainly an area worth studying. However, in this paper, the authors make the assumption that weights across the same layer (say layer #2) are somehow always going to learn similar values (except in permutation) across different runs of the model? Major clarification in this area is required.\n\n2. I am not quite sure what the term solution class means, or what the authors want the reviewer to believe it means. Please elaborate. Also solution label? This terminology seems a bit obscure and cumbersome, unless properly defined at the beginning of the paper.\n\n3. Can you elaborate more on the goals of the experiments? Right now the outcomes of the experiments are a bit vague due to a lack of hypotheses.\n\n4. Can you elaborate what the goal of local solution classification is? It is not clear if this is simply the classification accuracy of a trained model, or what is vaguely described in Section 3.3.\n\nI will make a re-evaluation of the paper after the above questions are answered. Overall, I suggest a rewrite of the paper to make the claims, experimental hypotheses and design more clear.\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper presents a method called \\u2018function feature learning\\u2019 which does not learn the data distribution but the parameter distributions of several neural network types.
The main idea is to generate many weights from different NNs trained with different random initializations for different subtasks and use them as training data for \\u2018function feature learning\\u2019. The experiments were done on three different datasets.\n\nOverall, the idea is quite interesting and new. However, I\\u2019m not 100% sure about the usefulness of the method. The authors claimed to provide more insights into neural networks with their method, which I did not see when reading the paper. Furthermore, the authors used a neural network - a black box model - to provide insights for other neural networks, also black box models. It sounds odd, doesn\\u2019t it? Moreover, one assumption from this paper that networks trained with different initializations for the same subtasks produce the same local solution is wrong. Therefore, I\\u2019m not 100% sure whether the results produced from all the experiments are trustable.\n\nIn sum, I rate this paper as a borderline paper and lean towards rejection due to several aforementioned uncertain points.\"}" ] }
r1eCukHYDH
Manifold Learning and Alignment with Generative Adversarial Networks
[ "Jiseob Kim", "Seungjae Jung", "Hyundo Lee", "Byoung-Tak Zhang" ]
We present a generative adversarial network (GAN) that conducts manifold learning and alignment (MLA): A task to learn the multi-manifold structure underlying data and to align those manifolds without any correspondence information. Our main idea is to exploit the powerful abstraction ability of encoder architecture. Specifically, we define multiple generators to model multiple manifolds, but in a particular way that their inverse maps can be commonly represented by a single smooth encoder. Then, the abstraction ability of the encoder enforces semantic similarities between the generators and gives a plausibly aligned embedding in the latent space. In experiments with MNIST, 3D-Chair, and UT-Zap50k datasets, we demonstrate the superiority of our model in learning the manifolds by FID scores and in aligning the manifolds by disentanglement scores. Furthermore, by virtue of the abstractive modeling, we show that our model can generate data from an untrained manifold, which is unique to our model.
[ "Generative Adversarial Networks", "Manifold Learning", "Manifold Alignment" ]
Reject
https://openreview.net/pdf?id=r1eCukHYDH
https://openreview.net/forum?id=r1eCukHYDH
ICLR.cc/2020/Conference
2020
{ "note_id": [ "WnqIRMPcn", "HygfSsOnor", "SkxgAi42sH", "rJlf6PbKjS", "rJgkJHaGoH", "B1xkYmpzjH", "HkxhczaGor", "BJlU_xM-5S", "B1evrRiTKS", "S1lO7zYNtr" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798733334, 1573845817742, 1573829576237, 1573619642476, 1573209302534, 1573208950600, 1573208724122, 1572049005607, 1571827262642, 1571226144447 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1823/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper1823/Authors" ], [ "ICLR.cc/2020/Conference/Paper1823/Authors" ], [ "ICLR.cc/2020/Conference/Paper1823/Authors" ], [ "ICLR.cc/2020/Conference/Paper1823/Authors" ], [ "ICLR.cc/2020/Conference/Paper1823/Authors" ], [ "ICLR.cc/2020/Conference/Paper1823/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper1823/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper1823/AnonReviewer2" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"This work proposes a GAN architecture that aims to align the latent representations of the generator with different interpretable degrees of freedom of the underlying data (e.g., size, pose).\\n\\nReviewers found this paper well-motivated and the proposed method to be technically sound. However, they cast some doubts about the novelty of the approach, specifically with respect to DMWGAN and MADGAN. The AC shares these concerns and concludes that this paper will greatly benefit from an additional reviewing cycle that addresses the remaining concerns.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Updating score\", \"comment\": \"I greatly appreciate the effort by the authors to further contextualize their results, as well as provide updates to their table. 
I've increased my score to 6 in light of this.\\n\\nRegarding incrementality, it seems the original DMWGAN authors proposed using something quite similar to the author's proposed regularizer (without the log)--i.e., the DMWGAN authors monitor Tr[Cov[x]] as a quality measure, and the present manuscript optimizes Log[Tr[Cov[x]]]. While I agree that clearly nontrivial effort went forth into justifying this choice of regularizer--could the authors comment on the similarity?\"}", "{\"title\": \"We conducted the suggested experiments and added the results.\", \"comment\": \"Dear R3,\\n\\nWe conducted several additional experiments you suggested and updated the manuscript with the results.\\n\\n------------------------------------------------------------------------------------------\\nPlease see\\n- Appendix F, regarding your comment#1 (as we have already answered in our previous comment).\\n- Appendix G, regarding your comment#2 (manifold alignment performance of MADGAN) and comment#3 (the effect of using different $\\\\lambda$'s).\\n- Appendix H, regarding your comment#3 (the effect of using different $\\\\mu$'s).\\n- Updated Table 1, regarding your comment#4.\\n------------------------------------------------------------------------------------------\", \"brief_summary_of_what_we_found_from_the_additional_experiments\": [\"Using different $\\\\lambda$'s did not significantly affected the performance.\", \"Using different $\\\\mu$'s brings up a trade-off situation between the sample quality (manifold estimation quality) and the manifold alignment quality. The value we used was around the middle of the two extremes.\", \"Instead of MADGAN, we tested a (MADGAN-like) DMWGAN model where the parameters of the first three layers are shared among the generators. 
This model showed slightly better performance than DMWGAN, but far worse than the MLA-GAN.\"]}", "{\"title\": \"The scores of DMWGAN are added in Table 1\", \"comment\": \"Dear reviewers\n\nAs suggested by R1 and R3, we just filled in the scores of DMWGAN in Table 1 (please see the updated manuscript). Note that these scores are obtained from the best-performing setting among those we have examined, to be fair to DMWGAN; we tried adding/removing the BatchNorm layers and using different division numbers (which effectively changes the number of hidden units in every layer). The detailed experimental setting can be found in Appendix C.\n\nTo briefly discuss the added scores, we can first see that the disentanglement scores of DMWGAN are very close to the base value (which is one), the worst among all models. This is the expected result, however, since DMWGAN uses a set of uncorrelated generators which gives no meaningful alignment in the latent space (see Figure 1, middle, as a reminder). Secondly, we can see that the FID score of DMWGAN on the 3D-Chair dataset is also quite bad. We carefully speculate that the effectiveness of the MI regularizer in DMWGAN has saturated here, since it becomes hard to assign each generator to each datum as the dataset becomes more complex.\n\nWe will update the results of the other experiments suggested by R3 very soon (the effect of weight sharing in the first few layers as in MADGAN; the effect of regularization weights $\\lambda$, $\\mu$).\"}", "{\"title\": \"Thank you for your supportive comments.\", \"comment\": \"Thank you for your supportive comments.\n\nYou are correct about the strength of the regularization. Our formulation is indeed not the most general one that meets the consistency of encoders. However, we would like to note that this does not substantially restrict the expressiveness of our model as we use multiple layers.
Although the top linear layer would only produce a set of parallel manifolds, the following nonlinear activation bends or folds the manifolds at different points. Processed similarly by the rest of the linear and nonlinear layers, the resulting manifolds in the bottom layer are much more complex than just being parallel to each other.\\n\\nNevertheless, it is true that intersecting manifolds are hard to be represented by our model, as you pointed out. In practice, however, the intersecting manifolds are successfully learned in the form of manifolds with small gaps at the intersecting points. You can observe this in Zap50k results in Figure 3, the fourth column from the right, \\\"the high heels.\\\" Although each of the four manifolds presents a different class of shoes, they seem to intersect near the high heels, and our model shows no difficulty in modeling such an intersection (except the intersection could have been approximated as a small gap).\"}", "{\"title\": \"Thank you for your supportive comments.\", \"comment\": \"Thank you for your supportive comments. We would like to address the points one by one.\\n\\n1. We agree that the number of generators ($A$) is one of the crucial factors in modeling. In fact, the number of classes in the dataset is not the optimal number, but only a reasonable number for $A$. In this regard, we conducted additional experiments with MNIST using different $A$'s, as shown in Figure F.3. It can be seen that our model, regardless of the different $A$ values, performs consistently better than the baseline WGAN model. Interestingly, $A=10$ was not the best setting for MNIST; it was $A=25$. This suggests a need for learning $A$ from data, but we think this is beyond our current scope, as discussed in Sec. 2.3.2.\\n\\n2. This is a very good point. Although the original purpose of sharing weights in MADGAN was to avoid redundant computations, it is definitely worth checking if this design contributes to the manifold alignment. 
We are working on reproducing the MADGAN experiments, and we will report the results in a few days.\\n\\n3. Good point. Note the current version already includes the results for $\\\\lambda=0$ (see Table 1 and Appendix G). We will add the results for different $\\\\lambda$'s and $\\\\mu$'s soon.\\n\\n4. As the generators of DMWGAN are not at all correlated to each other, we initially thought that showing the manifold-alignment performance of DMWGAN makes little sense. We will report these scores, both for MNIST and 3D-Chair, in a couple of days (InfoGAN scores are filled now).\", \"typos\": \"We have fixed the typos, if not all. We will review the text more thoroughly and revise it before submitting the final version.\"}", "{\"title\": \"Thank you for your valuable comments.\", \"comment\": \"Thank you for your valuable comments.\\n\\nWe believe your concerns arose mainly from the quantitative results in Table 1. We would like to resolve the concerns by analyzing Table 1 in detail, but before that, we invite you to see the qualitative results from Figure H.6 in Appendix H for the better discussion.\\n\\nBy comparing (a) and (c) of Figure J.8, we can clearly see the superior performance of MLA-GAN over DMWGAN (Note the samples are arranged in the same manner as Figure 3.). Column-wise, we see the samples share the same smooth features (e.g., stroke, slant) in MLA-GAN, but this is not the case at all in DMWGAN. Row-wise, we see the generators of MLA-GAN present distinct digit manifolds with a clean separation, whereas the generators of DMWGAN present manifolds involving a few crossovers between the digits; this is likely because the generators of MLA-GAN are enforced to share the smooth features, driving more regular manifold structures in all the generators.\\n\\nFrom these clear differences and benefits, we disagree that MLA-GAN is incremental to DMWGAN. 
Most of all, our principal objective was not only the multi-manifold learning but also the manifold alignment, but DMWGAN cannot perform the manifold alignment, as illustrated in Figure 1 and discussed in Section 3. Also, we would like to emphasize that the generalizability of MLA-GAN to an untrained manifold, demonstrated in the style-transfer experiments, is another very distinct property of MLA-GAN over other models.\n\nThat being said, we agree that Table 1 is lacking information about DMWGAN. We are currently running the experiments, and we assure you that all the unfilled scores will be reported in a couple of days (the reason that we did not report the disentanglement scores at first was that DMWGAN has almost nothing to do with the manifold alignment).\n\nYour last concern about Table 1 was that the disentanglement scores of our model are inferior to those of $\\beta$-VAE. But as we have pointed out in Section 4.2, the manifold learning performance (FID) of $\\beta$-VAE is far worse than that of our model (see also Figure J.8 (e) and Figure J.9 (b)). We emphasize that the FID and the disentanglement scores should be simultaneously considered to evaluate the MLA task, and MLA-GAN shows the best performance in that regard.\", \"typos_and_grammatical_errors\": \"We have fixed most of them, if not all. We will review the text more thoroughly and revise it before submitting the final version.\n\n== Minor Edit ==\nWe updated some of the appendix-figure indices in this comment, since they changed as we added more figures in the revised manuscript.\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"EDIT: Updated score to weak Accept in light of the author's response.
See below for more details.\\n\\nThe authors propose a GAN architecture that aims to align the latent representations of the GAN with different interpretable degrees of freedom of the underlying data (e.g., size, pose). While the text, motivation, and experiments are fairly clear, there are some spelling/grammar mistakes throughout, and the draft could use a solid pass for overall clarity.\\n\\nWhile the idea of using the log trace covariance as a regularizer for the manifold is certainly interesting, it seems fairly incremental upon previous work (i.e., DMWGAN). Even modulo the work being incremental, I still have concerns regarding the comparison to baselines/overall impact, and thus I suggest a Weak Rejection.\\n\\nTable 1 seems to indicate that the author's proposed method is on par with or worse than every method compared against except for 3D chair (bright). Additionally, the lack of comparison against DMWGAN for every task (except the first) is a bit concerning, considering its similarity to the proposed method. If the authors could check DMWGAN's performance for all of their tasks and report it, I would be more likely to raise my score.\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"The paper suggests performing simultaneous manifold learning and alignment by multiple generators that share common weight matrices and a constructed inverse map that is instantiated by a single encoder. The method utilizes a special regularizer to guide the training. 
It has been empirically well tested on a multi-manifold learning task, manifold alignment, feature disentanglement, and style transfer.\nOverall, this is an interesting idea with a motivated approach; however, I would like several points to be addressed before I could increase my score.\n1. It seems that the method makes direct use of the number of classes in the datasets used. How would it fare compared to other models when the number of manifolds is not known (e.g. the CelebA dataset)?\n2. In MADGAN (Ghosh et al., 2017) the generators share their first layers, which possibly makes them not independent as claimed in the paper; thus it is worth checking if MADGAN exhibits any kind of manifold alignment and could be a baseline for disentanglement with multiple generators.\n3. There are hyperparameters \\lambda and \\mu for the regularizers in the model. It would be helpful to study the effect of their different values on the training and encoding.\n4. Is there a reason for the DMWGAN/InfoGAN scores being omitted in Table 1?\n\nMinor remark - there are a number of typos in the text.\"}", "{\"rating\": \"6: Weak Accept\", \"experience_assessment\": \"I have published one or two papers in this area.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I made a quick assessment of this paper.\", \"title\": \"Official Blind Review #2\", \"review\": \"The paper's idea is to train a joint Wasserstein GAN for k datasets given in R^n (corresponding, e.g., to different classes of objects), together with manifold alignment of the k manifolds.\n\nThe idea is to align points whose corresponding latent points are the same. This induces a natural constraint: the k encoder functions (inverses of the generators) should be consistent with each other. This is done by adding a regularization term. The paper demonstrates a clear motivation and a working solution for the problem.
Experiments are convincing.\n\nThe only question is that the regularization term forces something stronger than just consistency of encoders. It seems the requirement that \\\"all tangential components of biases are the same\\\" means that the k images of the latent space (under the k generator functions) either coincide or are non-intersecting. This is much stronger than just consistency, which is the weakest part of the approach.\"}" ] }
ByeadyrtPB
Learning Deep-Latent Hierarchies by Stacking Wasserstein Autoencoders
[ "Benoit Gaujac", "Ilya Feige", "David Barber" ]
Probabilistic models with hierarchical-latent-variable structures provide state-of-the-art results amongst non-autoregressive, unsupervised density-based models. However, the most common approach to training such models based on Variational Autoencoders often fails to leverage deep-latent hierarchies; successful approaches require complex inference and optimisation schemes. Optimal Transport is an alternative, non-likelihood-based framework for training generative models with appealing theoretical properties, in principle allowing easier training convergence between distributions. In this work we propose a novel approach to training models with deep-latent hierarchies based on Optimal Transport, without the need for highly bespoke models and inference networks. We show that our method enables the generative model to fully leverage its deep-latent hierarchy, and that in-so-doing, it is more effective than the original Wasserstein Autoencoder with Maximum Mean Discrepancy divergence.
[ "Generative modelling", "Optimal Transport" ]
Reject
https://openreview.net/pdf?id=ByeadyrtPB
https://openreview.net/forum?id=ByeadyrtPB
ICLR.cc/2020/Conference
2020
{ "note_id": [ "7A-2b_L_bH", "rkl9gazPjB", "rJlkhDzwjB", "SyxkyrGvsH", "BJllJxEH5B", "BJx5f3iAKr", "rJgzs5lwtr" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798733305, 1573494002311, 1573492646597, 1573491927071, 1572319192001, 1571892241806, 1571388058410 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1822/Authors" ], [ "ICLR.cc/2020/Conference/Paper1822/Authors" ], [ "ICLR.cc/2020/Conference/Paper1822/Authors" ], [ "ICLR.cc/2020/Conference/Paper1822/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper1822/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper1822/AnonReviewer1" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"The paper received 6, 3, 1. The main criticism is the lack of quantitative evaluation/comparison. The rebuttal did not convince the last reviewer who strongly argues for a comparison. The authors are encouraged to add additional results and resubmit to a future venue.\", \"title\": \"Paper Decision\"}", "{\"title\": \"On the comparison with prior works.\", \"comment\": \"We thank the reviewer for their feedback.\", \"the_reviewer_identified_3_areas_on_which_they_felt_the_draft_could_be_stronger\": \"(i) incrementality versus[1], (ii) quantitative evaluations of Stacked WAE versus other methods, and (iii) purported lack of significance in addition to the standard WAE framework. We believe that, unfortunately, this review has misunderstood our work. 
In particular:\\n\\n(i) the comparison with [1] misunderstands the difference between our approach to \\\"stacking\\\" WAE losses and the \\\"nesting\\\" of Wasserstein distances in [1],\\n\\n(ii) the lack of quantitative comparisons is a result of our work using a novel non-likelihood based objective, which prohibits natural single-metric comparisons, and\\n\\n(iii) our work is a mathematically clean and qualitatively incremental contribution on top of the existing WAE approach, enabling the training of latent-variable models that the WAE outright fails to train.\\n\\nWe address each of these points in turn.\\n\\n[1] propose to nest Wasserstein distances in the sense that they use the Wasserstein distance in the space of the images pixels as their $\\\\textit{ground metric}$. They then use the dual formulation of the 1-Wasserstein distance (in the image space) to derive an adversarial objective for training generative models. This differs from our work in 2 paradigmatic ways. Firstly, while [1] use the \\\"nested\\\" Wasserstein distance as their ground metric for the Wasserstein distance, we use the \\\"nested\\\" Wasserstein distance as a regularisation term on the space of latent distributions in the formulation of the WAE objective. Secondly, the objective in[1] is trained using an adversarial scheme and thus, no encoder network allows for the mapping from the observation space to the latent space. In our work, we are interested in training deep-hierarchical generative models in the autoencoder framework, with an encoder network allowing us to learn a meaningful latent manifold. 
The role of the \"nested\" Wasserstein distance in these two works is thus only the same in nomenclature: [1] actually $\\textit{nest}$ a Wasserstein distance in the pixel space as their ground metric, while we $\\textit{stack}$ a Wasserstein distance as a latent regulariser in the WAE objective.\n\nWe agree with the reviewer that a rigorous comparison with existing methods is important. That said, the form of the WAE loss makes such a comparison hard. Indeed, in the WAE, the relaxation of the hard constraint on the coupling of the data distribution and the generative distribution introduces a hyperparameter that will be tuned for each experiment. Moreover, the WAE objective is a likelihood-free method, making it hard to compare with common likelihood-based methods. A good metric that enables comparisons between likelihood and non-likelihood methods remains to be discovered. One attempt at comparing generative models trained with non-comparable objectives is to use sample-based metrics such as the FID score ([2]). However, given the data sets considered in our work, we felt that such a metric would not be relevant. Despite this, we do perform a qualitative comparison with the original WAE method when training deep hierarchical models. We intuitively explain why WAEs would fail in training deep hierarchical latent models in section 2.2 (see Equation (7)) and then show empirically in section 3.1 that it is indeed the case (see Figure 4). 
While the 5-layer generative model trained as a WAE achieved good reconstructions (Figure 4a), the samples are significantly worse than those obtained using our Stacked WAE (Figure 4b versus Figure 2b) and no structure was learnt in the deep latent space (Figure 4c versus Figure 2c).\n\nFinally, while our Stacked WAE method is indeed built on the well-known WAE objective and consists of stacking WAE modules on top of each other, the novelty resides in the way we unroll the original WAE objective, using WAEs as latent regularisers at each layer, enabling the hierarchical model to leverage all of its deep layers. This allows for the propagation of information from the observation space all the way to the deepest latent layer in fully factorised Markov models, and by doing so, it captures the data structure all along the hierarchy. This result, which we clearly demonstrate, is something that both WAEs and VAEs outright fail at. In this sense, we do not consider our work to be an insignificant contribution on top of the pre-existing WAE framework.\n\nWe hope that this review might be amended, given that it seems to have misunderstood both our work and the relevant literature.\n\n[1]: Y. Dukler, W. Li, A. Lin and G. Montufar. Wasserstein of Wasserstein Loss for Learning Generative Models. In International Conference on Machine Learning, 2019.\n[2]: M. Heusel, H. Ramsauer, T. Unterthiner, B. Nessler, and S. Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. In Advances in neural information processing systems, 2017.\"}", "{\"title\": \"On the motivation behind training deep latent hierarchical generative models.\", \"comment\": \"We appreciate the reviewer's detailed feedback.\n\nThere are many differences between our work and that of [1], but they stem predominantly from a significant difference in motivation. 
In [1], the authors train a single-latent-layer generative model (in evident contrast to our work) with a bespoke architecture for the encoder and decoder aiming at capturing hierarchical structure in the data and learning disentangled representations. In our work on the other hand, the goal was to show that using the Stacked WAE objective, a deep-hierarchical-latent model can be trained, in principle improving generative capacity over shallower generative models. \\n\\nWhile we acknowledge that one of the motivations behind using hierarchical latent-variable models is the discovery of hierarchical representations (as [1] sought to do), we focus on improving the ability of generative models to learn deep latent hierarchies (similarly to [2], [3]). That is, our motivation is to methodologically enable the training of deep latent hierarchies. Indeed, as explained in Section 2.2 and shown in Section 3.1, the Stacked WAE method allows for better training of deep hierarchical generative models than the original WAE framework. More specifically, it is able to learn an approximate posterior over all the latent layers as opposed to the WAE, and without the need for skip connections and weight sharing in the encoder and decoder networks unlike VAE methods ([2], [3]). \\n\\nWe admit that this leads to debatable choices for the generative models considered. For example, in the MNIST experiment in Section 3.1, we trained a generative model that is surely too deep for this simple data set (we use 5 latent layers while [1] have only 3 levels in their hierarchy). The intention of our work was not to learn MNIST well, but to show that a 5-layer latent-variable model can actually be trained on MNIST (a feat that requires significant architecture and optimisation design in the VAE setting, see Figure 6 of [2]). 
This is why we did not, for example, carefully interpret our latent hierarchies, despite that being an interesting question.\\n\\nTo make our motivation clearer to readers, in particular in contrast to [1], we have added a short discussion to the introduction. We believe that the motivational differences between our work and that of [1] justify the shortcomings pointed out by the reviewer, and in this context hope that the reviewer would agree to amending their rating to a \\\"weak accept\\\".\\n\\n[1]: S. Zhao, J. Song, and S. Ermon. Learning hierarchical features from deep generative models. In International Conference on Machine Learning, 2017.\\n[2]: C. K. S\\u00f8nderby, T. Raiko, L. Maal\\u00f8e, S. K. S\\u00f8nderby, and O. Winther. Ladder variational autoencoders. In Advances in neural information processing systems, 2016.\\n[3]: L. Maal\\u00f8e, M. Fraccaro, V. Li\\u00e9vin, and O. Winther. BIVA: a very deep hierarchy of latent variables for generative modeling. In Advances in neural information processing systems, 2019.\"}", "{\"title\": \"Thank you\", \"comment\": \"We thank the reviewer for their positive feedback.\"}", "{\"experience_assessment\": \"I do not know much about this area.\", \"rating\": \"6: Weak Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper presents a deep, latent variable model for unsupervised data modeling problems. The problem with such latent, deep generative models is that they are difficult to train reliably. In this paper, the authors provide an approach based on stacked Wasserstein autoencoders to train deep latent variable models. 
Experimental results are demonstrated on various image datasets and the latent codes are demonstrated to have an interpretable meaning.\\nI like the inference techniques in the paper and like the ideas presented in this paper.\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"In this paper, a hierarchical extension to Wasserstein Autoencoders (WAE) is proposed, where the latent variables are stacked in a multi-layer structure. In the proposed model, the divergence function in WAE is viewed as a relaxed WS distance. Therefore, another layer of WAE can be stacked to minimise the WS distance. In this way, a hierarchical model can be built to learn hierarchical representations.\\n\\nI think the idea of viewing the divergence in WAE as a relaxed WS distance and then minimising it with another WAE structure is interesting, intuitive and straightforward. However, the advantages of the proposed model over WAE and VLAE (S.Zhao et.al 2017) are less obvious to me. It is a bit hard for me to tell whether the hierarchical latent variables help to improve quantitative results, generate better images, or learn intuitive hierarchical representations, which is the main reason that I go to mild rejection.\\n\\nFor example, I would expect to see similar things as in VLAE, where the representations in different layers capture hierarchical structures or disentanglements. But in the proposed model, it seems to be hard to see the differences between the hierarchical representations such as in Figure 3(b). Also in the two-dimensional visualisation of Figure 3(a), it is a bit hard for me to intuitively understand what the representations really capture. 
\\n\\nFrom the graphical model point of view, the proposed model is a hierarchical Gaussian model and the inference (although with WAE) is in the flavour of Gibbs sampling, which propagates information layer-wisely from bottom up. Conventionally, a hierarchical Gaussian model is hard to work with many layers such as 5. Therefore, I may suggest improving in case of fewer layers.\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The paper aims to develop a deep generative model, which -unlike VAEs or GANs- comprises a hierarchy of latent variables rather than a direct map from the stochastic latent manifold to the observation space. To this end, the paper builds a training objective based on nesting the Wasserstein distance between the data distribution and its estimation arbitrarily many times. The generated objective corresponds naturally to a deep hierarchical generative model.\\n\\nThe principled approach followed to achieve the objective is solid and elegant. It is also intuitive and matches nicely with some valid observations highlighted in the paper such as insufficiency of by-passing intermediate latent variables (sentence above the Sec 2.3 title).\\n\\nOne major weakness of the paper is that it lacks a sufficient argumentation about how it differentiates from earlier attempts to nest Wasserstein distances. For instance,\\n\\nY. Dukler et al., \\\"Wasserstein of Wasserstein Loss for Learning Generative Models\\\", ICML, 2019\\n\\nApart from the theoretical argumentation, the paper should also compare their solution to this prior work on a number of benchmarks.\\n\\nAnother major weakness is that the paper lacks a quantitative evaluation scheme for its success. 
The experiments section starts with the claim that the proposed method \\\"significantly\\\" improves on the WAE, which I fail to see on the plots. \\n\\nLastly, Having said that the proposed method is novel and elegant, it is still a straightforward extension of the existing and well-known Wasserstein Auto-Encoder (WAE) approach. It extends WAEs by repetitively applying the tricks proposed by this earlier work, putting aside some minor additional adjustments.\", \"minor_on_style\": \"The abstract does not give any single hint about the methodological novelty of the work.\\n\\n---\", \"post_rebuttal\": \"Thanks to authors for their effort for clarifications. Yet, I'm afraid the author response does not touch at all to any of the concerns I have raised. There are well-known ways to compare the success of generative models, FID being one of them as the authors point out. Another could be the test log-likelihood of a synthetic data set the true distribution of which can be predesigned. I understand the issues the authors raise about the difficulties in comparing generative models, but I kindly disagree with the attitude that there are no ways to compare, so we are obliged to live with qualitative comparisons. If a one-score comparison is not enough, the right way to go is to provide multiple scores. If direct metrics are not feasible, one should go for indirect ones, but should still provide outcomes a reader can reproduce.\"}" ] }
rygT_JHtDr
Scalable Deep Neural Networks via Low-Rank Matrix Factorization
[ "Atsushi Yaguchi", "Taiji Suzuki", "Shuhei Nitta", "Yukinobu Sakata", "Akiyuki Tanizawa" ]
Compressing deep neural networks (DNNs) is important for real-world applications operating on resource-constrained devices. However, it is difficult to change the model size once training is completed, which means re-training is needed to configure models suitable for different devices. In this paper, we propose a novel method that enables DNNs to flexibly change their size after training. We factorize the weight matrices of the DNNs via singular value decomposition (SVD) and change their ranks according to the target size. In contrast with existing methods, we introduce simple criteria that characterize the importance of each basis and layer, which makes it possible to reduce model complexity while increasing the error as little as possible. In experiments on multiple image-classification tasks, our method exhibits favorable performance compared with other methods.
[ "Deep Learning", "Deep Neural Networks", "Low-Rank Matrix Factorization", "Model Compression" ]
Reject
https://openreview.net/pdf?id=rygT_JHtDr
https://openreview.net/forum?id=rygT_JHtDr
ICLR.cc/2020/Conference
2020
{ "note_id": [ "YKkJOBZnJ", "SkgnNjL2sB", "BJlF7FLnoB", "HyxHmUIhoB", "ByxS6g82oS", "BJgsbqjy5r", "r1eS584AYH", "Skx9EjdpKH", "SJx6uf6sFr", "rkxGJainur" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review", "official_comment", "comment" ], "note_created": [ 1576798733263, 1573837619672, 1573837089116, 1573836317057, 1573834940782, 1571957251227, 1571862157362, 1571814193820, 1571701365279, 1570712793522 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1821/Authors" ], [ "ICLR.cc/2020/Conference/Paper1821/Authors" ], [ "ICLR.cc/2020/Conference/Paper1821/Authors" ], [ "ICLR.cc/2020/Conference/Paper1821/Authors" ], [ "ICLR.cc/2020/Conference/Paper1821/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper1821/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper1821/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper1821/Authors" ], [ "~Yuhui_Xu2" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"The proposed paper presents low-rank compression method for DNNs. This topic has been around for a while, so the contribution is limited. Lebedev et. al paper in ICLR 2015 used CP-factorization to compress neural networks for Imagenet classification; in 2019, the idea has to be really novel in order to be presented on CIFAR datasets. The latency is not analyzed.\\nSo, I agree with reviewers.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Reply to reviewer #3\", \"comment\": \"Thank you for your thoughtful comments.\\n\\nWe will investigate or compare with those methods you suggested and also consider to do experiments on larger datasets as our future works. 
\nWe have revised some notations according to your suggestions.\n\n>In their deduction of full-rank-low-rank model joint training, .....\n$U_r^{(\\ell)}{U_r^{(\\ell)}}^T$ is not $I_r$ but a projection matrix onto the rank-$r$ subspace (not to be confused with ${U_r^{(\\ell)}}^T U_r^{(\\ell)}=I_r$).\"}", "{\"title\": \"Reply to reviewer #2\", \"comment\": \"Thank you for your thoughtful comments. We reply in order.\n\n>1, 2, 3, and 5. Thank you for suggesting. We will investigate or compare with those methods as our future works.\n\n>4. It is based on heuristics that the proportion of # of parameters and MACs in each layer could be a selection criterion for efficiently reducing the complexity of the entire network. It was experimentally better to use both $M$ and $P$ than to use either.\n\n>6. A corresponding result for $\\lambda = 0$ (regular loss) is shown in Figure 3 (\"infer (uni)\"), which is good for the full-rank model (right-most point in Figure 3) but poor for reduced models. In the case of $\\lambda = 1$, we minimize a loss only for the low-rank network, whose ranks are randomly determined in each iteration, but the result was poor. We consider this is because the randomness is too strong to learn a model.\n\n>7. At least, the method has an effect on the distribution of the singular values. As shown in Figure 2 (right most), singular values with the proposed loss are smaller than those with the regular loss, meaning that approximation errors could be suppressed more than with the regular loss.\n\n>8. BN correction is needed only for inference. As shown in Figure 3, the proposed BN correction is not better in terms of accuracy than simply computing mean & var for each model after training. However, for recomputation, the model sizes must be fixed in advance and the computation is required for every model to be used. 
Our method requires the computation only one time for full-rank model and can analytically produce mean & var for the model with any rank.\\n\\n>9. Thank you for suggesting. We will take it into consideration.\\n\\n>10. Different from an original VGG-16, VGG-15 has only one FC layer (other than the last one for classification), which has only 512 nodes. Therefore, the proportion of # of parameters in the FC layer is low.\\n\\n>11. In the preliminary experiment on the CIFAR datasets, we confirmed that the performance gap is negligible for the interval of 2.\"}", "{\"title\": \"Reply to reviewer #1\", \"comment\": \"Thank you for your thoughtful comments. We reply in order.\\n\\n>1. We have described in the revised version that $m$ and $n$ are $K_w K_h C_{in}$ and $C_{out}$, respectively, for CNNs.\\n\\n>2. As you commented, latency would be better performance measure for practical evaluation, which remains as one of future tasks.\\n\\n>4. Because we input each mini-batch to full- and low-rank networks, the computation time for forward and back prop. will be increased. A weight matrix of each layer in the low-rank network is generated by applying SVD to the full-rank network and other parameters are shared with the full-rank network. Thus, the number of total parameters is not increased. We revised Figure.1 to better explain our method.\\n\\n>5. Currently, we don't have appropriate answer but it may depend on the target device.\\n\\n>6, 7. Two iterations between SVD. This means that $U$, $S$, and $V$ in low-rank network are updated once in every two iterations while $W$ in full-rank network are updated every iteration. A method in [1] uses trace-norm regularizer to obtain low-rank weight matrices. We consider it is only suitable for a resulting low-rank model. Therefore, the performance of full-rank model is not explicitly compensated while our method explicitly does by minimizing losses for both of full- and low-rank network.\\n\\n>8. Thank you for suggesting. 
We will investigate or compare with those methods as our future works.\n\n>9. We did not compress the last FC layer with uniform reduction (\"uni\") but we did with our criterion (\"c1\" and \"c1c2\"). Please see the right side of Figure 4 (it is slightly different). We consider this is because the last FC layer is important for classification.\n\n>10. As shown in Figure 3, the proposed BN correction is not better in terms of accuracy than simply computing mean & var for each model after training. However, for recomputation, the model sizes must be fixed in advance and the computation is required for every model to be used. Our method requires the computation only once for the full-rank model and can analytically produce mean & var for the model with any rank.\n\n>11. We use $x$ as a single input in each layer. We revised the notation on page 3.\n\n>12. It should be experimentally determined as shown in Figure 2.\"}", "{\"title\": \"Reply to all reviewers\", \"comment\": \"We would like to thank all the reviewers for their careful comments.\nFirst of all, let us comment comprehensively. \nWe will reply to individual comments not covered by this post.\n\n# Contributions\nAccording to the comments from the reviewers, the novelty of our method has been questioned due to the utilization of classical low-rank matrix factorization techniques (i.e. SVD).\nHowever, our method is not for compaction as in the literature [1], which aims to compress the model to a specific size, but for scalable usage in which the size of DNNs is changed without retraining.\nAlthough research on this purpose (scalability of the model) has been done by Yu et al. (2019), there are some points to be improved.\nWe propose a different approach based on low-rank matrix factorization, which is, to the best of our knowledge, novel at least for this purpose (scalability of the model). 
\\nOn the algorithmic side, we believe that there is a novelty in our training procedure that explicitly minimizes losses for both of full- and low-rank network.\", \"the_main_contributions_of_our_work_are_as_follow\": \"1. In contrast to a work by Yu et al. (2019), we do not directly reduce the width but instead reduce the redundant basis obtained via SVD, which prevents the feature map in each layer from losing important features.\\n2. We propose a training method, which is designed not only to keep the performance of full-rank network but also to improve that of multiple low-rank networks (to be used at the inference phase). \\n\\n# Relation to low-rank based compression methods\\nWhile low-rank compression methods achieve good performance with a model of specific size, we need a single model that achieve good performance in multiple sizes which are to be selected at the inference phase. \\nWe consider that our training method (in the second contribution) is effective to achieve the purpose and is different from other methods that impose a certain low-rankness in training [1]. \\nIn addition, although we used simple channel decomposition by SVD, the proposed scheme does not depend on the decomposition method. \\nTherefore, the other decomposition methods such as spatial decomposition (Ioannou et al., 2016) and tensor decomposition (Kim et al., 2016) can be applied.\\nWe will investigate or compare with those methods as our future works. \\n\\nWe have revised the paper to better explain the details of our concept (Figure 1 in particular) and added ablation studies for a contribution 1 in appendix E.\\n\\n[1] Compression-aware training of DNN, Alvarez and Salzmann. 
NeurIPS 2017.\n\nThank you.\nAuthors.\"}", "{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper proposes a method to modify the computing requirements of a trained model without compromising the accuracy and without the need for retraining (with new requirements). To this end, the algorithm focuses on factorizing the models and uses a 2-branch training process to train low-rank versions of the original model (to minimize the accuracy drop).\n\nDifferent from other approaches, the algorithm claims to exploit the importance of each layer when reducing the compute.\", \"comments\": [\"Figure 2 and related numbers are slightly misleading. The paper focuses on CNNs while these numbers and the figure are for FC. M changes significantly when using convolutions. It would be great to clarify this all over the text as M increases significantly when using (at least) 3x3 convolutions.\", \"One missing thing for me is taking into account the latency rather than the number of parameters. While factorization may reduce the number of parameters (considering the rank is sufficiently low), the number of layers and therefore data movements increase and so does the latency. Some analysis on this can be found in [1] where the paper trains a network with low-rank promoting regularization. I missed having [1] and other similar approaches in the related work and how the proposed method compares to those directly promoting low-rank solutions.\", \"The approach would be sound if the algorithm does not need to work on the factorized version of the layer. That would bring direct benefits to inference time.\", \"On the algorithmic side, it is not clear to me how the \\\"2-branches\\\" are trained and what parameters are shared. 
This seems to involve more compute, right? How is this better than aiming at the lowest-rank possible?\", \"The complexity-based criterion is interesting although only uses FLOPS as a proxy. How would this translate in practice when the latency is not directly represented by the FLOPS (given the parallelization).\", \"During the learning, it is not clear to me how the process is implemented and the scalability of this approach. The paper suggests computing SVD per iteration is infeasible. How many iterations are between SVD? and how results are reused. How this is different from [1] where the authors used truncated SVD to promote low-rank every epoch?\", \"I need clarification on the need of training full-rank and low-rank (end of page 4). If full-rank does not actually provide better accuracy (see [1]), then, why do we need to rely on that?\", \"Section 3 focuses on works relying on retraining. It would be nice to see how the proposed method compares to those not considering retraining to improve accuracy.\", \"Not clear to me what is the take-home message with Figure 4. Is the resulting rank per layer enough to represent a big compression? Why not compressing the last layer. the compression is limited as the factorization does not promote sparsity when combining the basis (Sparsity on V). Thus, the size after factorization is the same as the original.\", \"The BN correction does not seem to contribute. Experiments suggest: our method is effective. Not very appealing as a contribution.\"], \"minor_details\": [\"x is used as a single input (page 2) and as an entire dataset (page 3)?\", \"How do we set the \\\\alpha values?\", \"[1] Compression-aware training of DNN, Alvarez and Salzmann. 
NeurIPS 2017\"]}", "{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": [\"This paper proposes to reshape the weights of the layers of deep neural networks and parametrize them with a low-rank matrix decomposition (SVD). The rank is optimized using two criterion (error-based and complexity-based). Since the decomposition is applied post-hoc, the authors propose to correct the parameters of the batch norm analytically. The authors propose to jointly optimize a loss on the full network and the low-rank version. Experiments are done on CIFAR 10 an 100 with a VGG-15 and ResNet-34 architecture.\", \"Some of the ideas are interesting and would be worth developping further. However, the paper in the current state cannot be accepted for the following reasons: (1) the novelty is low, this very type of decomposition is already widely studied (2) the paper is not clear as to what the contributions are, and why they are justified, theoretically or empirically, (3) the review and comparison with the state-of-the-art is lacking and (4) the experimental setup is simplistic and not convicing. (5) overall the paper is imprecise.\", \"Main comments\", \"Applying SVD to the matricized weights of deep neural networks is not new. Actual contributions need to be separated from existing works.\", \"The related word needs to be reviewed. Many references are missing. In particular, the proposed method could be considered as a special case of tensor based methods.\", \"The references that *are* listed in the related work are not properly reviewed: the authors aim to not compare with them claiming that they require re-training. 
Lebedev et al provide a method that works both for end-to-end training and post-hoc, by applying tensor decomposition to the trained weights. Fine-tuning is optional and done to recover performance.\", \"How was the complexity-based criterion obtained? Why use both M and P, since M includes P? How does the proposed criterion compare to simple measures, e.g. explained variance?\", \"The authors should compare with other compression techniques: layer-wise compression (e.g. Lebedev et al, Speeding-up convolutional neural networks using fine-tuned cp-decomposition, ICLR 2015) or full network compression (e.g. Kossaifi et al, T-Net: Parametrizing Fully Convolutional Nets with a Single High-Order Tensor, CVPR 2019).\", \"Missing experiments\", \"The proposed learning loss needs to be compared with the original one to demonstrate any potential performance improvement. Currently, it is not clear whether it is actually helping. In other words, there should also be a comparison with \\\\lambda = 1 or 0.\", \"Does the proposed loss have an effect on the selected rank? On the actual rank of the weights? On the distribution of the eigenvalues?\", \"The BN correction needs to be experimentally motivated: since the network is trained with a loss that incorporates the low-rank network, is that needed? Does the proposed loss affect performance? How does performance change with and without that BN correction?\", \"Experiments on CIFAR 10-100 are not sufficient to be convincing, the authors should ideally try a more realistic, large-scale dataset, e.g. ImageNet.\", \"VGG-15 is not convincing to show-case compression, as more than 80% of the parameters are in the fully-connected layer\", \"The authors assume that the SVD decomposition of the weights does not change significantly at each step: is there any empirical evidence supporting this assumption? 
This most likely depends on the experimental setup (batch-size, learning rate, etc).\"]}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"This paper proposes to compress deep neural networks with SVD decomposition, which, however, was published 6 years ago in this paper to decompose FC layers:\nXue, Jian, Jinyu Li, and Yifan Gong. \"Restructuring of deep neural network acoustic models with singular value decomposition.\" Interspeech. 2013.\nFor convolutional layers, Tucker decomposition, which is a high-order SVD, is apparently a better choice (Kim et al. (2016)). They should at least compare their method with this one.\n\nIn their deduction of full-rank-low-rank model joint training, W_r^{(l)} = U_r^{(l)}{U_r^{(l)}}^T W^{(l)} is used without any explanation. Since U_r^{(l)}{U_r^{(l)}}^T = I_r, W_r^{(l)} will be the first r rows of W^{(l)} or the first r cols of W^{(l)}. It cannot be treated as an approximation of W^{(l)}. In other words, the foundation of their training is wrong, which makes their experimental results unconvincing.\n\nThey conduct their experiments with CIFAR-10/100, which are too small for VGG-15 and ResNet-34. A larger dataset would be better.\n\nThe writing of this paper is not good. For example:\n1. In proposition 2.1, what does \"y\" represent?\n2. For the Error-based criterion, what is its difference from selecting rank according to singular values?\n3. 
What are P^{(i)} and M^{(i)} in the complexity-based criterion?\n\nIn conclusion, I will give a weak reject.\"}", "{\"title\": \"Thank you for your comments.\", \"comment\": \"The paper [1] proposes a novel training method to improve the performance of the low-rank network:\n- apply SVD in training for every $m$ iterations and reduce ranks according to a fixed ratio $e \\in [0, 1]$.\n- impose trace norm regularization with a parameter $\\lambda$ to facilitate low-rankness.\n\nThe paper [1] shows a theoretical guarantee of convergence to a specific rank, where the resulting rank depends on $e$. That is, the rank of the network is fixed after training and thereby is not assumed to be flexibly changed.\n\nOur training method explicitly minimizes losses for both of full- and low-rank network, which is designed not only to keep the performance of full-rank network but also to improve that of multiple low-rank networks (whose ranks are randomly determined in training). We consider the method helps the network perform well for the multiple ranks to be used after training.\n\nAnyway, we will cite it as related work.\nWe thank you again.\"}", "{\"comment\": \"Hi,\n\nThanks for sharing your interesting work!\nThe biggest contribution of this work is the new scalable low-rank decomposition. However, I think you may have missed a relevant reference, Trained Rank Pruning [1], which proposes a similar procedure (it embeds the low-rank decomposition in the training).\n\n[1] Trained Rank Pruning for Efficient Deep Neural Networks https://arxiv.org/abs/1812.02402v2\", \"title\": \"About the training procedure\"}" ] }
rkgTdkrtPH
NoiGAN: NOISE AWARE KNOWLEDGE GRAPH EMBEDDING WITH GAN
[ "Kewei Cheng", "Yikai Zhu", "Ming Zhang", "Yizhou Sun" ]
Knowledge graphs have gained increasing attention in recent years for their successful application to numerous tasks. Despite the rapid growth of knowledge graph construction, knowledge graphs still suffer from severe incompleteness and inevitably involve various kinds of errors. Several attempts have been made to complete knowledge graphs as well as to detect noise. However, none of them unifies these two tasks, even though they are inter-dependent and can mutually boost each other's performance. In this paper, we propose to jointly address these two tasks with a unified Generative Adversarial Network (GAN) framework to learn noise-aware knowledge graph embeddings. Extensive experiments demonstrate that our approach is superior to existing state-of-the-art algorithms with regard to both knowledge graph completion and error detection.
[ "Knowledge graph embedding", "Noise aware" ]
Reject
https://openreview.net/pdf?id=rkgTdkrtPH
https://openreview.net/forum?id=rkgTdkrtPH
ICLR.cc/2020/Conference
2020
{ "note_id": [ "preNA0A6cy", "ryxPtR_2sB", "BklSeTd3jS", "rkxEN3dhsB", "HklhuRpxiH", "ryl29S9x5H", "S1l96w7HKH" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798733233, 1573846655093, 1573846252919, 1573846060188, 1573080691873, 1572017556345, 1571268546390 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1820/Authors" ], [ "ICLR.cc/2020/Conference/Paper1820/Authors" ], [ "ICLR.cc/2020/Conference/Paper1820/Authors" ], [ "ICLR.cc/2020/Conference/Paper1820/AnonReviewer4" ], [ "ICLR.cc/2020/Conference/Paper1820/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper1820/AnonReviewer1" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"This paper proposes a noise-aware knowledge graph embedding (NoiGAN) by combining KG completion and noise detection through the GANs framework. The reviewers find that the idea is interesting, but the comparison to SOTA is largely missing. The paper can be improved by addressing the reviewer comments.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Responds to Review #1\", \"comment\": \"We thank the reviewer for the constructive reviews. We addressed the questions and concerns of the reviewer accordingly in the following.\\n\\n(1) Thanks to the reviewer for pointing out the problem of data leakage in FB15K and WN18. We have conducted the experiments on the FB15K-237 and WN18RR instead. Please find the result in Table 3 in our latest version of the paper.\\n\\n(2) Thanks to the reviewer for pointing the issue of insufficient baselines. We have added more baseline methods, including (1) KGE models (e.g., DistMult [5] and RotatE [4]), (2) robust KGE models (e.g., attention based method [6]) and (3) KGE models with GAN (e.g., KBGAN [3]). 
In addition, to show that our NoiGAN can be easily generalized to various KGE models, RotatE is also added as a score function for NoiGAN. Please find the results in Table 3 in our latest version of the paper. The results show that both NoiGAN-TransE and NoiGAN-RotatE consistently and significantly outperform all the baseline methods in terms of robustness. \\n\\n(3) The goals of NoiGAN and KBGAN are totally different. KBGAN incorporates a GAN for better negative sampling to improve the quality of embeddings. The discriminator of the GAN is their final KGE model. However, in our case, NoiGAN utilizes a GAN to determine whether a triple is noisy, and it is independent of our KGE model. Different from KBGAN, the discriminator in our GAN is a binary classifier, which is used to learn a confidence score for each triple to enable NoiGAN to cope with noisy training data. To further show the difference between NoiGAN and KBGAN, we have added KBGAN as a baseline and report the results in Table 3 in our latest version of the paper. The results indicate that KBGAN cannot cope with noisy training data.\\n\\n(4) To analyze the efficiency of NoiGAN, we compare the total training time until convergence against the baseline methods on FB15K-237 with 100% noise as follows.\\n\\nMethods Total training time until convergence (min)\\nTransE [8] 60\\nCKRL [7] 150\\nDistMult [5] 40\\nRotatE [4] 60\\nKBGAN [3] 30\\nattention-based method [6] 3600\\nNoiGAN-TransE 60\\n\\nWe can observe that our NoiGAN does not cost much more time than the other baseline methods.\\n\\n(5) Thanks to the reviewer for pointing out the issue of not reporting NoiGAN performance with 0% noise. We have added these experiments as shown in Table 3 in our latest version of the paper. We can observe that NoiGAN-RotatE has almost the same performance as its variant RotatE on FB15K-237 and WN18RR. It performs even better than RotatE on YAGO3-10. 
The major reason could be that YAGO3-10 contains more noise than FB15K-237 and WN18RR. Our NoiGAN-RotatE shows its superiority in this situation. \\n\\n[1] \\u201cGraphGAN: Graph Representation Learning with Generative Adversarial Nets.\\u201d AAAI'18. \\n[2] \\u201cIrgan: A minimax game for unifying generative and discriminative information retrieval models.\\u201d SIGIR'17. \\n[3] \\u201cKbgan: Adversarial learning for knowledge graph embeddings.\\u201d NAACL\\u201918. \\n[4] \\u201cRotatE: Knowledge Graph Embedding by Relational Rotation in Complex Space.\\u201d ICLR'19. \\n[5] \\u201cEmbedding Entities and Relations for Learning and Inference in Knowledge Bases.\\u201d ICLR'15. \\n[6] \\u201cLearning Attention-based Embeddings for Relation Prediction in Knowledge Graphs.\\u201d ACL\\u201919\\n[7] \\u201cDoes william shakespeare really write hamlet? knowledge representation learning with confidence.\\u201d AAAI\\u201918.\\n[8] \\u201cTranslating embeddings for modeling multi-relational data.\\u201d NeurIPS\\u201913\"}", "{\"title\": \"Responds to Review #2\", \"comment\": \"We thank the reviewer for the constructive reviews. We addressed the questions and concerns of the reviewer accordingly in the following.\\n\\n(1) Thanks to the reviewer for pointing the issue of insufficient baselines. We have added more baseline methods, including (1) KGE models (e.g., DistMult [5] and RotatE [4]), (2) robust KGE models (e.g., attention based method [6]) and (3) KGE models with GAN (e.g., KBGAN [3]). In addition, to show that our NoiGAN can be easily generalized to various KGE models, RotatE is also added as score function for NoiGAN. Please find the result in Table 3 in our latest version of the paper. The results show that both NoiGAN-TransE and NoiGAN-RotatE consistently and significantly outperform all the baseline methods in terms of robustness. \\n \\n(2) Thanks to the reviewer for pointing out the problem of data leakage in FB15K and WN18. 
We have conducted the experiments on FB15K-237 and WN18RR instead. Please find the results in Table 3 in our latest version of the paper.\\n\\n(3) Thanks to the reviewers for pointing out more related works. In [1], pointed out by the reviewer, although the authors include real-world noise in the Biological Knowledge Graph, they still introduce random noise into FB15k-237, which is the same as what we do. The major reason is that real-world noise is unavailable for the benchmark knowledge graph datasets, including FB15k-237, YAGO3-10 and WN18RR. Some of the other related works also use the same strategy to introduce random noise, such as [7], [8], [9].\\n\\n(4) We apologize for the unclear claim. We agree that well-trained KGE models are widely used for denoising when constructing a knowledge graph, as in [2] mentioned by the reviewer. However, in order to obtain a reliable, well-trained KGE model, the training data has to be clean. The major reason is that current KGE models highly rely on high-quality training data and thus lack robustness to noise [7]. Given the fact that a real knowledge graph will inevitably include many kinds of errors, such as ambiguous, conflicting, erroneous and redundant information, it is difficult for us to find an ideal clean dataset to train KGE models. To address this problem, in this paper, we propose a novel technique that enables current embedding models to cope with noisy data.\\n\\n[1] \\u201cInterpretable Graph Convolutional Neural Networks for Inference on Noisy Knowledge Graphs.\\u201d Workshop at NeurIPS\\u201918.\\n[2] \\u201cKnowledge Vault: A Web-Scale Approach to Probabilistic Knowledge Fusion.\\u201d KDD\\u201914.\\n[3] \\u201cKbgan: Adversarial learning for knowledge graph embeddings.\\u201d NAACL\\u201918. \\n[4] \\u201cRotatE: Knowledge Graph Embedding by Relational Rotation in Complex Space.\\u201d ICLR'19. 
\\n[5] \\u201cEmbedding Entities and Relations for Learning and Inference in Knowledge Bases\\u201d ICLR'15. \\n[6] \\u201cLearning Attention-based Embeddings for Relation Prediction in Knowledge Graphs\\u201d, ACL\\u201919.\\n[7] \\u201cSparsity and noise: Where knowledge graph embeddings fall short.\\u201d EMNLP\\u201917.\\n[8] \\u201cDoes william shakespeare really write hamlet? knowledge representation learning with confidence.\\u201d AAAI\\u201918.\\n[9] \\u201cConfidence-aware negative sampling method for noisy knowledge graph embedding.\\u201d ICBK\\u201918.\"}", "{\"title\": \"Responds to Review #4\", \"comment\": \"We thank the reviewer for the constructive reviews. We addressed the questions and concerns of the reviewer accordingly in the following.\\n\\n(1) Using policy gradient to generate discrete data with a GAN was first proposed in [7] and [2] and showed great performance in information retrieval in [2]. Afterward, this strategy has been widely adopted to learn graph representations, e.g., [1] and [3]. Following [1], [2], [3], [7], we adopt the same strategy in our work. We agree that GANs are unstable and hard to train. Fortunately, in our work, this doesn\\u2019t cause much trouble. 
To show that our model is stable and our results are easy to reproduce, we train NoiGAN-RotatE (soft) with the same parameters for 3 times on FB15K-237 with 70% noise and report the results on test dataset as follows:\\n\\nMRR HITS@1 HITS@3 HITS@10\\n0.279 0.179 0.320 0.475\\n0.279 0.179 0.319 0.477\\n0.279 0.179 0.319 0.474\\n\\nWe can observe that the results are almost the same, which shows the stability of our model.\\n\\n(2) To study the effect of the percentage of triples as positive training examples, we also run NoiGAN-RotatE (soft) with the percentage of triples as positive training examples as 10% 20% 40% on FB15K-237 with 70% noise, the result is as follows:\\n\\nPercentage of triples MRR HITS@1 HITS@3 HITS@10\\n10% 0.279 0.179 0.320 0.475\\n20% 0.278 0.179 0.318 0.473\\n40% 0.279 0.180 0.318 0.475\\n\\nWe can observe that the variation among the results is relatively small. It indicates that our NoiGAN is less sensitive to the percentage of triples as positive training examples. \\n\\n(3) Thanks to the reviewer for pointing out this issue. We have added more baseline methods, including (1) KGE models (e.g., DistMult [5] and RotatE [4]), (2) robust KGE models (e.g., attention based method [6]) and (3) KGE models with GAN (e.g., KBGAN [3]). In addition, to show that our NoiGAN can be easily generalized to various KGE models, RotatE is also added as score function for NoiGAN. Please find the results in Table 3 in our latest version of the paper. The results show that both NoiGAN-TransE and NoiGAN-RotatE consistently and significantly outperform all the baseline methods in terms of robustness. \\n\\n[1] \\u201cGraphGAN: Graph Representation Learning with Generative Adversarial Nets.\\u201d AAAI'18. \\n[2] \\u201cIrgan: A minimax game for unifying generative and discriminative information retrieval models.\\u201d SIGIR'17. \\n[3] \\u201cKbgan: Adversarial learning for knowledge graph embeddings.\\u201d NAACL\\u201918. 
\\n[4] \\u201cRotatE: Knowledge Graph Embedding by Relational Rotation in Complex Space.\\u201d ICLR'19. \\n[5] \\u201cEmbedding Entities and Relations for Learning and Inference in Knowledge Bases\\u201d ICLR'15. \\n[6] \\u201cLearning Attention-based Embeddings for Relation Prediction in Knowledge Graphs\\u201d, ACL\\u201919.\\n[7] \\u201cSeqGAN: Sequence Generative Adversarial Nets with Policy Gradient\\u201d, AAAI\\u201917.\"}", "{\"experience_assessment\": \"I have read many papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #4\", \"review\": \"This paper presents a joint learning framework based on GANs for tackling both knowledge graph completion and noise detection simultaneously. Existing works only deal with each task independently and do not investigate the benefits of coping with both tasks together. The paper is well motivated. To achieve this, the paper presents a GAN framework to train a noise-aware KG embedding as well as the generator and discriminator. The key connections between the two parts are through the confidence of a noisy triple and the generation of negative sample triples. The whole framework looks quite interesting and promising. Experimental results are provided to validate the effectiveness of the proposed model.\", \"there_are_two_key_concerns_about_this_paper\": \"1) It is well known that both GANs and RL are hard to train, not to mention combining them for joint training in order to deal with the non-differentiability issue of discrete triple generation. Are the results easy to reproduce?\\n\\n2) Choosing 10% of the triples as positive training examples seems very ad hoc. 
Have you studied the sensitivity of the system performance to the percentage of triples used as positive training examples?\\n\\n3) I don't know too much about methods for knowledge graph noise detection, so maybe one baseline - CKRL - is enough to represent the state of the art. However, for the knowledge graph completion task, TransE is the most simple baseline, and there are rich state-of-the-art methods in this line, such as [1]. It is not convincing to show the advantages of the proposed NoiGAN without such comparisons. \\n\\n[1] \\u201cRotatE: Knowledge Graph Embedding by Relational Rotation in Complex Space.\\u201d ICLR'19.\"}", "{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper proposes a GAN-oriented framework for training robust-to-noise neural link predictors. My main concern is that CKRL is the only baseline -- ignoring years of prior work in this space (see e.g. [1, 2]).\\nFurthermore, [2] shows that two of the three datasets used by the authors suffer from test triple leakage in the training set.\\n\\nFinally, the considered datasets do not really test for the presence of noise - the authors may want to check out e.g. 
https://arxiv.org/abs/1812.00279 (there are several works in this space, all of which were systematically ignored by this paper).\\n\\nFinally, the authors claim neural link predictors were never used for denoising, but actually [3] uses them to learn a prior distribution over triples in a probabilistic DB setting.\\n\\n\\n[1] https://arxiv.org/abs/1806.07297\\n[2] https://arxiv.org/abs/1707.01476\\n[3] https://ai.google/research/pubs/pub45634\"}", "{\"rating\": \"3: Weak Reject\", \"experience_assessment\": \"I have published in this field for several years.\", \"review_assessment\": \"_thoroughness_in_paper_reading: I read the paper thoroughly.\", \"title\": \"Official Blind Review #1\", \"review\": \"This paper proposes a novel noise-aware knowledge graph embedding (NoiGAN) by combining KG completion and noise detection through the GANs framework. More specifically, NoiGAN repeatedly utilizes a GAN model to 1) approximate the confidence score for facts identifying reliable data (discriminator) and 2) generate more challenging negative samples (generator). Then, it uses this confidence score and negative samples to train a more accurate link prediction model. The authors validate the proposed model through several experiments.\\n\\nThis paper reads well and the results appear sound. I personally find the idea of incorporating a confidence score into a link prediction model to achieve a more accurate model very interesting. Furthermore, the provided experiments support their intuition and arguments, outperforming the considered baselines.\\n\\nAs for the drawbacks, I find the baselines considered in this work outdated, missing many SOTA and related works in link prediction and noise detection [1, 2, 3, 4, 5]. Further, I believe this work needs more experimental results and an ablation study capturing different aspects of the presented method. 
My concerns are as follows:\\n\\n\\u2022\\tConsidering the existing reverse relation issue in FB15K and WN18, I suggest conducting the experiments on the FB15K-237 and WN18RR from [6] instead. \\n\\u2022\\tI suggest considering more recent link prediction models as baselines.\\n\\u2022\\tI am wondering if the only difference between NoiGAN and KBGAN [7] is incorporating the confidence score in the link prediction loss?\\n\\u2022\\tConsidering the fact that NoiGAN repeatedly retrains the GAN and the link prediction model, I suggest providing a comparison of computational complexity.\\n\\u2022\\tI am wondering if NoiGAN can only work with prior knowledge of noisy triples in the KG? If not, why didn\\u2019t you report NoiGAN performance with 0% noise in Table 3?\\n\\u2022\\tI find utilizing few examples to evaluate the power of the discriminator in distinguishing noisy triples (Table 4) not satisfactory at all. I suggest experimenting with more data and providing the per-relation breakdown performance of the discriminator.\\n\\nOverall, although I find the proposed model quite novel and interesting, the paper needs more experimental results to validate the idea.\\n \\n[1] Pinter, Yuval, and Jacob Eisenstein. \\\"Predicting Semantic Relations using Global Graph Properties\\\".\\n[2] Nathani, Deepak, et al. \\\"Learning Attention-based Embeddings for Relation Prediction in Knowledge Graphs\\\". \\n[3] Bala\\u017eevi\\u0107, Ivana, Carl Allen, and Timothy M. Hospedales. \\\"TuckER: Tensor Factorization for Knowledge Graph Completion\\\".\\n[4] Sun, Zhiqing, et al. \\\"Rotate: Knowledge graph embedding by relational rotation in complex space\\\".\\n[5] Pezeshkpour, Pouya, Yifan Tian, and Sameer Singh. \\\"Investigating Robustness and Interpretability of Link Prediction via Adversarial Modifications\\\".\\n[6] Dettmers, Tim, et al. \\\"Convolutional 2d knowledge graph embeddings.\\\", AAAI-18.\\n[7] Liwei Cai and William Yang Wang. 
\\u201cKbgan: Adversarial learning for knowledge graph embeddings\\u201d.\"}" ] }
ByxhOyHYwH
Fast Task Adaptation for Few-Shot Learning
[ "Yingying Zhang", "Qiaoyong Zhong", "Di Xie", "Shiliang Pu" ]
Few-shot classification is a challenging task due to the scarcity of training examples for each class. The key lies in generalization of prior knowledge learned from large-scale base classes and fast adaptation of the classifier to novel classes. In this paper, we introduce a two-stage framework. In the first stage, we attempt to learn task-agnostic features on base data with a novel Metric-Softmax loss. The Metric-Softmax loss is trained against the whole label set and learns more discriminative features than episodic training. Besides, the Metric-Softmax classifier can be applied to base and novel classes in a consistent manner, which is critical for the generalizability of the learned features. In the second stage, we design a task-adaptive transformation which adapts the classifier to each few-shot setting very quickly, within a few tuning epochs. Compared with the existing fine-tuning scheme, the scarce examples of novel classes are exploited more effectively. Experiments show that our approach outperforms current state-of-the-art methods by a large margin on the commonly used mini-ImageNet and CUB-200-2011 benchmarks.
[ "Few-Shot Learning", "Metric-Softmax Loss", "Fast Task Adaptation" ]
Reject
https://openreview.net/pdf?id=ByxhOyHYwH
https://openreview.net/forum?id=ByxhOyHYwH
ICLR.cc/2020/Conference
2020
{ "note_id": [ "tF-ejoOfDt", "BklJfCltiS", "BJlI_uHOsS", "HyeZeYzdor", "HyekWIfdjS", "HylaQFF8cH", "r1ewpj31cB", "H1ebjwJUtH", "Hkx-s9HBtr", "B1eyZUMEYS", "H1g2MBMEKr", "Byx6mHd7FH", "HJgM7jD7FS", "HylyzIxXtB", "S1gOz-xXKS", "Syxy3okmKH", "BJlP3iMzKS", "SylrfmMzYB", "rkgj2klftS", "HJgeCaL-Fr", "rJgj3NQ-FH", "SylkJ5beYH", "Skl9PUQs_H", "B1g2f2MjuB", "rkl9rezjuB", "HyxbTadFdB", "rkxO2Vzw_B" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "comment", "official_comment", "official_comment", "comment", "comment", "official_review", "official_comment", "official_comment", "comment", "comment", "official_comment", "official_comment", "comment", "comment", "comment", "official_comment", "comment", "official_comment", "comment" ], "note_created": [ 1576798733202, 1573617159375, 1573570670031, 1573558505095, 1573557750765, 1572407588709, 1571961791422, 1571317657240, 1571277465263, 1571198454889, 1571198227768, 1571157285409, 1571154713526, 1571124743280, 1571123471574, 1571122087013, 1571068846824, 1571066637425, 1571057586871, 1571020232191, 1571005619072, 1570933207496, 1570612834191, 1570610196357, 1570607170436, 1570504120820, 1570346159663 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1819/Authors" ], [ "ICLR.cc/2020/Conference/Paper1819/Authors" ], [ "ICLR.cc/2020/Conference/Paper1819/Authors" ], [ "ICLR.cc/2020/Conference/Paper1819/Authors" ], [ "ICLR.cc/2020/Conference/Paper1819/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper1819/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper1819/Authors" ], [ "~Yue_Wang2" ], [ "ICLR.cc/2020/Conference/Paper1819/Authors" ], [ "ICLR.cc/2020/Conference/Paper1819/Authors" ], [ "~Cantona_ViVian1" ], [ "~Jinghan_Gao1" ], [ "ICLR.cc/2020/Conference/Paper1819/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper1819/Authors" ], [ 
"ICLR.cc/2020/Conference/Paper1819/Authors" ], [ "~Cantona_ViVian1" ], [ "~Alex_Matthew_Lamb1" ], [ "ICLR.cc/2020/Conference/Paper1819/Authors" ], [ "ICLR.cc/2020/Conference/Paper1819/Authors" ], [ "~Cantona_ViVian1" ], [ "~Bin_Liu4" ], [ "~Ning_Ma1" ], [ "ICLR.cc/2020/Conference/Paper1819/Authors" ], [ "~Ning_Ma1" ], [ "ICLR.cc/2020/Conference/Paper1819/Authors" ], [ "~Ning_Ma1" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"This paper develops a new few-shot image classification algorithm by using a metric-softmax loss for non-episodic training and a linear transformation to modify the model towards few-shot training data for task-agnostic adaptation.\\n\\nReviewers acknowledge that some of the results in the paper are impressive, especially in domain shift settings as well as with a fine-tuning approach. However, they also raise very detailed and constructive concerns on 1) the lack of novelty, 2) improper claims of contribution, and 3) an evaluation protocol inconsistent with the de facto ones in existing work. The authors' rebuttal failed to convince the reviewers with regard to a majority of the critiques.\\n\\nHence I recommend rejection.\", \"title\": \"Paper Decision\"}", "{\"title\": \"On the novelty of our approach and experimental settings\", \"comment\": \"Thanks for your comments and suggestions. We have revised our paper as noted in this comment (https://openreview.net/forum?id=ByxhOyHYwH&noteId=HyekWIfdjS ). We believe that the quality and convincingness of our work have been significantly improved. To address your concerns, in particular on the novelty of our approach and the experimental settings, see our explanations below.\\n\\n## Contributions of this work\\n\\n- Metric-Softmax Loss\\nWhat motivates us to design the Metric-Softmax loss is to 1) enjoy the strong feature learning capability of the Softmax loss and 2) preserve the consistency between training, fine-tuning and inference. 
Although it shares a similar form with existing losses, the key to the effectiveness of our approach lies in the overall learning pipeline, rather than the loss alone. The ablation studies in Section 5 clearly verify the positive impacts brought by both Metric-Softmax and FTA. In other words, we propose a complete learning framework, which is a composite of multiple techniques applied to the training and adaptation stages.\\n\\n- Fast Task Adaptation\\nBy transforming the features of novel images using the matrix $\\\\mathbf M$, we aim to learn a more compact and discriminative feature space for each new few-shot classification task. It can be easily implemented. Given d-dimensional feature vectors $\\\\mathbf h$, it is analogous to applying a fully connected layer to $\\\\mathbf h$. It should be noted that 1) the weight matrix $\\\\mathbf M$ is a square matrix such that the output dimension of $\\\\mathbf h$ is unchanged; 2) no bias is applied; 3) $\\\\mathbf M$ is initialized with an identity matrix. This design has two merits. On one hand, the strong features learned on the base data can be maximally preserved. On the other hand, tuning $\\\\mathbf M$ on the support images of novel classes further adapts the features to the current few-shot task, without suffering from overfitting as na\\u00efve fine-tuning does. The transformation is applied to both support and query images, which is distinct from existing works. Please see our responses to Reviewer #2 (A1, B4 and B7) for more detailed analysis.\\n\\n## Improved experiments\\n\\n- The setting that matters most for performance is the input image size. Originally we followed the setting of ResNet-10 in Chen et al. (2019), which turns out to be unusual for ResNet-12 in the literature. This issue has been fixed by rerunning the experiments of ResNet-12 with the 84x84 setting. The latest results have been updated in the current revision. 
On mini-ImageNet the accuracies are 58.03\\u00b10.48 and 80.73\\u00b10.44 for the 5-way 1-shot and 5-way 5-shot settings respectively, which are still competitive with existing state-of-the-art methods.\\n\\n## Other issues\\n\\n- Q: The scores for baseline methods are seemingly taken from the paper of Chen et al., who trained them themselves. This should be mentioned in the paper and in the caption.\", \"a\": \"Fixed by adding a note in the caption. Thanks for pointing this out.\\n\\n- More state-of-the-art methods have been added for comparison. See Table 1 for the latest benchmark.\"}", "{\"title\": \"Responses to reviewer #2\", \"comment\": \"Thanks for your thoughtful comments. Based on your suggestions, we have carefully revised our paper (https://openreview.net/forum?id=ByxhOyHYwH&noteId=HyekWIfdjS ). In particular, all experiments of ResNet-12 have been rerun with the new input image size (84x84) and the accuracy numbers have been updated in the latest version of the paper.\\n\\nTo further address your concerns, here are our explanations.\\n\\nA1. As you mentioned, the most related work to ours is Ravichandran et al. (2019) (https://arxiv.org/abs/1905.04398 ), which also learns the class representations followed by projecting the features. The major difference is that they apply the projection to the representative features derived from the support images only, while we apply the affine transformation g to the features of both support and query images. That is, we aim to learn a more compact and discriminative feature space for the full set of images of novel classes, rather than merely adjusting the class representations of novel classes. This is the key to explaining the observation that we improve the 5-shot setting more than the 1-shot setting. When there is 1 image per class, it is difficult to learn a class-level transformation given only single image-level information. When there are multiple (e.g. 
5) images per class, it is able to learn a class-level transformation that generalizes well to query images. This analysis is well supported by the latest results in the updated Table 1, where we achieve higher accuracy (80.73% vs. 77.46%) than Ravichandran et al. (2019) in the 5-shot setting. \\n\\nA2. We believe that reporting results on two datasets using three network backbones is fairly convincing. We will add experiments on more datasets in the future revision. \\n\\nA3. The input image size of ResNet-12 has been changed from 224x224 to 84x84 for a fair comparison with existing methods. And all the related experiments have been rerun using the new setting. The accuracy numbers have been updated accordingly (see Table 1, Table 4 and Figure 2). Originally we followed the setting of ResNet-10 in Chen et al. (2019), which turns out to be unusual for ResNet-12 in the literature. We apologize for the confusion, and would like to thank the reviewers and readers for pointing this issue out.\\n\\nA4. The 1-shot and 5-shot settings share the same backbone model. During evaluation, since different support and query image sets are sampled randomly in different episodes, we fine-tune the transforming matrix M for each episode independently, leading to different Ms for different few-shot classification tasks. In other words, the training procedure is identical for all tasks, what differs is the list of support images used for fine-tuning.\\n\\n\\nB1. The consistency between training and inference lies in the way to compute the class probability. In Softmax, the probability is computed by vector inner-product in training and Euclidean distance in inference. In Metric-Softmax they are all computed by Euclidean distance.\\n\\nB2. The reason is that we used the same scaling factor $\\\\alpha$ (0.25) for both TAT and fine-tuning. It works perfectly fine for TAT, but is too small for fine-tuning as the weights are randomly re-initialized in fine-tuning. 
In the revised version, we set $\\\\alpha$ to 15, which is the same as training on the base data, and obtain a more reasonable result (31.32% at the 25-th epoch).\\n\\nB3. As explained in B1, we design the Metric-Softmax loss to ensure the consistency between training and inference. During meta-training, the weight matrix W is learned from scratch. It can be interpreted as the centroids (or representations) of base classes only due to the definition of the classifier in Eq. (7). During fine-tuning, it can not be transferred to novel classes. Thus we use the centroids of novel classes as the initial weight for the classifier on novel classes.\\n\\nB4. By inspecting M, we found that indeed the change is trivial and it is close to the initial identity matrix. The diagonal elements of M range from 0.94 to 1.05, while the rest elements are of the magnitude of 1e-2. It indicates that a slight transformation to the base features already suffices.\\n\\nB5. Yes, you are right. It has been fixed in current revision.\\n\\nB6. We have rerun the experiments of ResNet-12 using the conventional 84x84 setting. The latest accuracies on mini-ImageNet are 58.03\\u00b10.48 and 80.73\\u00b10.44 for the 5-way 1-shot and 5-way 5-shot settings respectively, which are still competitive to other state-of-the-arts (see the updated Table 1).\\n\\nB7. Suppose the features of query images are fixed, it does not matter whether the cluster is shrunk. However, the transformation matrix is applied to both query and support features during inference. Therefore, learning a compact feature representation by minimizing the intra-class distance is helpful. Please refer to A1 for more detailed analysis.\\n\\nB8. More state-of-the-art methods have been added for comparison in the revised paper.\"}", "{\"title\": \"Revised paper available\", \"comment\": \"Thanks for your affirmative comment. Note that we have revised our paper to improve its quality and convincingness. 
Should you have any concerns, please check out the latest version and refer to our responses to the other reviewers.\"}", "{\"title\": \"Revised paper uploaded (fixing the wrong input image size issue)\", \"comment\": \"Dear reviewers and all,\\n\\nWe have uploaded a revised version of our paper to address the major issues raised by the reviewers and readers. The changes can be summarized as follows.\\n\\n1) The input image size of ResNet-12 has been changed from 224x224 to 84x84 for a fair comparison with existing methods. And all the related experiments have been rerun using the new setting. The accuracy numbers have been updated accordingly (see Table 1, Table 4 and Figure 2). Originally we followed the setting of ResNet-10 in Chen et al. (2019), which turns out to be unusual for ResNet-12 in the literature. We apologize for the confusion, and would like to thank the reviewers and readers for pointing this issue out.\\n\\n2) More state-of-the-art few-shot learning methods have been added for comparison as suggested by Reviewer #2.\\n\\n3) The latest accuracies for ResNet-12 on mini-ImageNet are 58.03\\u00b10.48 and 80.73\\u00b10.44 for the 5-way 1-shot and 5-way 5-shot settings respectively. 
Compared with the strong baseline MetaOptNet-SVM (Lee et al., 2019), we still achieve better accuracy in the 5-way 5-shot setting, confirming the effectiveness of the proposed approach.\\n\\n4) All the missing implementation details mentioned in this comment (https://openreview.net/forum?id=ByxhOyHYwH&noteId=H1g2MBMEKr ) have been added.\\n\\nWe believe that the quality and convincingness of the paper have been significantly improved.\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper develops a new few-shot image classification algorithm. It has two main contributions. The first one is to use a metric-softmax loss used to train on the meta-training dataset without episodic updates. The second is that the features learnt thereby are further modified using a linear transformation to fit the few-shot training data and the metric soft-max loss is again used for classifying the query samples. The authors provide experimental results for 5-way-1-shot and 5-way-5-shot testing on mini-Imagenet and CUB-200-2011 datasets.\\n\\nI think this paper is below the acceptance threshold. The reasons are:\\n\\n1. The contributions of this paper are marginal: both learning centroids for each meta-training class and projecting the few-shot features have been used before in published work (https://arxiv.org/abs/1905.04398). The empirical results are weaker than existing work (see for instance, https://arxiv.org/abs/1904.03758, https://arxiv.org/abs/1909.02729 etc.); also see #3 below.\\n2. The authors should provide experimental results on other few-shot learning datasets like tiered-Imagenet.\\n3. 
The image-size used here for Resnet-12 is 224x224, the authors should report results using 84x84 image size so that one can compare against existing literature fairly. Are the results for Resnet-12 so good because of the larger image size?\\n4. The training procedure is task-agnostic, why do you train a different model for the 1-shot and the 5-shot case?\\n\\nI will consider increasing my score if some of the concerns above are addressed. I am listing some more comments below which I would like the authors to consider.\\n\\n\\n1. Contributions: \\u201cconsistency between training and inference\\u201d, do you instead mean consistency between meta-training and few-shot training? There are no weight updates at inference time.\\n2. How essential is the metric-softmax loss? Training on the meta-training dataset without episodic updates has also been done in https://arxiv.org/abs/1909.02729. These authors seem to use standard soft-max training and perform standard fine-tuning, they report empirical performance that is significantly better than that in Table 4 and Figure 2. I am very skeptical as to why the accuracy of fine-tuning is only 21% in Figure 2.\\n3. Section 3.2 does not motivate or explain the metric-softmax loss. Why should one have the network learn the centroids of the meta-training dataset? Can you draw a TSNE of the centroids learnt during meta-training? The features of the support samples (or their transformations) can be the centroids of the few-shot classes in the prototypical loss so inference phase does not need these centroids.\\n4. I am not sure whether the matrix M is changed non-trivially during few-shot training. The weights W are already initialized to be the centroid of the features (eqn. 9). So the metric-softmax loss in eqn. 10 is expected to be small for the support samples after initialization. Why should the additional expression power afforded by M matter? There is no incentive for the network to change the matrix M. 
Can you show results on how much M changes from the identity?\n5. I believe the reported numerical results for LEO (Rusu et al. 2019) are for a WRN-28-10 architecture, not ResNet-12.\n6. The accuracy using Resnet-12 seems extremely high. I believe this is because the results reported in the literature, e.g., https://arxiv.org/abs/1904.03758, use images of size 84x84, not 224x224 as the authors here have used. Can you report results using 84x84 sized images?\n7. I don\u2019t understand the explanation at the end of Section 5. Since the prototypical loss is being used to classify the query datum, it should not matter whether the cluster is shrunk in the 5-shot case, or whether simply the distances between the clusters are increased as in the 1-shot case.\n8. Table 1 is quite incomplete, the authors should mention other existing few-shot classification results that are similar to the performance of this paper, e.g., https://arxiv.org/abs/1805.10123, among the ones listed above.\n9. The entries in Table 1 and 2 are not made bold appropriately. All entries with overlapping standard error should be bold.\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I carefully checked the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"# Summary\\n\\nThis paper deals with few-shot learning from a metric-learning perspective. The authors propose replacing the softmax loss, i.e. softmax + cross-entropy loss, with a so-called \\\"metric-softmax\\\" loss which imitates a Gaussian kernel RBF over class templates/weights. 
This loss is used in both stages of training, on base and on novel classes, and the authors argue that it helps learn more discriminative features while preserving consistency between train and test time.\nSecondly, the authors advance a task-adaptive transformation for stage 2 that maps the features from the previously learned feature extractor to a space which is easier to learn. The contributions are evaluated on the standard mini-ImageNet benchmark and on CUB-200-2011 individually and in domain shift mode.\n\n# Rating\nAlthough some of the results in the paper might look impressive, my rating for this work is reject for the following reasons (which will be detailed below):\n1) the main contribution, metric-softmax loss, is not novel. It has been used and described in multiple works in the past 1-2 years.\n2) a part of the evaluations and comparisons do not follow the usual protocol and are not fair\n3) the second contribution, Fast Task Adaptation (FTA), is not well described and it's unclear what it actually consists of, how it works, and how it was trained exactly.\n\n# Strong points\n- This paper deals with a highly interesting and relevant topic for ICLR.\n\n# Weak points\n\n## Contributions\n- This work ignores a large body of research in few-shot learning and metric learning aiming to improve the efficiency per training sample and feature discrimination. \nThe proposed loss can be traced back to Goldberger et al. [i] in NCA (Neighborhood Component Analysis). Prototypical Networks are derived from this work and hence similar to the metric-softmax loss. \nQi et al. [ii] point out that when h and W are l2-normalized (using the notations from this submission, eq. 7) maximizing their inner-product or cosine-similarity is equivalent to the minimization of the squared Euclidean distance between them. 
This leads to the loss from [ii], [iii], also known as the Cosine Classifier, which is also accompanied by a scaling factor or temperature as here. Other related works on improving softmax and selecting the representative weights for a class include center loss [iv], ring loss [v], and L-GM loss [vi].\nIn this light, the metric-softmax is actually not novel and can be found in several other contributions from last year.\n\n- In my opinion, it is difficult from the paper to understand what the fast adaptation module actually is. The authors describe $g$ as \\\"simply a zero-offset affine transformation\\\" $g(h)= M^T h$. In the implementation details we do not find out more about this module and we don't have more insights on what it is doing inside, other than a toy hand-drawn example in Figure 3. I find it difficult to assess.\n\n## Experiments\n- The authors evaluate 3 backbone architectures, Conv-4, ResNet-10 and ResNet-12. For the former they use 84 x 84 images, while for the latter two they use 224 x 224 images. The larger images are not standard in the few-shot ImageNet evaluation protocol. Data augmentation (jittering, flipping, etc.) is used here, while in most works it is not. Chen et al. are the first ones to introduce larger images and data augmentation and acknowledge that the large scores are due to this. \nTesting out a new configuration is not a problem as long as the baselines are evaluated in the same conditions. However, in this case they are not, and this is not visible in the captions of the tables and descriptions in the paper. Training a network with data-augmented images and/or higher-resolution images and comparing to baselines without data augmentation and images with 7 times fewer pixels, for sure does not allow seeing the true impact of the proposed method. I would advise to either evaluate in the usual mini-ImageNet settings, or implement a few representative and easy-to-train baselines, e.g. 
ProtoNets, Cosine Classifier [iii], in the same conditions as here and compare against. This should provide a better idea of the effectiveness of the proposed methods. \n\n\n## Other comments\n- the scores for baseline methods are seemingly taken from the paper of Chen et al. who trained them themselves. This should be mentioned in the paper and in the caption\n\n# Suggestions for improving the paper:\n1) Review the experimental section and make sure at least some of the baselines are trained in similar conditions as the proposed method, or alternatively evaluate the proposed methods in standard mini-ImageNet settings\n\n2) Provide additional insights, experiments and implementation details for FTA to make it easier to understand; there are some examples in the references below.\n\n\n\n# References \n[i] J. Goldberger et al., Neighbourhood components analysis, NIPS 2005\n[ii] H. Qi et al., Low-Shot Learning with Imprinted Weights, CVPR 2018\n[iii] S. Gidaris and N. Komodakis, Dynamic Few-Shot Visual Learning without Forgetting, CVPR 2018\n[iv] W. Wen et al., A Discriminative Feature Learning Approach for Deep Face Recognition, ECCV 2016\n[v] Y. Zeng et al., Ring loss: Convex Feature Normalization for Face Recognition, CVPR 2018\n[vi] W. Wan et al., Rethinking Feature Distribution for Loss Functions in Image Classification, CVPR 2018\"}", "{\"comment\": \"In our work, we use an input size of 84x84 for Conv-4 and 224x224 for ResNet-10 and ResNet-12, which follows the setting of Baseline++ (Chen et al., 2019) and SubspaceNet (Devos & Grossglauser, 2019). TADAM uses an input size of 84x84 for ResNet-12. We include TADAM in the table to make the comparison more comprehensive. 
To make it clear, we will mark the difference in input size in the revision.\", \"title\": \"Explanation on input image size\"}", "{\"comment\": \"I want to mention that in TADAM, the image resolution they use is 84x84 while in your paper, you use 224x224, right?\", \"title\": \"Input image size\"}", "{\"comment\": \"To answer your question in short, we learn g using the (modified) Metric-Softmax loss.\\n\\nTraining of g is analogous to training of the feature extractor using the Metric-Softmax loss, which aims to minimize the divergence between predicted classification scores to one-hot labels using the cross-entropy loss. The differences lie in\\n 1) the classifier to predict the scores, i.e. Eq. (10) vs. Eq. (7),\\n 2) the weights to update, i.e. updating g only vs. updating the whole network,\\n 3) the number of classes in the classifier, e.g. 5 vs. 64 for miniImageNet.\\nThis consistency of learning process is one of the factors that contributes to the performance improvement.\\n\\nThe transformation g is applied to the extracted features (Eq. (8)). Since the matrix M is a square matrix, the dimension of the feature is kept unchanged.\", \"title\": \"The detailed learning process of fast task adaptation\"}", "{\"comment\": \"To the reviewers and readers,\\n\\nBy double-checking the implementation after submission of the paper, we noticed a few missing details that may affect reproducibility of this work.\\n\\n1) In the Metric-Softmax classifier (Eq. (7)), the feature vector $\\\\mathbf h$ should be L2-normalized, which eases optimization in our experiments.\\n2) Analogously, in the modified Metric-Softmax classifier used during the fast task adaptation stage (Eq. (10)), both the transformed feature vector $g(\\\\mathbf h)$ and each column of the weight matrix $\\\\bar{\\\\mathbf W}$ should be L2-normalized.\\n3) For the scaling factor $\\\\alpha$ in Metric-Softmax, we use different values in the two stages. 
In the feature learning stage, it is set to 15 and 1 for mini-ImageNet and CUB-200-2011 respectively. In the fast task adaptation stage, it is set to 0.25 for the 1-shot setting and 2 for the 5-shot setting on both datasets. We would like to stress that these parameters are tuned strictly on the validation set of each dataset.\n\nWe apologize for these issues and will fix them in the following revision.\", \"title\": \"More implementation details that may affect the reproducibility of this work\"}", "{\"comment\": \"Thanks for your clarification. It is still not clear why a fixed feature extractor can work pretty well in the 1-shot case but direct fine-tuning cannot.\", \"title\": \"Re: Key factors to the success of the proposed framework\"}", "{\"comment\": \"All parts are clear in the paper, except the training process of layer g, which is the essential part of the fast adaptation stage. Would you mind explaining the process in detail? I am confused that you only mentioned g is trained with the cross-entropy loss. Do you have a classifier that decreases the dimension of the feature while training g?\", \"title\": \"How is the matrix M trained in stage 2?\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"8: Accept\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"Authors propose a new method for adaptation in a few-shot learning setting. Their method comprises two different steps; first they propose a new metric-softmax loss, which aims at improving the transferability of features pre-trained on base data to novel data. They achieve this via redefining the probability score calculating function, which in practice means they replace the exponent term found in the softmax loss with a Gaussian kernel-based radial basis function. 
This first step improves the feature learning process at large scale but does not solve the problems found when trying to fit arbitrary novel classes. At this point comes step two, where a fast task adaptation process is proposed, i.e. a Task-Adaptive Transformation based on an affine transformation. This method converges fast and is learnt from the support set, vs step 1 which is trained on the training/base set. Post-training, the affine transformation is applied to both support and query image sets.\nAuthors have been fairly detailed in their experimental processes and used different backbone models, routinely found in papers focusing on similar issues. They compare against 4 other state-of-the-art methods showing a significant improvement across all of them. They also demonstrate the superiority of the metric-softmax classifier vs. the softmax classifier and finally the overall superiority of the whole method proposed.\nSimilar results are obtained on domain shift settings as well as when comparing step 2 of this method with a fine-tuning approach. \nThe proposed method makes a good contribution towards reducing the problem of overfitting when very few examples of a new problem are available.\nI have read the rebuttal and think the paper could be a good addition to the programme.\"}", "{\"comment\": \"Yes, you are right. If we classify a query image by comparing its feature with features of support images, we may still get a reasonable accuracy, thanks to the generalizability of the feature extractor learned on large-scale base data. In the proposed TAT, we initialize the weights of the Metric-Softmax classifier using the mean features of support images as stated in Section 3.3 (Eq. (9)). In this way, we easily inherit a good starting point. And by fine-tuning the task-adaptive transformation g, further improvement is obtained, as shown in Figure 2. 
In other words, the combination of the proposed Metric-Softmax classifier together with TAT circumvents the problem of overfitting on extremely few samples, which is one of the major contributions of this work.\", \"title\": \"Key factors to the success of the proposed framework\"}", "{\"comment\": \"Thanks for your reminder. But the paper you mentioned uses different network backbones from ours. We report the performance of Conv-4, ResNet-10 and ResNet-12, while they use ResNet-18, ResNet-34 and WRN-28-10, which are deeper than ours and less commonly used in previous works. As shown in Chen et al. (2019), the size of the backbone matters. And the results of different backbones cannot be directly compared. Nevertheless, we will include it as a related work in the revision.\", \"title\": \"Different network backbones are used\"}", "{\"comment\": \"I noticed that a fixed feature extractor can work pretty well in the 1-shot case. Based on your reasoning, shouldn't a fixed feature extractor suffer from the same problem as fine-tuning?\", \"title\": \"Further questions\"}", "{\"comment\": \"At the very least it would be good to add the results to the table. It's not my paper, but it seems like your results are often (but not necessarily always) better: https://arxiv.org/abs/1907.12087\", \"title\": \"Another baseline that perhaps should be added or cited\"}", "{\"comment\": \"The proposed TAT differs from direct fine-tuning in two aspects. One is initialization of the Metric-Softmax classifier's weight matrix $\\\\mathbf W$. For direct fine-tuning, it is re-initialized randomly, while for TAT, it is derived from features of the support images, which makes it equivalent to a nearest-neighbor classifier. That is why the accuracy starts from random guessing (19.95%) for fine-tuning and a reasonable value (58.76%) for TAT. Better initialization leads to better performance. 
The other difference is that rather than learning the weights directly as in the case of fine-tuning, TAT learns the transforming matrix g. It is initialized with an identity matrix, which further eases the optimization. In summary, poor initialization and ill-posed learning manner make direct fine-tuning difficult to improve the classifier. These issues emerge in the 1-shot setting, and can be alleviated with more training samples as shown in the 5-shot setting.\", \"title\": \"Issues that make fine-tuning difficult to improve the classifier\"}", "{\"comment\": \"Oops, it is a typo. Thanks for pointing out! It should be the initial learning rate rather than Adam's eps parameter, for which the default value (1e-8) of PyTorch's implementation is used. We will fix it in the revision.\", \"title\": \"It should be the initial learning rate\"}", "{\"comment\": \"The performance of fine-tuning on 5-way 1-shot in Figure 2 is very surprising. It means fine-tuning does not work at all in this case. However, a fixed feature extractor can still get a reasonable accuracy in this case. One possible reason is overfitting. But it is still hard to explain why the performance at the 5th or 10th epoch is very low. Could the authors explain the possible reasons? Thanks.\", \"title\": \"Question about Fine-tuning on miniImageNet\"}", "{\"comment\": \"Hi, in the section 4.2, you said you use Adam optimizer with $\\\\epsilon = 10^{-3}$. Is the $\\\\epsilon$ learning rate or the $\\\\epsilon$ parameter in the original paper which is used to improve numerical stability?\", \"title\": \"Question about Adam optimizer.\"}", "{\"comment\": \"Thanks for your clarification.\", \"title\": \"Thanks for your clarification\"}", "{\"comment\": \"For each randomly sampled episode, we fine-tune the transformer g using the support images of current episode only, rather than the whole test set. That is, g is not shared among different episodes, and a unique g is learned for each episode. 
Fine-tuning is performed strictly on the training (support) data of each task, and it is impossible for g to see the test query images.\n\nDuring fine-tuning, we use a batch size of 4. Thus 1 epoch involves 2 iteration steps for the 5-way 1-shot setting and 7 steps for the 5-way 5-shot setting. We will clarify this in the revision.\", \"title\": \"Clarification of fine-tuning in the task adaptation stage\"}", "{\"comment\": \"In the paper \\\"A closer look at few-shot classification\\\", they test average accuracy over 600 tasks (episodes). And for each task's adaptation stage, they use the support set to train (fine-tune) a new classifier for 100 iteration steps (see section 4.1).\nIn your paper, you test average accuracy over 1200 episodes (tasks). But in the task adaptation stage, you fine-tune the transformer g for 20 epochs, which is an important difference from the former work. In general understanding, every epoch includes many tasks. 20 epochs might let the neural network traverse the whole test data set several times. I mean that the transformer g might remember the test data with so many adaptation epochs.\nOn the other hand, is 'epoch' equal to 'step' in your paper?\", \"title\": \"Compare Inference Stage With Others\"}", "{\"comment\": \"Hi Ning,\n\nThanks for your attention to our work. In a common few-shot learning setting like mini-ImageNet, both validation and test sets are drawn from novel classes unseen in the training set. This is distinct from normal supervised learning like the full ImageNet-1k classification. In Ravi & Larochelle (2017), which we followed, the three splits are referred to as meta-training, meta-validation and meta-test sets. And in each meta-set, the support and query images are referred to as training and test sets respectively. The support images in the novel data are meant to provide a hint for classification of the query images. Testing accuracy is measured on the query images only. 
In existing works, the support images have been exploited to learn a nearest-neighbor classifier (Ravi & Larochelle, 2017) or fine-tune a Softmax classifier (Chen et al., 2019). In our framework, we only utilize the support images (NOT query images) to fine-tune the transforming matrix, which is both conventional and reasonable.\n\nReferences\nSachin Ravi and Hugo Larochelle. Optimization as a model for few-shot learning. In ICLR 2017.\nWei-Yu Chen, Yen-Cheng Liu, Zsolt Kira, Yu-Chiang Frank Wang, and Jia-Bin Huang. A closer look at few-shot classification. In ICLR 2019.\", \"title\": \"How we split the dataset and fine-tune on the support images only\"}", "{\"comment\": \"Hi, I notice that in the Fast Task Adaptation Stage, you use the support data of NOVEL class images. Does it mean that you use test data (novel class data) to train the classifier at inference time?\nMaybe I don't totally understand what the framework does at inference time.\", \"title\": \"Question About Inference Time\"}" ] }
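The discussion in the record above keeps returning to two mechanisms: the Metric-Softmax classifier (a softmax over negative scaled squared Euclidean distances between an L2-normalized feature and L2-normalized class weights, per the implementation-details comment) and the task-adaptive initialization (classifier weights set to support-class centroids, transform M set to the identity, so the starting point is a nearest-centroid classifier). A minimal NumPy sketch of those two pieces, reconstructed from the thread rather than taken from the authors' code; all data, dimensions, and names below are invented for illustration:

```python
import numpy as np

def l2n(x, axis=0):
    """L2-normalize an array along an axis."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def metric_softmax(h, M, W, alpha):
    """Softmax over negative scaled squared Euclidean distances between the
    transformed, L2-normalized feature g(h) = M^T h and L2-normalized class
    weights -- the distance-based form described in the thread."""
    g = l2n(M.T @ h)
    Wn = l2n(W, axis=0)
    logits = -alpha * np.sum((Wn - g[:, None]) ** 2, axis=0)
    logits -= logits.max()  # numerical stability
    p = np.exp(logits)
    return p / p.sum()

# Toy 5-way 5-shot episode with clustered synthetic "features".
rng = np.random.default_rng(0)
n_way, k_shot, d = 5, 5, 16
centers = rng.normal(size=(n_way, d))
support = np.stack([c + 0.05 * rng.normal(size=(k_shot, d)) for c in centers])

# Initialization described in the thread: classifier weights = class
# centroids of the support features, transform M = identity matrix.
W_bar = support.mean(axis=1).T  # shape (d, n_way)
M = np.eye(d)

preds = [int(np.argmax(metric_softmax(support[c, s], M, W_bar, alpha=2.0)))
         for c in range(n_way) for s in range(k_shot)]
labels = [c for c in range(n_way) for _ in range(k_shot)]
```

With M fixed at the identity, this is exactly a nearest-centroid classifier on normalized features, which matches the thread's explanation of why TAT starts from a reasonable accuracy while a randomly re-initialized classifier starts from chance; in the method as described, only M would then be updated by gradient steps on the same loss.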
Bye2uJHYwr
Weighted Empirical Risk Minimization: Transfer Learning based on Importance Sampling
[ "Robin Vogel", "Mastane Achab", "Charles Tillier", "Stéphan Clémençon" ]
We consider statistical learning problems, when the distribution $P'$ of the training observations $Z'_1,\; \ldots,\; Z'_n$ differs from the distribution $P$ involved in the risk one seeks to minimize (referred to as the \textit{test distribution}) but is still defined on the same measurable space as $P$ and dominates it. In the unrealistic case where the likelihood ratio $\Phi(z)=dP/dP'(z)$ is known, one may straightforwardly extend the Empirical Risk Minimization (ERM) approach to this specific \textit{transfer learning} setup using the same idea as that behind Importance Sampling, by minimizing a weighted version of the empirical risk functional computed from the 'biased' training data $Z'_i$ with weights $\Phi(Z'_i)$. Although the \textit{importance function} $\Phi(z)$ is generally unknown in practice, we show that, in various situations frequently encountered in practice, it takes a simple form and can be directly estimated from the $Z'_i$'s and some auxiliary information on the statistical population $P$. By means of linearization techniques, we then prove that the generalization capacity of the aforementioned approach is preserved when plugging the resulting estimates of the $\Phi(Z'_i)$'s into the weighted empirical risk. Beyond these theoretical guarantees, numerical results provide strong empirical evidence of the relevance of the approach promoted in this article.
[ "statistical learning theory", "importance sampling", "positive unlabeled (PU) learning", "selection bias" ]
Reject
https://openreview.net/pdf?id=Bye2uJHYwr
https://openreview.net/forum?id=Bye2uJHYwr
ICLR.cc/2020/Conference
2020
{ "note_id": [ "e52kwC7Ozd", "B1xNiKQ3iH", "S1l8QYXniH", "Bkxejum3or", "BJeVudmnjB", "SkgpaoVCKr", "B1gYfQuTYH", "HyeE_uUstB" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1576798733172, 1573824924013, 1573824797785, 1573824664190, 1573824620412, 1571863492593, 1571812113157, 1571674220348 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1818/Authors" ], [ "ICLR.cc/2020/Conference/Paper1818/Authors" ], [ "ICLR.cc/2020/Conference/Paper1818/Authors" ], [ "ICLR.cc/2020/Conference/Paper1818/Authors" ], [ "ICLR.cc/2020/Conference/Paper1818/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper1818/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper1818/AnonReviewer3" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"This paper aims to address transfer learning by importance weighted ERM that estimates a density ratio from the given sample and some auxiliary information on the population. Several learning bounds were proven to promote the use of importance weighted ERM.\\n\\nReviewers and AC feel that the novelty of this paper is modest given the rich relevant literature and the practical use of this paper may be limited. The discussion with related theoretical work such as generalization bound of PU learning can be expanded significantly. The presentation can be largely improved, especially in the experiment part. The rebuttal is somewhat subjective and unconvincing to address the concerns.\\n\\nHence I recommend rejection.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Response to reviewer 3\", \"comment\": \"**Clarity**:\\n\\n1.We tried to give intuition about the type of setting in which auxiliary information is available to reweight the empirical risk at the beginning of page 2. 
Here, we provide specific examples in biometrics that we may add in the camera-ready:\n\nExample 1: In border control with facial recognition, the countries of origin of travellers are known from their passport information and one can easily obtain the proportion of each country of origin of the travellers that pass through an airport. Strata reweighting can be used to adapt a system for a specific location for accuracy and to correct ethnicity bias. This side information (it is not the image data) was already used by the National Institute of Standards and Technology (NIST) to evaluate the FRVT benchmark participants, see Grother et al. 2019 [3], section 3.5 (begins at page 138).\n\nExample 2: The same type of evaluation was done on iris recognition technology, where some technologies were shown to perform differently on light-colored eyes and dark-colored eyes. Since this characteristic varies in distribution depending on geographical location, it can also be exploited in strata, see Grother et al. 2018 [4], pages 63-66. In this context, there are far fewer strata than in example 1.\n\n3.1. In the context of the sentence, i.e. for standard binary classification, it is possible to estimate $p'$ using the dataset at hand, but $p$ is supposed to be known, which is a common assumption in PU learning.\n\n3.2. The assumption that $p'<p$ occurs in many practical transfer learning situations. It happens when a model is trained on a global population in order to be used on only a part of it where the probability $p$ of being positive is higher. For instance, in medical applications, being positive means being ill, the training dataset is composed of all patients, and the testing dataset is only composed of patients having a specific type of symptoms which increases the risk of being ill.\n\n4. As you pointed out, some formulations see few-shot learning as learning from a small dataset, see section 2.1 in Wang et al. (2019) [5]. 
However, it seems that few-shot learning covers every problem where supervised information is scarce for the task, see Definition 2.2 in Wang et al. (2019) [5], e.g. learning to classify many new image classes (big dataset) with only one/few images per task (which makes our bound very loose). We will make our explanation clearer in the camera-ready.\n\n5. Binary classification with varying class probabilities is introduced as a first illustration of the general reweighting scheme with importance function $\\Phi$ of Section 2, as explained in the middle of page 2. Approaches for this problem were indeed studied under the name *class-prior change* in du Plessis et al. 2012 [1]. However, we derive a finite-time bound for Eq (7) that leverages an estimate of the train class prior through $n_+', n_-'$, which means that we deal with ratios of empirical means. To our knowledge, it constitutes original work.\n\n**Comments**:\n\nWhile the consistency of the Importance Weighted ERM is known, the derivation of learning bounds was tackled in Cortes et al. 2010 in the case where the whole importance function is known. This setting is not practical, since knowing the importance function requires knowing the distribution of the data. We show that many scenarios in practical situations and in the literature can be seen as WERM, and they all require information on the relation between the test and train distributions.\n\nThere is a difference between $p'$ being too small, i.e. `p'<<p`, and `p'<p`.\n\n1. We agree that Eq (11) is Eq (3) in du Plessis et al. (2014) and the paper is cited in the section. We will refer explicitly to Eq (3) of du Plessis et al. (2014) in the camera-ready, before we introduce Eq (11) in our paper. Unlike our analysis, the derivations of du Plessis et al. (2014) and Niu et al. (2016) [2] focus on a specific type of functions and assume a fixed number of positive and unlabeled points. 
We will compare our results to [2] more extensively in the camera-ready.\\n\\n2. While we are aware of the variance of the Importance Sampling procedure, the experiments are performed on the ImageNet dataset, which contains 1.3 million images spread out over 1,000 classes. Hence, it would be computationally intensive to compute sensible standard errors for each setting. The settings in the Table of Figure 1 are described at the end of Section 4 (after Figure 3), but we will make it clearer by explicitly referring to their names in this paragraph.\\n\\nSince ImageNet is a balanced dataset (it does not contain stratum shift), we generated stratum shift artificially by removing instances (modifying academic datasets is common practice, see for example https://arxiv.org/abs/1803.09797 ) with strata based on the WordNet structure. The greyed-out lines are runs with the full data, which are not attainable in our stratum-shift scenario but are provided as a reference.\"}", "{\"title\": \"Response to reviewer 2\", \"comment\": \"Since many points are common with reviewer 3 (R.3), we refer to our answers to their review here.\\n\\nIn many practical cases, the proportion of positive instances or the strata probabilities are known, as seen in **Clarity**-1 of R.3. Hence, we believe that our results are practical, even though they do not correspond to the setting in which one has no idea of the range of the class prior. The work of Sugiyama et al. (2008) provides a practical way (the Kullback-Leibler Importance Estimation Procedure - KLIEP) of learning with importance reweighting, but is limited to a specific type of functions and does not derive any theoretical guarantees.\\n\\nThe stratum-shift case covers all the cases where the train and test distributions are both mixtures with the same components but different probability weights (stratum S = mixture component to which an observation X belongs). See the answer in **Clarity**-1 of R.3 for specific examples. 
For the discussion of related work, see **Comments**-1 of R.3.\\n\\nThe paper shows that PU learning can be seen as a specific case of WERM, and derives guarantees for PU learning. There is no point in showing that the PU learning formulation in Eq (11) (which is also the formulation of du Plessis et al. (2014) Eq (3), see R.3) performs better than other approaches for PU learning. The iterative WERM procedure in the appendix will be studied experimentally in future work.\"}", "{\"title\": \"References and additional information for all reviewers\", \"comment\": \"We thank the reviewers for their helpful comments and remarks. Below we respond to specific points. The following articles were mentioned in the rebuttal and are not already in the paper's reference section. They will be added to the bibliography for the camera-ready paper. Typos were taken into account and corrected.\\n\\n[1] Semi-Supervised Learning of Class Balance under Class-Prior Change by Distribution Matching,\\n du Plessis and Sugiyama, 2012.\\n\\n[2] Theoretical Comparisons of Positive-Unlabeled Learning against Positive-Negative Learning,\\n Niu et al., 2016.\\n\\n[3] NIST FRVT 1:1 Verification Report,\\n Grother et al., 2019.\\n\\n[4] IREX IX Part One: Performance of Iris Recognition Algorithms,\\n Grother et al., 2018.\\n\\n[5] Generalizing from a Few Examples: A Survey on Few-Shot Learning,\\n Wang et al., 2019.\"}", "{\"title\": \"Response to reviewer 1\", \"comment\": \"You may find the definition of the weights w_i of Eq (5) right under Eq (3).\\n\\nMultiple Importance Sampling (MIS) uses several proposal functions to sample points that follow a target distribution. In the context of our work, the proposal function is the training dataset. 
I would interpret generalizing our work to multiple importance sampling as involving several training datasets with different distributions.\\n\\nOne can straightforwardly generalize our analysis to this case, and leverage different sampling probabilities between the datasets to reduce the magnitude of the impact of $\\\\|\\\\Phi\\\\|$ in the bound of Lemma 1.\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I did not assess the derivations or theory.\", \"title\": \"Official Blind Review #1\", \"review\": [\"The authors consider the problem of a mismatch between the distribution of the training observations and the test distribution (a transfer learning setup). The paper seems technically sound, but it is not easy to read. Even Section 2 is difficult to read.\", \"Main drawback: Please define the weights w_i of Eq. (5) in Section 2.\", \"I have a question: is it possible to extend your work considering Multiple Importance Sampling and Generalized Multiple Importance Sampling schemes? Please discuss.\"]}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper targets the transfer learning problem. It aims to construct an unbiased estimator of the true risk for the target domain based on data from the source domain. To obtain the unbiased estimator, samples in the source domain are weighted based on some auxiliary information about both the source domain data distribution and the target domain data distribution. 
Specifically, similar to previous works, the paper first assumes P(Y=1) is known for the target domain, and gives a generalization bound for learning on the target domain. Then they consider two more concrete problems: one is learning with stratified information, when the conditional probability given the stratified information of the source domain is equal to that of the target domain. Then the paper considers PU learning. Generalization bounds are also given for these two problems. Finally, the paper shows some empirical results demonstrating the reweighting effect of its proposal.\\n\\nThe paper is a theoretical study of transfer learning, and a generalization of other learning problems including transfer learning, learning from stratified data, and PU learning. It assumes that when some auxiliary information is known, a generalization bound can be given by only minimizing a reweighted loss of the biased source domain data. However, the auxiliary information proposed in this paper is difficult to obtain. Thus, the practical use of this paper may be limited. The paper also lacks discussion of related theoretical work (such as generalization bounds for PU learning). Due to these reasons, I rate a weak reject for the paper.\\n\\nIn Sec. 2, to have an unbiased risk estimator as well as a generalization bound, the prior probability P(Y=1) should be known. However, the paper fails to provide any practical way to estimate this value. Although in the auxiliary part some results for when such a value cannot be accurately estimated are given, estimation methods are also required for the method to be practical. Moreover, such a result was already studied in Sugiyama et al. (2008). Thus, the novelty of this part is limited. \\n\\nIn Sec. 3.1, the paper focuses on the learning from stratified data problem, when some stratified information s for the data is given. The paper further assumes P(x|S=k) = P(x'|S'=k). 
First, in a general learning problem, whether transfer learning or not, only information about x and y is available. To justify that the information s is available, some real applications should be given as motivation. Moreover, the assumptions on the stratified data, i.e. P(x|S=k) = P(x'|S'=k) and P(S=k) \\neq P(S'=k), should also be justified. \\n\\nIn Sec. 3.2, the generalization bound of PU learning was also studied before in, for example, [Niu et al., Theoretical Comparisons of Positive-Unlabeled Learning against Positive-Negative Learning. NIPS 2016]. Discussion of the relationship between these theoretical results should be given. Also, in the experimental part, there are no empirical results comparing the proposed method with existing PU learning methods. Since one of the main contributions of this paper is on PU learning, empirical studies should also be provided to show the superiority of the proposed method. \\n\\n----------------------------\\nThe rebuttal is subjective (relying on expressions such as \\\"we believe\\\" and \\\"there is no point\\\" without enough support) and fails to address my concern. I will not raise my score.\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #3\", \"review\": \"Summary: This paper aims to show that we can estimate a density ratio for use in importance weighted ERM from the given sample and some auxiliary information on the population. Several learning bounds were proven to promote the use of importance weighted ERM.\\n\\n========================================================\", \"clarity\": \"This paper is mathematically concise and understandable overall. Here, I list some comments on the clarity.\\n\\n1. 
I found that the phrase \\\"auxiliary information\\\" has been used extensively from the very beginning when referring to the estimation of the density ratio. However, there is no explanation of what kind of auxiliary information we need to achieve this goal until page 5, where the authors discuss the strata random variable as the additional information (if I didn't make a mistake). I believe there is a better way to introduce the intuition about what kind of auxiliary information is sufficient to make learning possible.\\n\\n2. For the PU-learning setting (case-control) proposed by du Plessis et al. 2014, the assumption is that the marginal density of the unlabeled data is identical to the test marginal density and the positive data is drawn from the class-conditional probability p(x|y=1). I am not sure if it's appropriate to discuss it on page 2, where the authors want to discuss the situation where the train stage and test stage have different class probabilities.\\n\\n3. On page 3, the authors suggested that \\n\\\"it is very common that the fraction of positive instances in the training dataset is significantly lower than the test set (p' < p), supposed to be known here\\\". I have two questions about this.\\n3.1 Does this mean we are supposed to know p', p, or both? I am aware that the appendix discusses when p is misspecified.\\n3.2 I am not convinced that it is common that p' < p. It may be nice to cite some findings or provide more explanation of why this is the case. \\n\\n4. On page 4, the authors mention the few-shot learning problem, then describe it as a scenario where almost no training data with positive labels is available. Is this the same problem setting as the well-known few-shot learning one? In my understanding, few-shot learning is the scenario where we want to learn from small data, e.g., p can be 0.5 but we have a very small number of data points, balanced (n_pos=n_neg). 
Instead of few-shot, I feel it might be better to use a phrase like \\\"extreme class prior\\\" or \\\"extreme class probability scenario\\\".\\n\\n5. On page 3, I'm not sure why the authors suddenly focus on binary classification with varying class probabilities. A bit of introduction or motivation would be helpful. As far as I understand, this is the learning from class-prior shift scenario (or class-prior change), which has also been considered in the literature. The authors may consider citing some work in this line and discussing the difference in the findings between the proposed results and the existing work.\\n\\n========================================================\", \"comments\": \"My impression is that the novelty of this paper is modest. It is known that importance weighted ERM is unbiased and consistent with respect to the true risk. I believe there exists theoretical analysis of learning using WERM, especially in the situation where the importance weight function is known. For Lemma 2 and Corollary 1, it is suggested that p' should not be too small, but the authors also suggested that p' < p. I would like to know more about the setting the authors described here, e.g., what is an example of practical p' and p. \\n\\n1. Eq. (11) is identical to the proposed unbiased risk estimator of PU-learning in du Plessis et al. (NeurIPS2014). It would be better to clarify that they are equivalent (Eq. (3) of PU-learning in du Plessis et al. (NeurIPS2014)). They also provided a generalization error bound and an analysis of when p is misspecified. More theoretical analysis of this empirical risk estimator for case-control PU learning (e.g., estimation error bound) can also be found in the following paper:\\n\\nNiu et al. Theoretical Comparisons of Positive-Unlabeled Learning against Positive-Negative Learning, NeurIPS2016.\\n\\n2. How many trials were run in the experiments? It would be nice to see the standard error, not only the mean result. 
It is known that the importance weighting method can have high variance, and it might be expected that WERM may have high variance yet better performance. It would be helpful to explain how to read the table, e.g., what No Bias and top-5 error are. Why is half of the table in gray?\\n\\nAlthough the paper is well-written overall, I found it difficult to quantify the novelty of this paper. I believe the goal, as suggested by page 2, is to \\\"set theoretical grounds for the application of ideas behind weighted ERM\\\". As the authors suggested, this approach has been studied quite extensively both theoretically and experimentally. It would be helpful to explain what is new and the relationship of the proposed methods or bounds to the existing work to highlight the novelty of this paper.\\n\\nFor these reasons, I vote a weak reject for this paper.\\n\\n========================================================\", \"potential_typos\": \"1. There are \\\"du Plessis et al.\\\" and \\\"Du Plessis et al.\\\" in this paper. These indicate the same person, and it would be better to use only one convention (I think du Plessis is preferable).\\n\\n=========================================================\", \"after_rebuttal\": \"Thanks to the authors for clarifying several of my questions. I have read the rebuttal.\\nHowever, I feel that in its current form, I would like to stay with the same evaluation. Clarifying the difference between the theoretical results is definitely crucial to highlight the novelty of the paper. I would like to add more comments on the PU learning part. I hope the authors find the comments useful.\\n\\n1. du Plessis et al. (ICML 2015) \\\"Convex formulation for learning from positive and unlabeled data\\\", which was already cited in the paper, suggested that if we replace the 0-1 loss with a loss that does not have the symmetric property (e.g., logistic, squared), the form of the unbiased estimator can be different from Eq. 
(11) in this submitted paper (please see the paper for more details).\\n \\n2. Although we obtain an unbiased risk estimator by the WERM-like method, in deep learning, minimizing such a risk may lead to overfitting: as we can see from Eq. (11), although it is a cost-sensitive risk, it still treats all unlabeled data as negative. If we have a complex enough model, to minimize the risk, a classifier may classify all unlabeled data as negative, which undoubtedly leads to overfitting. This is discussed in Kiryo et al. (NeurIPS 2017), which has also already been cited in the submitted paper.\\n\\nNext, I would like to add more comments on the experiments.\\n\\nFor the experiments, I appreciate the authors' effort in doing experiments on such a big dataset. In that case, it may be nice to also include an experiment on a smaller dataset, e.g., MNIST, in the main body of the paper as well (I believe this has already been conducted but was in the appendix) to strengthen the experimental results in the paper. \\n\\nI think the writing in the experiment section can be improved. For example, I don't see that the first paragraph, which has lots of text, contains much information. Also, instead of suggesting the reader see Figure 1 for a comparison, we may use more space to interpret the result. If I didn't miss it, Figure 2 and Figure 3 were never explained or referred to in the main body. In that case, we may consider removing these figures and adding the result on the MNIST dataset.\"}" ] }
Hkls_yBKDB
Neural Program Synthesis By Self-Learning
[ "Yifan Xu", "Lu Dai", "Udaikaran Singh", "Kening Zhang", "Zhuowen Tu" ]
Neural inductive program synthesis is the task of generating instructions that can produce desired outputs from given inputs. In this paper, we focus on the generation of a chunk of assembly code that can be executed to match a state change inside the CPU. We develop a neural program synthesis algorithm, AutoAssemblet, learned via self-learning reinforcement learning that explores the large code space efficiently. Policy networks and value networks are learned to reduce the breadth and depth of the Monte Carlo Tree Search, resulting in better synthesis performance. We also propose an effective multi-entropy policy sampling technique to alleviate online update correlations. We apply AutoAssemblet to basic programming tasks and show significantly higher success rates compared to several competing baselines.
[ "Neural Program Synthesis", "Reinforcement Learning", "Deep learning", "Self-Learning" ]
Reject
https://openreview.net/pdf?id=Hkls_yBKDB
https://openreview.net/forum?id=Hkls_yBKDB
ICLR.cc/2020/Conference
2020
{ "note_id": [ "Bo0w9lrpaV", "yc1CisxbIB", "bzV3dNAsPw", "S1lSZTEhor", "ryg03qV3iB", "SJeVLtN2jB", "Hkg42vUycH", "HkldANPAYS", "rylmwZrCFB" ], "note_type": [ "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1577145910755, 1577123984573, 1576798733140, 1573829885469, 1573829302216, 1573828940063, 1571936172509, 1571874000320, 1571864923410 ], "note_signatures": [ [ "ICLR.cc/2020/Conference/Paper1817/AnonReviewer3" ], [ "ICLR.cc/2020/Conference/Paper1817/Authors" ], [ "ICLR.cc/2020/Conference/Program_Chairs" ], [ "ICLR.cc/2020/Conference/Paper1817/Authors" ], [ "ICLR.cc/2020/Conference/Paper1817/Authors" ], [ "ICLR.cc/2020/Conference/Paper1817/Authors" ], [ "ICLR.cc/2020/Conference/Paper1817/AnonReviewer2" ], [ "ICLR.cc/2020/Conference/Paper1817/AnonReviewer1" ], [ "ICLR.cc/2020/Conference/Paper1817/AnonReviewer3" ] ], "structured_content_str": [ "{\"title\": \"Re: After rebuttal\", \"comment\": \"Thanks for the reply. I look forward to the revised manuscript and do wish the authors all the best for the next submission cycle.\"}", "{\"title\": \"After rebuttal\", \"comment\": \"Thanks for providing constructive comments and pointing out the missing references!\\n\\nDuring the rebuttal stage, we focused on getting new baseline results and made a quick update near the deadline. However, due to a miscommunication among us, we missed a few references and didn't discuss them all. We are very sorry for the mistake.\\n\\nWe are thoroughly revising the manuscript and will make an update soon. Thank you very much!\"}", "{\"decision\": \"Reject\", \"comment\": \"The authors consider the problem of program induction from input-output pairs.\\nThey propose an approach based on a combination of imitation learning from \\nan auto-curriculum for policy and value functions and alpha-go style tree search. 
\nIt is applied to inducing assembly programs and compared to ablation \nbaselines. \n \nThis paper is below the acceptance threshold, based on the reviews and my own \nreading. \nThe main points of concern are a lack of novelty (the proposed approach is \nsimilar to previously published approaches in program synthesis), missing \nreferences to prior work, and a lack of baselines for the experiments.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Reply to review\", \"comment\": \"Thanks a lot for the valuable suggestions and the provision of so many reference works. Our reply to some of the questions above is as follows:\", \"novelty\": \"Again, thanks for providing so much literature. Some of it (like the RISC-V paper) was new to us and really broadened our view.\\n\\nMaking Reinforcement Learning methods work well in code generation is not a trivial task. One critical problem in RL training is dealing with reward sparsity and update correlation in the sampling stage. To point out, our novelty is not introducing reinforcement learning and MCTS, which have been explored in many other similar scenarios. The step we take forward is using task sampling and multiple policies to improve training stability, forcing the agent to really learn to solve problems rather than circumventing them. We regard this as important because the x86 state and action spaces are non-convex and relatively complex, so it is very likely that the areas being explored are highly correlated with the current policy, which in turn decides the explored areas. This means the agent would overfit certain types of tasks instead of finding ad-hoc solutions for different tasks.\\n\\nThe problem, also called distribution shift, can result in worsening RL performance as time goes by, because the RL agent is trained on data sampled by the agent itself. To encourage stability, we encourage RL to sample a balanced distribution of tasks by using the techniques of policy sampling and task sampling. 
\\n\\nTo support our assumption, we revised our work by adding another baseline trained without task sampling. Its performance quickly drops to less than 1% accuracy, compared to 10%+ accuracy on the test set with task sampling, because the RL agent only chooses to sample easy tasks at training time.\", \"baseline\": \"\", \"evaluate_a_model_optimizing_this_hybrid_loss\": \"To clarify, the REINFORCE result is obtained in the setting you suggested, which still uses imitation. That is because without pretraining via imitation learning, training purely with REINFORCE converges to an extremely low score due to distribution shift; we also tried this but did not present it.\", \"comparing_the_proposed_framework_against_some_neural_induction_baselines\": \"We are lacking baselines because of our choice of assembly language as our playground, which is a relatively new setting. Reimplementation is also not trivial because few works published their original code, especially for reinforcement learning.\\n\\nIn this situation, we choose to compare several algorithms implemented by ourselves to fit the setting. This is also part of the reason we chose a widely adopted language, behind which there might be more training resources from different programs, which can encourage a community joint effort in the future.\", \"dataset\": \"The 50, 40, 40 test set is not used for calculating accuracy. We use a bigger test set of 1000 samples generated randomly in the same way as in training. The smaller test set is written manually to test the model's ability to solve some problems that are easier to interpret. 
We use it to exhibit the generated programs and the corresponding intentions in our paper.\", \"experiment_setting\": \"Without semantic guidance or training-time regularization, we deem every \\\"aliasing program\\\" correct at test time. As mentioned in [2] (Leveraging Grammar and Reinforcement Learning for Neural Program Synthesis), there is a problem of \\\"program aliasing\\\" when there are not sufficient I/O examples to fully describe a task: many programs would be semantically equivalent, although only one recovers the human intention. But this literally means that fewer pairs lead to better accuracy, not because performance improves but because the threshold is lower. To get a fairer result, we revised our work by testing all experiments with an unseen held-out I/O pair. Although the results of all models drop, AutoAssemblet still outperforms the reported baselines.\"}", "{\"title\": \"Reply to review\", \"comment\": \"Thanks for the valuable suggestions as well as the criticism. Our reply to some of the questions above is as follows:\\n\\n1. Novelty: \\nMaking Reinforcement Learning methods work well in code generation is not a trivial task. One critical problem in RL training is dealing with reward sparsity and update correlation in the sampling stage. To point out, our novelty is not introducing reinforcement learning and MCTS, which have been explored in many other similar scenarios. The step we take forward is using task sampling and multiple policies to improve training stability, forcing the agent to really learn to solve problems rather than circumventing them. We regard this as important because the x86 state and action spaces are non-convex and relatively complex, so it is very likely that the areas being explored are highly correlated with the current policy, which in turn decides the explored areas. 
This means the agent would overfit certain types of tasks instead of finding ad-hoc solutions for different tasks.\\n\\nThe problem, also called distribution shift, can result in worsening RL performance as time goes by, because the RL agent is trained on data sampled by the agent itself. To encourage stability, we encourage RL to sample a balanced distribution of tasks by using the techniques of policy sampling and task sampling. \\n\\nTo support our assumption, we revised our work by adding another baseline trained without task sampling. Its performance quickly drops to less than 1% accuracy, compared to 10%+ accuracy on the test set with task sampling, because the RL agent only chooses to sample easy tasks at training time. \\n\\n2. Dataset: \\nOur code dataset is generated by randomly sampling from the syntax of the assembly language. After generating a program, we feed random inputs within our number range into the simulator to get the corresponding outputs of the program. Because the I/O pairs rely on the generated program, our dataset differs across experiments in the number of lines of program and the choices of registers, which are necessary to compose an instruction. During the experiments, we generated 300k tasks, and sample batches of tasks from them during training.\\n\\nDuring revision, we are also trying to make our test dataset more convincing by separating observed and held-out example pairs. We updated the new results in our paper. \\n\\nWe consider this still a single-task setting in the machine learning sense, where each programming task can be referred to as a datapoint. Multi-task learning usually refers to a setting introducing more than one learning objective, and that is never the intention of this paper. \\n\\n3. Baseline:\\nWe also agree that we are lacking baselines to make our experiments more convincing, but we think that's because of our choice of assembly language as our playground, which is a relatively new setting. 
Reimplementation is also not easy because few works published their original code, especially for reinforcement learning. To compensate, we choose MLE as our baseline, which is also adopted in other papers (e.g., https://arxiv.org/pdf/1805.04276.pdf). This is also part of the reason we chose a widely adopted language, behind which there might be more training resources from different programs, which can encourage a community joint effort in the future. We will also explore more literature to find other possibilities.\\n\\n4. Details:\\nWe added some details about our experiments at the end of this file.\"}", "{\"title\": \"Reply to review\", \"comment\": \"Thank you for your valuable review and suggestions. Our reply to some of the questions above is as follows:\\n\\nWe also regret not using a bigger set of instructions from x86, but this problem is not trivial to scale up. For example, we adopted only a limited subset of the assembly language because there are not sufficient methods to deal with control flow, so we only chose arithmetic instructions for this paper. Although the instructions are not as diverse in functionality as a high-level language, our vocabulary includes registers, memory, and constant numbers to make the story complete. Taking all these components together, it becomes a huge search space, which makes it hard to scale up. \\n\\nAs for previous work, we are glad to get to know more literature through your reviews. During our research, we tried to cover as much literature as we could before deciding on the setting of assembly language. There is no doubt that DSLs are a good choice, not only because many SOTA methods came from this research field, but also because they have already brought practical ad-hoc solutions to real-life scenarios. 
Working on a different setting, we hope that by introducing assembly language into the program synthesis field, a joint effort from fields like programming languages and deep learning can pour in, consistent with a more general computer architecture.\\n\\nFor the question about held-out examples, we think this is really a good point, which was also mentioned by another reviewer. We have updated our work by doing more experiments testing on held-out examples. In the revised version, we use the same number of I/O pairs as before, but test all experiments with an unseen held-out I/O pair. Although the results of all models drop, AutoAssemblet still outperforms the reported baselines. However, as stated in the updated figure, adding more I/O pairs does not help the neural network recover the correct program, which is quite interesting. 
As for previous work combining execution information and search methods, we are using some intuitive methods that have been explored by other works, like execution information (intermediate states in our case) and search methods (MCTS and some variants in our case), which we cannot claim as our contribution. However, making reinforcement learning work in assembly generation is not trivial, because an agent trained on data collected by itself can suffer serious distribution shift in a complex and non-convex state space like assembly code. So we adopted task resampling and policy sampling to further mitigate the problem. Our generation process can also interact directly with gdb to get register and memory values for planning, except that it is currently too slow for large-scale training.\"}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #2\", \"review\": \"This paper tackles the problem of program synthesis in a subset of x86 machine code from input-output examples. First, the paper uses a random code generation policy to generate many programs and executes them with random inputs to obtain I/O and program pairs. Then, the paper trains a model using imitation learning on this dataset, and then transitions to using policy gradient and Monte Carlo Tree Search methods to train the network.\\n\\nI thought it was quite cool that the paper generates assembly code, but the set of instructions allowed is quite limited. While the paper seems a bit dismissive about prior work such as Balog et al 2017 that uses \\\"query languages\\\", the higher-level primitives found in such languages (like \\\"filter\\\") could also mean that the models involved have to learn higher-level semantics than what this model needs.\\n\\nFurthermore, the paper only uses two input-output examples to specify the desired behavior, and the accuracy of the model's output is only evaluated on that pair of examples. While the paper discusses in Section 5.2 that it is important to learn general methods to solve a provided task, this evaluation setting prevents assessing that. 
Similar to previous work like Bunel et al 2018, I would encourage the authors to measure how well the generated programs can do on held-out example input-output pairs, to see whether the model could successfully recover the \\\"intended\\\" program; to assist with this, we can also increase the number of input-output pairs used to specify the task. Of course, this would make it probably impossible to recover the \\\"hard\\\" problems from section 4.3, since those require conditional execution.\\n\\nI think the paper should have cited works such as\\n- https://arxiv.org/abs/1906.04604\\n- https://openreview.net/forum?id=H1gfOiAqYm\\n- https://papers.nips.cc/paper/8107-improving-neural-program-synthesis-with-inferred-execution-traces\\nwhich also make use of execution information in order to predict the code. In particular, the first paper in this list also uses tree search methods to generate programs as a sequence of loop and branch-free instructions, so I believe it should be quite similar to this paper at a high level.\\n\\nConsidering the above limitations with the evaluation methodology, and the limited novelty of this work in light of these citations, I vote for weak reject.\"}", "{\"experience_assessment\": \"I have published in this field for several years.\", \"rating\": \"1: Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: I assessed the sensibility of the derivations and theory.\", \"title\": \"Official Blind Review #1\", \"review\": \"The authors propose to tackle the problem of neural inductive program synthesis using a combination of REINFORCE, imitation learning, and MCTS. Furthermore, they propose a method for sampling tasks that aims at minimising task correlation when optimizing the policy parameters. 
They test their method on a set of input-output sequences provided by automatically generated x86 programs, and a small set of manually designed programs.\\n\\nGiven the current state of the manuscript, this is a clear reject. Some of the main issues I notice are:\\n\\n1. The novelty of the proposed (combined) method is unclear, given that it is a relatively straightforward combination of relatively simple and battle-tested techniques; I don't consider this in general to be a problem, but previous work has explored the problem far more significantly, both algorithmically and in modeling terms.\\n\\n2. The experimental section is entirely composed of non-standard datasets, and - in general - the manuscript almost entirely lacks critical details regarding how the datasets are generated; e.g. What is the pilot policy? Is there a finite set of IO tasks defined for all the experiments? How were the manual tasks designed? What are the qualitative differences in task dynamics between the two experiment settings?\\n\\n3. The baselines are extremely basic, especially considering that there are a multitude of papers - some of which are mentioned in sections 1 and 2 - that would provide for some excellent comparison. Furthermore, there's a lack of details about the model setup, hyperparameters, state featurization, and so on.\\n\\n4. Across the manuscript there seems to be some confusion on whether they are tackling the problem as a multi-task setting, where each IO set is considered to be a separate task, or whether all of these tasks are just instances of a single MDP. Given that the program space is well defined, I would think that modelling the problem as a single task is more appropriate; however, the authors seem to have chosen a multi-task approach. In such a case, there's also a lot of previous work on multi-task RL and meta-learning that should have been mentioned and potentially used or compared against. 
This also affects how sensible their proposed sampling method is (and whether the assumption regarding on-policyness is actually reasonable).\", \"i_would_encourage_the_authors_to_improve_the_work_in_the_following_ways\": [\"Please include the nitty-gritty details about the setup, including dataset, simulator, training and algorithmic hyperparameters, state featurization, etc. - try to make experimental reproduction as easy as possible exclusively based on the manuscript.\", \"Clarify the setup with respect to point 4 above; ideally formalize the problem statement using MDPs, such that it can be properly reasoned about.\", \"Review previous work more closely, and choose a suitable (and possibly recent and as close to SOTA as reasonably possible) baseline, such that the proposed methods can be quantitatively and qualitatively compared.\", \"Similarly, please attempt to test your method on existing environments and datasets, such that any analysis against previous baselines can be fairly assessed.\"]}", "{\"experience_assessment\": \"I have published one or two papers in this area.\", \"rating\": \"3: Weak Reject\", \"review_assessment\": \"_checking_correctness_of_derivations_and_theory: N/A\", \"title\": \"Official Blind Review #3\", \"review\": \"[Summary]\\nThis paper addresses the problem of synthesizing programs (x86 assembly code) from input/output (I/O) pairs. To this end, the paper proposes a framework (AutoAssemblet) that first learns a policy network and a value network using imitation learning (IL) and reinforcement learning (RL) and then leverages Monte Carlo Tree Search (MCTS) to infer programs. The experiments show that AutoAssemblet can synthesize assembly programs from I/O pairs to some extent. Ablation studies suggest the proposed IL and RL guided search is effective.\", \"significance\": \"are the results significant? 2/5\", \"novelty\": \"are the problems or approaches novel? 
2/5\", \"evaluation\": \"are claims well-supported by theoretical analysis or experimental results? 4/5\", \"clarity\": \"is the paper well-organized and clearly written? 4/5\\n\\n[Strengths]\\n\\n*clarity*\\nThe overall writing is clear. The authors utilize figures well to illustrate the ideas. Figure 1 clearly shows the proposed pipeline as well as the MCTS process. In general, the notations and formulations are well-explained. \\n\\n*technical contribution*\\n- Optimizing both the imitation learning loss and the reinforcement learning loss yields better performance when more tasks are available and tasks are more difficult.\\n- Leveraging a learned policy network and value network for improving the efficiency of the MCTS seems effective.\\n\\n*ablation study*\\nAblation studies are comprehensive. The proposed framework first optimizes two losses (IL and RL) and leverages the learned policy network and the value network for improving MCTS. The provided ablation studies help analyze the effectiveness of each of them.\\n\\n*experimental results*\\n- All the descriptions of the experiments and the presentations of the results are fairly clear. \\n- The results demonstrate the effectiveness of the proposed RL guided MCTS.\\n\\n[Weaknesses]\\n\\n*novelty*\\nOverall, I do not find enough novelty from any aspects while the overall effort of this paper is appreciated. The reasons are as follows.\\n- This \\\"self-learning\\\" framework is not entirely novel since it has been proposed in [1], where the model is trained on a large number of programs that were randomly generated and tested on a real-world dataset (FlashFill). \\n- The hybrid objective (IL+RL) has been explored in neural program synthesis [2] (a supervised learning model is fine-tuned using RL), learning robot manipulation [3], character control [4], etc.\\n- Utilizing Monte Carlo Tree Search for program synthesis has been studied in many works. 
[5] proposes to treat the network outputs as proposals for a sequential Monte Carlo sampling scheme and [6] presents an RL guided Monte Carlo tree search framework.\\n- Program synthesis on assembly languages: RISC-V [6], etc.\\n\\n*related work*\\nThe descriptions of the related work are not comprehensive. While many neural synthesis works [1, 5, 7-10] have explored a wide range of settings for learning to perform program synthesis, they are not mentioned in the paper. I suggest the authors conduct a comprehensive survey on this line of work.\\n\\n*baselines*\\nIn my opinion, the baselines (imitation, REINFORCE, MCTS) presented in the paper are far from comprehensive. I believe the following baselines should also be considered:\\n- As the proposed model optimizes a combination of the IL loss and the RL loss, it would make sense to also evaluate a model optimizing this hybrid loss.\\n- Search-based program synthesis baselines (i.e. learning-guided search vs. heuristic search)\\n- Comparing the proposed framework against some neural induction baselines would confirm the importance and effectiveness of explicitly synthesizing programs instead of directly predicting the outcome/output. This has been shown in [1, 7].\\n\\n*testing set*\\nThe testing sets are extremely small (with only 50, 40, 40 programs), which makes the results less convincing. Also, how those testing sets were created is not mentioned anywhere in the paper. It only states \\\"we designed a set of human-designed tasks\\\".\\n\\n*number of given I/O pairs*\\nIt is not mentioned anywhere how the authors split the observed I/O pairs and the assessment I/O pairs, as has been done in most of the works [1, 2, 7, 8]. While the observed I/O pairs are input to the program synthesis framework, the assessment I/O pairs are used to evaluate the synthesized programs. 
By doing so, the more observed I/O pairs are given, the more accurate the synthesized programs should be (assuming the model can find programs that fit the observed I/O pairs).\\n- The discovery (Figure 2d) in this paper contradicts what is mentioned above: the program synthesis accuracy decreases when more I/O pairs are given. I am assuming the authors do not split observed I/O pairs and assessment I/O pairs.\\n- With the setup where the observed I/O pairs are separated from the assessment I/O pairs, a larger K (the number of observed I/O pairs) should be used so that it is more likely that the program for each task is unique, and the evaluation would make more sense.\\n\\n[1] \\\"RobustFill: Neural Program Learning under Noisy I/O\\\" in ICML 2017\\n[2] \\\"Leveraging Grammar and Reinforcement Learning for Neural Program Synthesis\\\" in ICLR 2018\\n[3] \\\"Reinforcement and Imitation Learning for Diverse Visuomotor Skills\\\" in RSS 2018\\n[4] \\\"DeepMimic: Example-Guided Deep Reinforcement Learning of Physics-Based Character Skills\\\" in SIGGRAPH 2018\\n[5] \\\"Learning to Infer Graphics Programs from Hand-Drawn Images\\\" in NeurIPS 2018\\n[6] \\\"Program Synthesis Through Reinforcement Learning Guided Tree Search\\\" arXiv 2018\\n[7] \\\"Neural Program Synthesis from Diverse Demonstration Videos\\\" in ICML 2019\\n[8] \\\"Execution-Guided Neural Program Synthesis\\\" in ICLR 2019\\n[9] \\\"Learning to Describe Scenes with Programs\\\" in ICLR 2019\\n[10] \\\"Learning to Infer and Execute 3D Shape Programs\\\" in ICLR 2019\\n\\n===== After rebuttal =====\\n\\nI appreciate the authors' revision and clarification of some points. I am still not entirely convinced by the response and the revision.\\n\\nFirst, I mentioned several papers that are related to this work, but the authors failed to discuss the difference between this submission and these works. 
\\n- The \\\"self-learning\\\" paradigm is used in [1, 2, 7, 8] but [1, 7] are still not mentioned. If the authors intentionally ignore these works so that they can claim the novelty, it is not acceptable.\\n- While the proposed hybrid objective is very similar to the one proposed in [2, 3, 4], the revision does not mention this.\\n- Why [5, 6] that utilizing Monte Carlo Tree Search for program synthesis are still missing from the revision? I believe these works are very relevant to this submission.\\n- The authors acknowledged that they did not know about [6] that works on program synthesis for an assembly language. Yet, this paper is still missing from the paper.\\nI spent a lot of time conducting a survey in this field to help the authors to improve this submission and trying to identify the novelty of this submission. However, the authors just chose to ignore it, which is very disappointing.\\n\\nSecond, many suggestions that I made are just rejected by the authors because they believe it is \\\"not very easy and trivial\\\". Given this self-learning setting, I believe it would be easy to implement some program induction baselines using supervised learning. Or how about search-based program synthesis baselines?\\n\\nThe writing about 50, 40, 40 testing sets is very misleading. I believe all the reviewers just think the testing accuracy was computed using those testing sets.\\n\\nFrom my point of view, \\\"program aliasing\\u201d does not mean there are not sufficient I/O examples to fully describe a task; Instead, it means there are programs written differently can be semantically the same, which means no matter how many input examples are given, their outputs will always match.\\n\\nOverall, I believe this submission requires a serious revision and I firmly recommend this paper to be rejected.\"}" ] }