| Column | Type | Observed range |
| --- | --- | --- |
| forum_id | string | 9–20 chars |
| forum_title | string | 3–179 chars |
| forum_authors | sequence | 0–82 items |
| forum_abstract | string | 1–3.52k chars |
| forum_keywords | sequence | 1–29 items |
| forum_decision | string | 22 classes |
| forum_pdf_url | string | 39–50 chars |
| forum_url | string | 41–52 chars |
| venue | string | 46 classes |
| year | string (date) | 2013-01-01 00:00:00 to 2025-01-01 00:00:00 |
| reviews | sequence | (not given) |
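As a minimal, hedged sketch of how one record of this dump might be read back out, the snippet below assumes the data has been exported as JSON Lines under the hypothetical filename `reviews.jsonl`; the field names follow the schema above and the record layout shown below it, but the exact loading path depends on how the dataset is actually distributed.

```python
import json

# Hedged sketch: assumes a JSON Lines export under the hypothetical
# filename "reviews.jsonl"; field names follow the schema table above.
with open("reviews.jsonl") as f:
    record = json.loads(f.readline())

print(record["forum_id"], "-", record["forum_title"])
print("decision:", record["forum_decision"], "| venue:", record["venue"])

# The "reviews" column holds parallel lists (note_id, note_type, note_created,
# note_signatures, ...); each note's payload is itself a JSON-encoded string
# stored in structured_content_str, so it must be decoded a second time.
notes = record["reviews"]
for note_type, payload in zip(notes["note_type"], notes["structured_content_str"]):
    content = json.loads(payload)
    print(f"{note_type}: {content.get('title', '(no title)')}")
```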
SyeLno09Fm
Few-Shot Intent Inference via Meta-Inverse Reinforcement Learning
[ "Kelvin Xu", "Ellis Ratner", "Anca Dragan", "Sergey Levine", "Chelsea Finn" ]
A significant challenge for the practical application of reinforcement learning to real-world problems is the need to specify an oracle reward function that correctly defines a task. Inverse reinforcement learning (IRL) seeks to avoid this challenge by instead inferring a reward function from expert behavior. While appealing, it can be impractically expensive to collect datasets of demonstrations that cover the variation common in the real world (e.g., opening any type of door). Thus, in practice, IRL must commonly be performed with only a limited set of demonstrations, where it can be exceedingly difficult to unambiguously recover a reward function. In this work, we exploit the insight that demonstrations from other tasks can be used to constrain the set of possible reward functions by learning a "prior" that is specifically optimized for the ability to infer expressive reward functions from limited numbers of demonstrations. We demonstrate that our method can efficiently recover rewards from images for novel tasks and provide intuition as to how our approach is analogous to learning a prior.
[ "Inverse Reinforcement Learning", "Meta-Learning", "Deep Learning" ]
https://openreview.net/pdf?id=SyeLno09Fm
https://openreview.net/forum?id=SyeLno09Fm
ICLR.cc/2019/Conference
2019
{ "note_id": [ "SJlncKTMlV", "rJglubvh14", "BJg0ATQ2yE", "ryxrkKm2kV", "SJl9jB7nJV", "rkeAjdT9JE", "rJgRd4i_RX", "SJgvMVsuCm", "BygVyNouCX", "BygzyiA4TQ", "SkeDtAClTX", "B1ef3ksY27" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544898964481, 1544479080178, 1544465878444, 1544464604964, 1544463777972, 1544374437985, 1543185525860, 1543185422866, 1543185372329, 1541888730289, 1541627518550, 1541152682461 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper713/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper713/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper713/Authors" ], [ "ICLR.cc/2019/Conference/Paper713/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper713/Authors" ], [ "ICLR.cc/2019/Conference/Paper713/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper713/Authors" ], [ "ICLR.cc/2019/Conference/Paper713/Authors" ], [ "ICLR.cc/2019/Conference/Paper713/Authors" ], [ "ICLR.cc/2019/Conference/Paper713/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper713/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper713/AnonReviewer2" ] ], "structured_content_str": [ "{\"metareview\": \"This work proposes to use the MAML meta-learning approach in order to tackle the typical problem of insufficient demonstrations in IRL.\\n\\nAll reviewers found this work to contain a novel and well-motivated idea and the manuscript to be well-written. The combination of MAML and MaxEnt IRL is straightforward, as R2 points out, however the AC does not consider this to be a flaw given that the main novelty here is the high-level idea rather than the technical details.\\n\\nHowever, all reviewers agree that for this paper to meet the ICLR standards, there has to be an increase in rigorousness through (a) a more close examination of assumptions, sensitivity of parameters and connections to imitation learning (b) expanding the experimental section.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Well-motivated idea but execution and analysis is not convincing\"}", "{\"title\": \"Thank you\", \"comment\": \"Thank to the authors for the provided clarifications. I believe this answers my question on resemblance to other approaches and that this sufficiently different.\\n\\nHowever my concern regarding claim of \\\"avoiding the need for hand-crafted features for the IRL reward function\\\" by the suggested approach is still persists. I am of the opinion that this complexity is now shifted elsewhere. I would still suggest a more rigorous exploration of the sensitivity to these parameters, along with a detailed study of impact on solution quality is warranted. In addition including more experimental results providing a comprehensive picture of the proposed solution would help improve the paper. Given these reasons, I intend to leave with my original score unchanged.\"}", "{\"title\": \"Thanks for your continued feedback, but imitation learning is not the same as inverse RL\", \"comment\": \"Our paper is on inverse reinforcement learning. The goal of this setting to learn the cost function of another agent through observing behavior. Evaluation of the learned cost function should naturally be with respect to the original cost function. 
Since we are trying to explicitly recover the cost function of the expert, the value difference metric seems to us reasonable. We emphasize again that this comes from prior work.\\n\\nThis problem statement is different from the imitation setup. In fact, imitation learning very often does not recover a cost function at all.\\n\\nWe would like to address your concern and thank you for your time, but can you provide a more specific description of what normalized metric are you suggesting?\"}", "{\"title\": \"RE: Comment: the value difference metric is \\\"normalized\\\"\", \"comment\": \"I think my point is that in imitation learning with suboptimal experts (which is usually the case), a normalized metric should be relative to the performance of the suboptimal expert, not the optimal policy for the original RL. Being normalized wrt the true optimal policy does not provide any calibration, because the suboptimal expert that provides the demonstrations can also have a large value difference wrt V*.\"}", "{\"title\": \"Comment: the value difference metric is \\\"normalized\\\"\", \"comment\": \"We would just like to point out that the value difference metric is \\\"normalized\\\". When the value difference is zero, the recovered policy is optimal. The difference is computed with respect to V*, which we explain in the paper. There seems to have a been a persistent misunderstanding of this metric, which we hope is now clarified.\"}", "{\"title\": \"RE: Response\", \"comment\": \"Thanks for clarifying my questions. While the response does address most of my questions, I think that the paper needs to be reorganized to more clearly emphasize the assumptions made for this method to work, so the paper can be more rigorous. In addition, providing some failure cases will help also understanding its strengths and weakness in practice and the validity of those assumptions. Therefore, I intend not to raise the score.\\n\\nAbout the evaluation, the current paper uses value difference, which compares the algorithm versus the optimal policy with respect to the true reward. However, this is still an unnormalized performance measure, as the expert policy is not necessarily the optimal policy with respect to the true reward. Because IL in general can only achieve the expert-level performance, not necessarily the optimal one, in my viewpoint, at least the expert performance (e.g. its value difference) needs to be included to establish a baseline.\"}", "{\"title\": \"Response\", \"comment\": \"Thanks for your review. We have addressed the clarifications requested by the reviewer and added clarifying comments on the additional domains required by the reviewers.\\n\\n\\u201cThe major weakness of the paper is that its hypothesis is not tested exhaustively enough to draw sound conclusions \\u2026 A single result on a simplified computer game does not shed much light on where an allegedly state-of-the-art model stands toward such an ambitious goal. I would at least like to see results on some manipulation tasks, say on half-cheetah, ant etc. \\n..\\nCombination of MaxEnt IRL and MAML is novel. That said, the paper combines them in the most straightforward way\\u201d\\n\\nWe appreciate the feedback. However, we believe that this criticism is somewhat unfair: the tasks in our evaluation are comparable in complexity or more complex than prior work in IRL. IRL is a difficult problem, and almost all prior IRL papers employ tabular domains. 
Our domains have raw pixel observations for the reward, which makes them more complex. We summarize the evaluation domains in prior work here: \\nRatliff et al. ICML '06 consider a discretized driving domain, Ziebart et al., AAAI '08 considered a driver route modelling setting with roads featurized by 22 dimensional vectors, Hadfield-Menell et al. NIPS '16 considered a tabular domain and used a featurized state representation with dimensionality 3 or 10, Hadfield-Menell et al. NIPS '17 considered 4 variants of a tabular domain with a featurized state representation of 50 dimensions, Malik et al. ICML '18 considered a tabular domain on the order of a 5 x 5 grid with 4 object classes and 4 objects.\\n\\nRegardless of what experiments are presented, it is always the case that more experiments on more complex domains provides stronger evidence for an approach. From this perspective, an approach should provide results comparable to prior work in the literature. It is not uncommon for prior work in the IRL domain to demonstrate the benefits their approach in a single domain as demonstrated above. \\n\\nIn addition, to our knowledge there is no prior work on the problem of meta-IRL. There is also no prior work demonstrating IRL on half-cheetah to the best of our knowledge. The only example on Ant comes from (Fu et al. 2017) but this environment is not suited to generating a large diversity of tasks for meta-learning.\\n\\n\\u201cThe parameter \\\\phi and the loss \\\\mathcal{L}_{IRL} have not been introduced.\\u201d\\n\\nThat equation was meant to provide the definition of the Max-Ent IRL loss. We have fixed this.\"}", "{\"title\": \"Response\", \"comment\": \"We thank the reviewer for their comprehensive review. We have addressed the clarification required by the reviewer. We have also requested some important clarifications on certain comments made by the reviewer.\\n\\n\\u201cThe paper assumes the expert is produced by the MaxEnt model.\\u201d\\n\\nWe have made the MaxEnt modeling assumption more explicit in the paper (see page 3). In the IRL literature, the MaxEnt model is a standard assumption (Ziebart et al. 2008, Levine et al. 2012, Huang et al. 2014, Ho et al. 2016, Finn et al. 2016, Fu et al. 2017) as it allows for sub-optimal demonstrations and has a connection to maximum likelihood estimation. \\n\\n\\u201cIn imitation learning, it is known that the expert policy is often sub-optimal, and therefore the goal in imitation learning is mostly only to achieve expert-level performance. Given this, the way this paper evaluate the performance is misleading and improper to me, which leads to an overstatement of the benefits of the algorithm.\\u201d\\n\\nCan you provide details on how you believe the evaluation should be done? Using value difference to evaluate the quality of the rewards follows prior work in inverse reinforcement learning (Levine et al. NIPS '11, Wulfmeier et al, 2016, Brown et al. AAAI 2018), and therefore seemed like the most appropriate evaluation metric. Value difference measures the difference between the learned policy\\u2019s performance and the expert, which seems to be a good measure of whether the policy achieves expert-level performance. We would be happy to add some other metric that you might recommend, if you have specific metrics in mind.\\n\\n\\u201cEquation (8)...\\u201c\\n\\nWe have fixed this. You are correct that there is a missing index over i that was dropped by mistake in the appendix. 
Thank you for pointing that out.\\n\\n\\u201ca) The meta-training set {T_i; i=1,...,N} and the meta-test set {T_j; i=1,...,M} seems to overload the notation. I suppose this is unintentional but it may appear that the two sets share the first T_1,.., T_M tasks, e.g., when N>=M, instead of being disjoint.\\u201d\\n\\nWe have addressed this in the paper by stating that meta-train and meta-test sets are explicitly disjoint. \\n\\n\\u201cb) The set over which the summation is performed in (4) is unclear; alpha in (4) is not defined, though I guess it's a positive step size.\\u201d\\n\\nWe have clarified this to indicate that alpha is a step size. \\n\\n\\u201cc) On p4, \\\"we can view this problem as aiming to learn a prior over the intentions of human demonstrators\\\" is an overstatement to me. At best, this algorithm learns a prior over rewards for solving maximal entropy IRL, not intention. And the experiment results do not corroborate the statement about \\\"human\\\" intention.\\u201d\\n\\nWe provided this sentence to give some intuition for our approach (the first word in the quoted sentence is \\u201cIntuitively\\u201d). We clarified to this sentence to use the word \\u201creward\\u201d in place of \\u201cintentions\\u201d and \\u201cexpert\\u201d instead of \\u201chuman\\u201d.\\n\\n\\u201cd) On p4, \\\"since the space of relevant reward functions is much smaller than the space of all possible rewards de\\ufb01nable on the raw observations\\\" needs to be justified. This may not be true in general, e.g., learning the set of relevant functions may require a larger space than learning the reward functions.\\u201d\\n\\nWe clarified this sentence in the paper. We are referring to reward functions that can explain a particular behavior. In this sense, it is a strict subset of the reward functions we can define. \\n\\n\\u201ce) The authors call \\\\mu_\\\\tau the \\\"state\\\" visitation, but this is rather confusing, as it is the visiting frequency of state and action (which is only made clear late in the appendix).\\u201d\\n\\nThis terminology comes from the original MaxEnt IRL paper (see Ziebart 2008). We agree however, and have clarified this in the paper. \\n\\n\\u201cf) On p5, it writes \\\"... taking a small number of gradient steps on a few demonstrations from given task leads\\\" But the proposed algorithm actually only takes \\\"one\\\" gradient step in training.\\u201d\\n\\nTo clarify, in our experiment, we use one gradients step during meta-training, but up to 20 at meta-test. We discuss this in the paper. We note that is it is possible to take more than one gradient step at meta-training time although it is computationally more expensive.\\n\\n\\u201cg) The convention of derivatives used in the appendix is the transpose of the one used in the main paper.\\u201d\\n\\nWe will correct this. \\n\\n\\u201c3) T^{tr} seems to be typo in (11)\\u201d\\n\\nTo clarify, this is not a typo. It is consistent with our notation in the preliminary section. In meta-learning, there is a meta-training and meta-testing dataset which consists of tasks. For each task, there is T^{tr} and T^{test}, which are training and test points. It is easy to see that few-shot learning is one such example of this.\"}", "{\"title\": \"Response\", \"comment\": \"We thank the reviewer for their comments. 
We have addressed the clarifications requested by the reviewer and added comments regarding requested comparisons.\\n\\n\\u201cthere is a close resemblance to other similar approaches - mainly, imitation learning.\\u201d\\n\\nBehavior cloning is a simple solution, but only tends to succeed with large amounts of data. This is due to the problem of compounding error due to covariate shift which has been studied in prior work (e.g. Ross AISTATS 2010, Ross 2011). Learning reward functions that reason about outcomes, and prioritize entire trajectories over others, can mitigate this effect by avoiding per time-step fitting. The difference between these two approaches has been further discussed in prior work in the literature (e.g. MacGlashan et al. 2015) and IRL has been shown to be a better solution in settings where lots of data is not available (e.g. Finn et al. 2016, Fu et al. 2017).\\n\\n\\u201cFurthermore, given that this work primarily attempts to improve performance with using meta-learned reward function instead of default initialization - it might make sense to also compare with method such as Finn 2017, Ravi & Larochelle 2016\\u201d\\n\\nThank you for the suggestion. Neither Finn et al. 2017 nor Ravi & Larochelle 2016 actually tackle the inverse reinforcement learning problem, so these methods are not directly applicable. Our method is an extension of Finn et al. 2017 to the IRL setting. An extension of Ravi & Larochelle is likely also possible, but would itself constitute a novel method. We do compare to an alternative meta-learning approach based on recurrent models, which is somewhat similar to Duan et al. 2017 (see Figure 6 red line), and we find that our approach substantially outperforms this alternative method.\\n\\n\\u201cThe results are limited, with experiments using only the synthetic (seemingly quite simple) SpriteWorld data-set. Given the stated objective of this work to extend IRL to beyond simple cases, one would expect more results and with larger problems/data-sets\\u201d\\n\\nThe complexity of the experiments presented in the IRL literature are generally less complex than the RL literatures. Our experiments however are able to scale to high-dimensional image inputs, in contrast to many prior methods in IRL (see below). The complexity of the experiments should be judged relative to the literature, rather than a different problem formulation. In the IRL literature, it is not uncommon to report results in domains which we summarize here: Ratliff et al. ICML '06 consider a discretized driving domain, Ziebart et al., AAAI '08 considered a driver route modelling setting with roads featurized by 22 dimensional vectors, Hadfield-Menell et al. NIPS '16 considered a tabular domain and used a featurized state representation with dimensionality 3 or 10, Hadfield-Menell et al. NIPS '17 considered 4 variants of a tabular domain with a featurized state representation of 50 dimensions, Malik et al. ICML '18 considered a tabular domain on the order of a 5 x 5 grid with 4 object classes and 4 objects. We also emphasize that our domain is more challenging than those in prior work as it requires scaling to high-dimensional observations spaces.\\n\\n\\u201cImages are referred to as high dimensional observation spaces, can this be further clarified?\\u201d\\n\\nThe observation spaces considered in recent prior IRL work typically operates on features spaces on the order of 10-50 dimensions (e.g. Hadfield-Menell et al. NIPS\\u201916, Hadfield-Menell el al. 
NIPS\\u201917) or tabular states (e.g. Malik et al. ICML '18). In contrast, our reward function is defined directed on image observations, which in the case of our 84 x 84=7056 images is much higher dimensional.\"}", "{\"title\": \"An interesting application of MAML to Inverse RL but lacks rigorousness and persuasive experimental results\", \"review\": \"This paper aims to address the problem of lacking sufficient demonstrations in inverse reinforcement learning (IRL) problems. They propose to take a meta learning approach, in which a set of i.i.d. IRL tasks are provided to the learner and the learner aims to learn a strategy to quickly recover a good reward function for a new task that is assumed to be sampled from the same task distribution. Particularly, they adopt the gradient-based meta learning algorithm, MAML, and the maximal entropy (MaxEnt) IRL framework, and derive the required meta gradient expression for parameter update. The proposed algorithm is evaluated on a synthetic grid-world problem, SpriteWorld. The experimental results suggest the proposed algorithm can learn to mimic the optimal policy under the true reward function that is unknown to the learner.\", \"strengths\": \"1) The use of meta learning to improve sample efficiency of IRL is a good idea.\\n2) The combination of MAML and MaxEnt IRL is new to my knowledge. \\n3) Providing the gradient expression is useful, which is the main technical contribution of this paper. (But it needs to be corrected; see below.)\\n4) The paper is well motivated and clearly written \\\"in a high level\\\" (see below).\", \"weakness\": \"1) The derivation of (5) assumes the problem is tabular, and the State-Visitations-Policy procedure assumes the dynamics/transition of the MDP is known. These two assumption are rather strong and therefore should be made explicitly in the problem definition in Section 3.\\n\\n2) Equation (8) is WRONG. The direction of the derivation takes is correct, but the final expression is incorrect. This is mostly because of the careless use of notation in derivation on p 15 in the appendix (the last equation), in which the subscript i is missed for the second term. The correct expression of (8) should have a rightmost term in the form (\\\\partial_\\\\theta r_\\\\theta) D (\\\\partial_\\\\theta r_\\\\theta)^T, where D is a diagonal matrix that contains \\\\partial_{r_i} (\\\\E_{\\\\tau} [ \\\\mu_\\\\tau])_i and i is in 1,...,|S||A|. \\n\\n3) Comparison with imitation learning and missing details of the experiments. \\na) The paper assumes the expert is produced by the MaxEnt model. In the experiments, it is unclear whether this is true or not, as the information about the demonstration and the true reward is not provided. \\nb) While the experimental results suggest the algorithm can recover the similar performance to the optimal policy of the true reward function, whether this observation can generalize outside the current synthetic environment is unclear to me. In imitation learning, it is known that the expert policy is often sub-optimal, and therefore the goal in imitation learning is mostly only to achieve expert-level performance. Given this, the way this paper evaluate the performance is misleading and improper to me, which leads to an overstatement of the benefits of the algorithm. \\nc) It would be interesting to compare the current approach with, e.g., the policy-based supervised learning approach to imitation learning (i.e. behavior cloning). 
\\n\\n4) The rigorousness in technicality needs to be improved. While the paper is well structured, the writing at the mathematical level is careless, which leads to ambiguities and mistakes (though one might be able to work out the right formula after going through the details of the entire paper). Below I list a few points. \\n a) The meta-training set {T_i; i=1,...,N} and the meta-test set {T_j; i=1,...,M} seems to overload the notation. I suppose this is unintentional but it may appear that the two sets share the first T_1,.., T_M tasks, e.g., when N>=M, instead of being disjoint. \\n b) The set over which the summation is performed in (4) is unclear; alpha in (4) is not defined, though I guess it's a positive step size.\\n c) On p4, \\\"we can view this problem as aiming to learn a prior over the intentions of human demonstrators\\\" is an overstatement to me. At best, this algorithm learns a prior over rewards for solving maximal entropy IRL, not intention. And the experiment results do not corroborate the statement about \\\"human\\\" intention.\\n d) On p4, \\\"since the space of relevant reward functions is much smaller than the space of all possible rewards de\\ufb01nable on the raw observations\\\" needs to be justified. This may not be true in general, e.g., learning the set of relevant functions may require a larger space than learning the reward functions.\\n e) The authors call \\\\mu_\\\\tau the \\\"state\\\" visitation, but this is rather confusing, as it is the visiting frequency of state and action (which is only made clear late in the appendix). \\n f) On p5, it writes \\\"... taking a small number of gradient steps on a few demonstrations from given task leads\\\" But the proposed algorithm actually only takes \\\"one\\\" gradient step in training. \\n g) The convention of derivatives used in the appendix is the transpose of the one used in the main paper.\", \"minor_points\": \"1) typo in (2) \\n2) p_\\\\phi is not defined, L_{IRL} is not defined, though the definition of both can be guessed.\\n3) T^{tr} seems to be typo in (11)\\n4) A short derivation of (2) in the Appendix would be helpful.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Novel solution to an important problem, but needs further details and experimentation\", \"review\": \"This paper attempts to the solve data-set coverage issue common with Inverse reinforcement learning based approaches - by introducing a meta-learning framework trained on a smaller number of basic tasks. The primary insight here is that there exists a smaller set of unique tasks, the knowledge from which is transferable to new tasks and using these to learn an initial parametrized reward function improves the coverage for IRL. 
With experiments on the SpriteWorld synthetic data-set, the authors confirm this hypothesis and demonstrate performance benefits - showcasing better correlation with far fewer number of demonstrations.\", \"pros\": [\"The solution proposed here in novel - combining meta-learning on tasks to alleviate a key problem with IRL based approaches.\", \"The fact that this is motivated by the human-process of learning, which successfully leverages tranferability of knowledges across a group of basic tasks for any new (unseen) tasks, makes it quite interesting.\", \"Unburdens the needs for extensive datasets for IRL based approach to be effective\", \"To a large extent, circumvents the need of having to manually engineered features for learning IRL reward functions\"], \"cons\": \"- Although the current formulation is novel, there is a close resemblance to other similar approaches - mainly, imitation learning. It would be good if the authors could contrast the differences between the proposed approach and approach based on imitation learning (with similar modifications). Imitation learning is only briefly mentioned in the related work (section-2), it would be helpful to elaborate on this. For instance, with Alg-1 other than the specific metric used in #3 (MaxEntIRLGrad), the rest seems close similar to what would be done with imitation learning?\\n- One of main contributions is avoiding the need for hand-crafted features for the IRL reward function. However, even with the current approach, the sampling of the meta-learning training and testing tasks seem to be quite critical to the performance of the overall solution and It seems like this would require some degree of hand-tuning/picking. Can the authors comment on this and the sensitivity of the results to section of meta-learning tasks and rapid adaption?\\n- The results are limited, with experiments using only the synthetic (seemingly quite simple) SpriteWorld data-set. Given the stated objective of this work to extend IRL to beyond simple cases, one would expect more results and with larger problems/data-sets.\\n\\t- Furthermore, given that this work primarily attempts to improve performance with using meta-learned reward function instead of default initialization - it might make sense to also compare with method such as Finn 2017, Ravi & Larochelle 2016.\\n\\nMinor questions/issues:\\n> section1: Images are referred to as high dimensional observation spaces, can this be further clarified?\\n> section3: it is not immediately obvious how to arrive at eqn.2. Perhaps additional description would help.\\n> section4.1 (MandRIL) meta-training: What is the impact/sensitivity of computing the state visitation distribution with either using the average of expert demos or the true reward? In the reported experiments, what is used and what is the impact on results, if any ?\\n> section4.2: provides an interesting insight with the concept of locality of the prior and establishes the connection with Bayesian approaches.\\n> With the results, it seems like that other approaches continue to improve on performance with increasing number of demonstrations (the far right part of the Fig.4, 5) whereas the proposed approach seems to stagnate - has this been experimented further ? 
does this have implications on the capacity of meta-learning ?\\n> Overall, given that there are several knobs in the current algorithm, a comprehensive sensitivity study on the relative impact would help provide a more complete picutre\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Nice and novel idea but not tested enough\", \"review\": \"The paper defines a new machine learning problem setup by applying the meta-learning concept to inverse reinforcement learning (IRL). The motivation behind this setup is that expert demonstrations are scarce, yet the reward functions of related tasks are highly correlated. Hence, there is plenty of transferrable knowledge across tasks.\", \"strengths\": [\"--\", \"The proposed setup is indeed novel and very ecologically-valid, meaning meta-learning and IRL are natural counterparts for providing a remedy to an important problem.\", \"The paper is well-written, technically sound, and provides a complete and to-the-point literature survey. The positioning of the novelty within literature is also accurate.\"], \"weaknesses\": [\"--\", \"The major weakness of the paper is that its hypothesis is not tested exhaustively enough to draw sound conclusions. The paper reports results only on the SpriteWorld data set, which is both synthetic and grid-based. Having acknowledged that the results reported on this single data set are very promising, I do not find this evidence sufficient to buy the proposed hypothesis. After all, IRL is meant mainly for real-world tasks where rewards are not easy to model. A single result on a simplified computer game does not shed much light on where an allegedly state-of-the-art model stands toward such an ambitious goal. I would at least like to see results on some manipulation tasks, say on half-cheetah, ant etc.\", \"Combination of MaxEnt IRL and MAML is novel. That said, the paper combines them in the most straightforward way, which does not incur any complications that call for technical solution that can be counted as a contribution to science. Overall, I find the novelty of this work overly incremental and its impact potential very limited.\", \"A minor issue regarding clarity. Equation 3 is not readable at all. The parameter \\\\phi and the loss \\\\mathcal{L}_{IRL} have not been introduced.\", \"This paper consists of a mixture of strong and weak aspects as detailed above. While the proposed idea is novel and the first results are very promising, I view this work to be at a too early stage to appear in ICLR proceedings as a full-scale paper. I would like to encourage the authors to submit it to a workshop, strengthen its empirical side and resubmit to a future conference.\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
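For orientation between records: the method debated in the thread above combines MAML-style adaptation with the MaxEnt IRL likelihood. A schematic of the bi-level objective, reconstructed from the abstract and the reviewers' own notation (alpha as the inner step size, L_IRL the MaxEnt IRL loss, T^tr and T^test the per-task demonstration splits, with the standard MaxEnt trajectory likelihood), is sketched below; it is an approximation, not the paper's exact formulation.

```latex
% Sketch of the bi-level objective; notation reconstructed from the thread.
% Inner step: adapt reward parameters on a task's training demonstrations.
\phi_i = \theta - \alpha\, \nabla_{\theta}\, \mathcal{L}_{\mathrm{IRL}}\big(\theta;\, \mathcal{T}_i^{\mathrm{tr}}\big)
% Meta-objective: adapted parameters must explain held-out demonstrations,
% with the MaxEnt IRL loss as the negative log-likelihood of trajectories.
\min_{\theta} \sum_{i=1}^{N} \mathcal{L}_{\mathrm{IRL}}\big(\phi_i;\, \mathcal{T}_i^{\mathrm{test}}\big),
\qquad
\mathcal{L}_{\mathrm{IRL}}(\theta;\, \mathcal{D}) = -\sum_{\tau \in \mathcal{D}} \log p_{\theta}(\tau),
\qquad
p_{\theta}(\tau) \propto \exp\big(R_{\theta}(\tau)\big)
```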
SygInj05Fm
Physiological Signal Embeddings (PHASE) via Interpretable Stacked Models
[ "Hugh Chen", "Scott Lundberg", "Gabe Erion", "Su-In Lee" ]
In health, machine learning is increasingly common, yet neural network embedding (representation) learning is arguably under-utilized for physiological signals. This inadequacy stands in stark contrast to more traditional computer science domains, such as computer vision (CV) and natural language processing (NLP). For physiological signals, learning feature embeddings is a natural solution to data insufficiency caused by patient privacy concerns -- rather than share data, researchers may share informative embedding models (i.e., representation models), which map patient data to an output embedding. Here, we present the PHASE (PHysiologicAl Signal Embeddings) framework, which consists of three components: i) learning neural network embeddings of physiological signals, ii) predicting outcomes based on the learned embedding, and iii) interpreting the prediction results by estimating feature attributions in the "stacked" models (i.e., a feature embedding model followed by a prediction model). PHASE is novel in three ways: 1) To our knowledge, PHASE is the first instance of transferring neural networks to create physiological signal embeddings. 2) We present a tractable method to obtain feature attributions through stacked models. We prove that our stacked model attributions can approximate Shapley values -- attributions known to have desirable properties -- for arbitrary sets of models. 3) PHASE was extensively tested in a cross-hospital setting including publicly available data. In our experiments, we show that PHASE significantly outperforms alternative embeddings -- such as raw, exponential moving average/variance, and autoencoder -- currently in use. Furthermore, we provide evidence that transferring neural network embedding/representation learners between distinct hospitals still yields performant embeddings, and we offer recommendations for when transference is ineffective.
[ "Representation learning", "transfer learning", "health", "machine learning", "physiological signals", "interpretation", "feature attributions", "shapley values", "univariate embeddings", "LSTMs", "XGB", "neural networks", "stacked models", "model pipelines", "interpretable stacked models" ]
https://openreview.net/pdf?id=SygInj05Fm
https://openreview.net/forum?id=SygInj05Fm
ICLR.cc/2019/Conference
2019
{ "note_id": [ "BJetjhWyxN", "Hye0kEE907", "HyxUt7VqRQ", "S1xZdQVcRm", "H1rL7VqAQ", "ByxCez4qA7", "Skg7B-V5AQ", "rkliQbEqAX", "H1eEGZV9AX", "S1xzxx4q0m", "Skl3RJ4cR7", "HJxTNKXs3m", "Bkx_Q3Kq37", "rkg-azY9hX" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544653985414, 1543287782048, 1543287677822, 1543287657425, 1543287628516, 1543287286412, 1543287098636, 1543287074954, 1543287052320, 1543286761870, 1543286739719, 1541253428944, 1541213216280, 1541210808697 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper711/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper711/Authors" ], [ "ICLR.cc/2019/Conference/Paper711/Authors" ], [ "ICLR.cc/2019/Conference/Paper711/Authors" ], [ "ICLR.cc/2019/Conference/Paper711/Authors" ], [ "ICLR.cc/2019/Conference/Paper711/Authors" ], [ "ICLR.cc/2019/Conference/Paper711/Authors" ], [ "ICLR.cc/2019/Conference/Paper711/Authors" ], [ "ICLR.cc/2019/Conference/Paper711/Authors" ], [ "ICLR.cc/2019/Conference/Paper711/Authors" ], [ "ICLR.cc/2019/Conference/Paper711/Authors" ], [ "ICLR.cc/2019/Conference/Paper711/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper711/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper711/AnonReviewer2" ] ], "structured_content_str": [ "{\"metareview\": \"Authors present a technique to learn embeddings over physiological signals independently using univariate LSTMs tasked to predict future values. Supervised methods are them employed over these embeddings. Univariate approach is taken to improve transferability across institutions, and Shapley values are used to provide interpretable insight. The work is interesting, and authors have made a good attempt at answering reviewers' concerns, but more work remains to be done.\", \"pros\": [\"R1 & R3: Well written.\", \"R3: Transferrable embeddings are useful in this domain, and not often researched.\"], \"cons\": [\"R3: Method builds embeddings that assume that future task will be relevant to drops in signals. Authors confirm.\", \"R3: Performance improvement is marginal versus baselines. Authors essentially confirm that the small improvement is the accurate number.\", \"R2 & R3: Interpretability evaluation is not sufficient. Medical expert should rate interpretability of results. Authors did not include or revise according to suggestion.\"], \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Method to create unsupervised feature embeddings over physiological signals. Interesting study, but needs additional work.\"}", "{\"title\": \"Demarcation of changes for revision\", \"comment\": \"Most major changes are made in red. The appendix section is entirely new.\"}", "{\"title\": \"Rebuttal Part 4\", \"comment\": \"*\\u201d- Lack of description on experiment setup. The authors do not describe how they pre-trained the LSTMs to obtain Min^h, Auto^h and Hypox^h, which significantly hurts reproducibility. Also I couldn't find any description regarding train/test splits or cross validations, or size of the LSTM cells.\\u201d\\n\\nWe thank the reviewer for raising this important point. 
The number of samples in each training, validation, and test set varies depending on the label being evaluated - we have included these samples sizes in the Appendix: Section 6.1 Tables 3, 4, and 6. Additionally we have added a paragraph describing the methods for obtaining the labels in more detail in Section 6.1 as well. To better clarify our experimental setup, more details on the architectures of our LSTM and XGB models have been added in the Appendix: Section 6.2. As a final note, for reproducibility, we plan to release code pertinent to training the LSTM models, obtaining embeddings, predicting with XGB models, and model stacking feature attributions - submitted as a pull request to the SHAP github (https://github.com/slundberg/shap). We have indicated our intent to do so in the conclusion (Section 5 Paragraph 3). Additionally, we intend to release our embedding models, which we recommend for use in forecasting \\\"hypo\\\" predictions.\\n\\n*\\u201d- More description is necessary as to how Raw was used to train XGB. Was the entire sequence of 15 signals fed to XGB? \\u201c\\n \\nWe fed in 60 minutes of the 15 Raw signals concatenated together. To better clarify our experimental setup, more details on the architectures of our LSTM and XGB models have been added in the Appendix: Section 6.2. Additionally, we have a more in-depth description of the experimental setup for each experimental result in Appendix: Section 6.3 Figure 8, Section 6.4 Figure 10, and Section 6.5 Figure 11.\\n\\n*\\u201d- Y-axis of Figure 5 is not on the same scale. This makes it hard to intuitively understand the change of SaO2.\\u201d\\n\\nWe thank the reviewer for this feedback; however, because our primary aim is to evaluate Shapley values, of more importance to see changes in the signal rather than the absolute value. As such, we scale for each example, rather than across all examples. Additionally, we have moved Figure 5 to the Appendix: Section 6.5.1 and added many more examples and a brief discussion into the feature attributions provided for true positives, true negatives, and random samples with a fully neural network model (LSTM->MLP) and a hybrid one (LSTM->XGB).\"}", "{\"title\": \"Rebuttal Part 3\", \"comment\": \"*\\u201d- The claim for transferable embedding is further weakened by Figure 4\\u2026 hurt the performance of XGB.\\u201d\\n \\nWe thank the reviewer for bringing up an excellent point. In the revised paper, we show that Reviewer #1\\u2019s suggestion - fine tuning LSTM model (i.e., Model 4p) transferred from the source hospital - can remove this concern. Figure 3 shows that even in this very challenging setting (i.e., extracting information from SaO2 signals in ICU), PHASE can extract relevant information from SaO2 signals. We have applied this clarification to Section 4.2.2.\\n \\n*\\u201d- Evaluating the interpretation of the model is weak\\u2026 as well.\\u201d\\n\\nWe thank the reviewer for bringing up a great point. Figure 5 (now removed) was primarily meant to serve as a sanity check that the proof for feature attributions we presented is correct. Rather than explore anecdotal examples, our aim in the revision is to quantitatively ensure our local feature attributions are correct on a straightforward univariate model (corresponding to model 9 in Figure 2a). Our quantitative validation is a standard ablation/perturbation test, in a similar fashion to other interpretability evaluations: Arras et al. (arXiv 2017), Hooker et al. (arXiv 2018), Ancona et al. (ICLR 2018), and Samek et al. 
(IEEE 2017). The test consists of the following. For a single sample, we sort the input features according to their attributions, and iteratively impute each feature by the mean of the last two minutes. In order to ensure our interpretations generalize, we evaluate on the test set. Additionally, we use the top 1000 positive samples sorted by the predicted probability of hypoxemia (true positives). Then we evaluate the mean predicted probability across all samples, which will start high (for true positives) and monotonically decrease as we impute features, leading to an overall decrease in the average probability. Good interpretability methods should result in an initial steepness because the most \\\"important\\\" hypoxemic pathology is imputed first similar to Ancona et al. (2018).\\n\\nHowever, it is difficult to validate our model stacking feature attributions because no other attribution methods exist for stacks of neural networks and trees. Instead, we replace the tree part of our model stack (LSTM->XGB) with a multi-layer perceptron, which creates a fully neural network model stack (LSTM->MLP) that we can directly apply an existing attribution method to. Then, we would hope to see that our novel model stacking feature attribution method (DeepSHAP + Independent TreeSHAP on LSTM->XGB) provides attributions that performs similarly or better to the pre-existing method (DeepSHAP on LSTM->MLP) on the ablation test (shown in Figure 4).\\n\\nLastly, we augment the previous quantitative assessment with a qualitative one. We look at true positive, true negative, and randomly selected examples in Appendix: Section 6.5.1, accompanied by a brief discussion to visually demonstrate that our novel method for feature attributions matches with a more conventional approach applied to a fully neural network model.\"}", "{\"title\": \"Rebuttal Part 2\", \"comment\": \"*\\u201d- The authors claim that transferred PHASE embeddings significantly outperform EMA or Raw. \\u2026 the gap is not that large.\\u201d\\n \\nWe thank the reviewer for bringing up a great point. We argue that like other prediction problems in ML, the percentage improvement is often more relevant than absolute difference in AP. We believe that our results showing the improvement of PHASE over other representations are significant for the following reasons:\\n\\n1.\\tOur intention was to show that PHASE embeddings improve over state-of-the-art prediction models even when using the same dataset. However, the largest clinical impact will likely come from sharing these embeddings, allowing embeddings from a large dataset to be used for a training problem in a smaller dataset. In these situations the AP gain of 0.04 is just a lower bound of the improvement we get from the method, much larger gains are possible when people use these to add power to smaller datasets.\\n2.\\tIn Lundberg et. al. (2018), the best performing model (analogous to model 4 in Figure 2) is able to achieve higher predictive accuracy than practicing anesthesiologists in predicting hypoxemia and increase doctors\\u2019 ability to forecast hypoxemia by providing Shapley value attributions. With PHASE, we gain further improvement over Prescience (up to 11% improvement in AP - Figure 2f: Model 12 compared to Model 4) and provide Shapley value attributions for our stacked models, leaving little reason to prefer Prescience over PHASE in a hospital. 
In health, this level of improvement in predictive performance may impact a number of patients over long periods of time, as pointed out in Lundberg et al. (2018).\\n3.\\tThe relative improvement of being able to increase precision across all recalls (by 5.6% on average over EMA and 4.6% on average over Raw) would mean substantially better retrieval of adverse outcomes, beneficial in the face of alarm fatigue and for patient care. Additionally, the absolute improvement of 0.02 is fairly large given that AP ranges between 0 and 1.\\n4.\\tFinally, Model 12 is shown to be statistically significantly better than competing models at a p-value of 0.01 based on one hundred bootstraps of the test set with adjusted pairwise comparisons via ANOVA with Tukey\\u2019s HSD test. Moreover, the p-values comparing Model 12 to all others were significant at a much lower threshold than 0.01 (often Model 12 is significantly better even at a threshold of 1e-10). We have added these p-values for Figures 2 and 3 to the Appendix: Section 6.3 Tables 7 and 8 as well as Section 6.4 Tables 9 and 10.\\n \\n*\\u201d- More importantly, the fact that model 10 and model 12 show similar performance is not very surprising. \\u2026 does not have a strong ground. \\u201c\\n\\nWe thank the reviewer for bringing up a great point. As Reviewer #2 said \\u201cthe physical distance between hospital should not be mentioned as a way to compare hospitals,\\u201d we removed the discussion of the distance between hospitals 0/1. Instead, one major difference between hospitals 0/1 is that one hospital is an academic medical center, whereas the other is a level 1 trauma center. This causes significant differences in the patient populations, which reflects in distributions that we illustrate in Appendix Section 6.1: Figure 6. Additionally, we report the top ten diagnoses from each hospitals in Appendix: Section 6.1. We find no overlap apart from \\u201cCALCULUS OF KIDNEY\\u201d between hospitals 0 and 1. One such distributional shift is that Hospital 0 data had roughly 58% female patients and hospital 1 data had roughly 39% female patients. Other differences include the fact that hospital 1 serves more young patients than hospital 0 and the fact that only hospital 1 deals with ASA codes of VI. \\n\\nFor medical research, transferability results across two distinct hospitals have been considered very important (Wiens et. al. JAMIA 2014, Choi et. al. KDD 2016, Lee et. al. IEEE 2012). Our results imply one potential model where a large medical center in a given state shares representation learners with small neighboring medical centers, boosting the smaller medical center\\u2019s capability to predict adverse outcomes without risking patient privacy. Given that there is no prior work examining transference of embedding functions, even transference under a small domain shift is an important first step towards medical credibility.\"}", "{\"title\": \"Rebuttal Part 1\", \"comment\": \"We would like to thank the reviewers for their careful consideration of this manuscript and many suggestions for improvement. In response to the reviewers\\u2019 comments we have made changes that we feel substantially improve the manuscript and address the reviewers\\u2019 concerns, which we have responded to point-by-point.\\n\\n*\\u201d- The authors claim PHASE learns signal embeddings that are transferable. \\u2026 to predict a very relevant task.\\u201d\\n\\nWe thank the reviewer for bringing up an excellent point. 
When we considered prediction problems for our paper, we focused on largely two aspects: (i) clinical importance, and (ii) real-time prediction problems, which are an appropriate evaluation setting for time-series embedding methods. The outcomes we considered - hypoxemia, hypotension, and hypocapnia - are representative adverse real-time events caused by surgery complications and are a significant cause of anesthesia-related complications (Lundberg et al. Nature BME 2018, Barak et al. Sci. World. Journal, 2015; Curley et al. Crit. Care. Med., 2010). As further justification, perioperative adverse outcomes are often due to signals that are too low in terms of magnitude (Exclamado et. al. The Laryngoscope 1989). Therefore training models on the lower boundaries of signals (\\u201chypo\\u201d) would, in all likelihood, cover a non-trivial group of important adverse outcomes. Future work training \\u201chyper\\u201d models as well as working with physicians to identify other such groupings of physiological prediction tasks would certainly be meaningful as well.\\n\\nFinally, to further address the reviewer\\u2019s comments, we have mitigated claims of PHASE being unsupervised and instead called our LSTM models \\u201cpartially supervised\\u201d throughout the entirety of the manuscript. We denote \\u201cpartially supervised\\u201d to mean LSTMs trained with prediction tasks related to the final downstream prediction. Furthermore, we have refined the discussion in Sections 4.2.1 Paragraph 2 and 4.2.2 Paragraph 3 to emphasize that completely unsupervised LSTMs (e.g., autoencoders) are insufficient for downstream \\u201chypo\\u201d predictions, which are clinically important perioperative outcomes. In fact, on our datasets, we found that closeness in the LSTM prediction tasks to the ultimate downstream prediction tasks is beneficial to performance as well as transference. In order to change the message of our paper, we have added this to our conclusion as well (Section 5 Paragraph 2). As a last note, we recommend our models for use in forecasting \\\"hypo\\\" predictions, a statement we have added to the conclusion (Section 5 end of Paragraph 3).\"}", "{\"title\": \"Rebuttal Part 3\", \"comment\": \"*\\u201cAny reason why more conventional attention mechanisms have not been looked at for interpretability?\\u201d\\n \\nWe thank the reviewer for addressing a point of confusion. There are two primary reasons why we did not use attention mechanisms for interpretability. \\n\\n1.\\tThe main reason is because our aim is to ensure interpretability for arbitrary downstream models. When sharing embeddings you don\\u2019t want to force the downstream user to use a specific model type. We enable a broad set of stacked models, where the model stack we propose combines a tree model (XGB) with neural network embeddings as features. Because attention mechanisms are specific to particular kinds of neural networks, they cannot provide attributions for the entire stack. We illustrate that Shapley values, which naively have exponential computational complexity, can be obtained in polynomial time for stacked models of arbitrary combinations of trees and neural network components.\\n2.\\tWe chose Shapley values, because of their theoretical basis in game theory. As shown in \\u201cA Unified Approach to Interpreting Model Predictions\\u201d (Lundberg et al. 
NIPS 2017), the Shapley values are the only solution that maintain three desirable properties for feature attributions: local accuracy, missingness, and consistency.\\n \\n*\\u201cOverall, I have found the problem addressed here interesting. However, I think that the paper needs work, both on the presentation of the methodology and also on the presentation of more convincing experimental arguments.\\u201d\\n\\nWe thank the reviewer for helping us to improve our paper through better descriptions of our data, of the model architectures, of the feature learning process, and of the interpretability. To support our argument that PHASE can provide interpretable explanations, we presented quantitative evaluation results based on a standard ablation test.\"}", "{\"title\": \"Rebuttal Part 2\", \"comment\": \"*\\u201dIs the data coming from the same type of operating rooms ... provide details on the type of patients that are being monitored.\\u201d\\n \\nWe thank the reviewer for addressing a point of confusion. We have provided details on the type of patients being monitored for hospitals 0/1 as well as hospital P in Appendix: Section 6.1 Figures 6 and 7. The distributions of hospitals 0/1 are far closer to each other than they are to hospital P. Hospitals 0/1 differ primarily in the fact that one is a level 1 trauma center, whereas the other is a university medical center. \\n\\nHospital P data is obtained from intensive care units, whereas hospital 0/1 data is obtained from operating rooms. Another stark difference is that Hospital P data contains a great deal of data from newborns. Although these datasets are quite different, using ICU data still makes sense for two reasons. The first reason is simply that they may still capture something useful. Similarly to computer vision, the learned LSTM embeddings may have lower-level and higher-level representations. Although transferring the specific (higher-level) representations may not be useful, transferring some of the lower level ones may be. The second reason is that even if the fixed LSTMs do not create performant embeddings, they may still contain useful information like model architectures and hyperparameters that are known to be good for predicting hypoxemia, for example. Additionally, fine-tuning has been shown to outperform of match neural networks trained from scratch, with other benefits such as robustness to size of training sets (Tajbakhsh et. al. IEEE 2016).\\n\\n*\\u201cIt is quite hard to argue from what\\u2019s presented in 4.3.3 that the proposed approach is interpretable. Can the authors explain how a visual inspection of Figure 5 \\u201cmakes sense\\u201d as stated in the paper? What is the point that\\u2019s being made here?\\u201c\\n\\nWe thank the reviewer for bringing up a great point. Figure 5 (now removed) was primarily meant to serve as a sanity check that the proof for feature attributions we presented is correct. Rather than explore anecdotal examples, our aim in the revision is to quantitatively ensure our local feature attributions are correct quantitatively on a straightforward univariate model (corresponding to model 9 in Figure 2a). Our quantitative validation is a standard ablation/perturbation test, in a similar fashion to other interpretability evaluations: Arras et al. (arXiv 2017), Hooker et al. (arXiv 2018), Ancona et al. (ICLR 2018), and Samek et al. (IEEE 2017). The test consists of the following. 
For a single sample, we sort the input features according to their attributions, and iteratively impute each feature by the mean of the last two minutes. In order to ensure our interpretations generalize, we evaluate on the test set. Additionally, we use the top 1000 positive samples sorted by the predicted probability of hypoxemia (true positives). Then we evaluate the mean predicted probability across all samples, which will start high (for true positives) and monotonically decrease as we impute features, leading to an overall decrease in the average probability. Good interpretability methods should result in an initial steepness because the most \\\"important\\\" hypoxemic pathology is imputed first -- similar to Ancona et al. (2018).\\n\\nHowever, it is difficult to validate our model stacking feature attributions because no other attribution methods exist for stacks of neural networks and trees. Instead, we replace the tree part of our model stack (LSTM->XGB) with a multi-layer perceptron, which creates a fully neural network model stack (LSTM->MLP) that we can directly apply an existing attribution method to. Then, we would hope to see that our novel model stacking feature attribution method (DeepSHAP + Independent TreeSHAP on LSTM->XGB) provides attributions that performs similarly to the pre-existing method (DeepSHAP on LSTM->MLP) on the ablation test (shown in Figure 4).\\n\\nLastly, we augment the previous quantitative assessment with a qualitative one. We look at true positive, true negative, and randomly selected examples in Appendix: Section 6.5.1, accompanied by a brief discussion to visually demonstrate that our novel method for feature attributions matches with a more conventional approach applied to a fully neural network model.\"}", "{\"title\": \"Rebuttal Part 1\", \"comment\": \"We would like to thank the reviewers for their careful consideration of this manuscript and many suggestions for improvement. In response to the reviewers\\u2019 comments we have made changes that we feel substantially improve the manuscript and address the reviewers\\u2019 concerns, which we have responded to point-by-point.\\n\\n*\\u201cThe description of this work needs more details\\u2026 Is it something else? \\u201c\\n\\nWe thank the reviewer for the beneficial feedback. In terms of the loss functions. we utilize MSE loss for regression objectives (Min and Auto), whereas we use binary cross entropy for classification objects (Hypox). The LSTM autoencoder is a seq2seq model with two layers with 200 LSTM cells each. To better clarify our experimental setup, hyperparameter and model architecture details on the architectures of our LSTM and XGB models have been included in the Appendix: Section 6.2.\\n\\n*\\u201d3) They propose a way to estimate interpretability \\u2026 I recommend shedding some light on the structure of this model that generates these Shapley values.\\u201d\\n \\nWe thank the reviewer for highlighting this point of confusion. Our objective is to validate our model stacked feature attributions on a straightforward univariate model (corresponding to model 9 in Figure 2a), which has been clarified in Section 4.2.3 along with a new quantitative evaluation of interpretability. Appendix: Section 6.5 Figure 11 illustrates the model setup for the interpretability analysis.\\n \\n*\\u201dThe experimental result section also needs work in my opinion \\u2026 hyper-paremeter tuning?\\u201d\\n \\nWe thank the reviewer for raising this important point. 
Although the number of patients is reported in Table 1, the number of samples in each training, validation, and test set varies depending on the label being evaluated (which we have included in the Appendix: Section 6.1 Tables 3, 4, and 6). Additionally, we have added a paragraph describing the labelling methodology in more detail in Section 6.1 as well. There was minimal hyperparameter tuning, primarily due to the number of models trained (90+ LSTMs and 108+ XGB models). Instead, we have now included the final hyperparameter settings utilized in Appendix: Section 6.2.\\n \\n*\\u201cI have found the \\u201ctransference\\u201d arguments a bit weak. First of all, the physical distance between hospitals should not be mentioned as a way to compare \\u201chospitals\\u201d.\\u201d\\n\\nWe thank the reviewer for bringing up a great point. Our aim was simply to suggest that the distance between the hospitals might imply a domain shift, without revealing the location of hospitals 0 and 1. We have removed this point. Instead, we describe in detail the distributions of statistics in each hospital - one being operating room data from a level 1 trauma center, one being operating room data from a university medical center, and the third being waveform data from an ICU (in Appendix: Section 6.1, Figures 6 and 7). Additionally, we report the top ten diagnoses from each hospital in Appendix: Section 6.1. We find no overlap apart from \\u201cCALCULUS OF KIDNEY\\u201d between hospitals 0 and 1.\\n\\n*\\u201cHow did the authors select these features shown on Figure 2? MIMIC has more features than this. Why were these additional features discarded?\\u201d\\n\\nWe thank the reviewer for the question. To clarify Figure 5 (previously Figure 2), the features are from hospitals 0/1, not hospital P (MIMIC). The hospital P dataset has a set of features that covers 7 (out of 15) features collected in hospitals 0/1. Therefore, incorporating MIMIC data into our experiments enables us to test PHASE\\u2019s ability to transfer relevant information from physiological signals in a challenging situation where hospitals have different features.\\n\\nHere, we chose to simplify our analysis and focus on hypoxemia, because (i) most of the signal for forecasting hypoxemia comes from SaO2 (which is also a feature that is consistently measured in hospital P), and (ii) we can test PHASE in a very challenging setting. Additionally, this experimental setting has the added benefit of investigating whether the hospital P embeddings have interaction effects with hospital 0/1 embeddings.\"}", "{\"title\": \"Rebuttal Part 2\", \"comment\": \"*\\\"- Differences between PHASE and EMA are statistically significant but \\u2026 significant way.\\\"\\n \\nWe thank the reviewer for bringing up a great point. We argue that, as in other prediction problems in ML, the percentage improvement is often more relevant than the absolute difference in AP. We believe that our results showing the improvement of PHASE over other representations are significant for the following reasons:\\n\\n1.\\tOur intention was to show that PHASE embeddings improve over state-of-the-art prediction models even when using the same dataset. However, the largest clinical impact will likely come from sharing these embeddings, allowing embeddings from a large dataset to be used for a training problem in a smaller dataset. In these situations, the AP gain of 0.04 is just a lower bound on the improvement we get from the method; much larger gains are possible when people use these embeddings to add power to smaller datasets.\\n2.\\tIn Lundberg et al. (2018), the best performing model (analogous to model 4 in Figure 2) is able to achieve higher predictive accuracy than practicing anesthesiologists in predicting hypoxemia and increase doctors\\u2019 ability to forecast hypoxemia by providing Shapley value attributions. With PHASE, we gain further improvement over Prescience (up to 11% improvement in AP - Figure 2f: Model 12 compared to Model 4) and validate a method to obtain Shapley value attributions for our stacked models, leaving little reason to prefer Prescience over PHASE in a hospital. In health, this level of improvement in predictive performance may impact a number of patients over long periods of time, as pointed out in Lundberg et al. (Nature BME 2018).\\n3.\\tThe relative improvement of being able to increase precision across all recalls by roughly 2-10% would mean substantially better retrieval of adverse outcomes, beneficial in the face of alarm fatigue and for patient care. Additionally, the absolute improvement of 0.04 is fairly large given that AP ranges between 0 and 1.\\n4.\\tFinally, Model 12 is shown to be significantly better than competing models at a p-value of 0.01 based on one hundred bootstraps of the test set with adjusted pairwise comparisons via ANOVA with Tukey\\u2019s HSD test. Moreover, the p-values comparing Model 12 to all others were significant at a much lower threshold than 0.01 (often Model 12 is significantly better even at a threshold of 1e-10). We have added the p-values for Figures 2 and 3 to the Appendix: Section 6.3 Tables 7 and 8 as well as Section 6.4 Tables 9 and 10.\\n \\n*\\\"- I appreciate the use of XGBoost \\u2026 of fine tuning the base model.\\\"\\n \\nWe thank the reviewer for the excellent suggestion. We have tried fine-tuning our LSTM models, with a presentation of the results included in Figures 2 and 3 (MinAtoB denotes that the best performing LSTM model trained on hospital A data is then trained on hospital B data until convergence, for each feature). As one might expect, fine-tuned models (14) generally perform on par with or better than just using target hospital data (i.e., without transference) (10). Additionally, we have modified the discussion in the results, Section 4.2.1 Paragraph 3, to recommend fine-tuning as a way to repurpose models in the face of transferring across very different hospitals.\"}", "{\"title\": \"Rebuttal Part 1\", \"comment\": \"We would like to thank the reviewers for their careful consideration of this manuscript and many suggestions for improvement. In response to the reviewers\\u2019 comments we have made changes that we feel substantially improve the manuscript and address the reviewers\\u2019 concerns, which we have responded to point-by-point.\\n\\n*\\\"The authors do not explicitly state \\u2026 reduced.\\\"\\n\\nWe thank the reviewer for the excellent point. We intend to release code pertinent to training the LSTM models, obtaining embeddings, predicting with XGB models, and model stacking feature attributions - submitted as a pull request to the SHAP github (https://github.com/slundberg/shap). We have indicated our intent to do so in the conclusion (Section 5 Paragraph 3). Additionally, we intend to release our embedding models, which we recommend for use in forecasting \\\"hypo\\\" events.\\n \\n*\\\"However, I do have a few concerns about the paper, listed below:\\n- It might not be fair to truly call this an unsupervised model \\u2026 useful for transfer learning.\\\"\\n\\nWe thank the reviewer for the great point. When we considered prediction problems for our paper, we focused largely on two aspects: (i) clinical importance, and (ii) real-time prediction problems, which are an appropriate evaluation setting for time-series embedding methods. Although predicting mortality would make PHASE a purely unsupervised method, mortality is neither a real-time outcome nor is it reliably measured in our data set. The outcomes we considered - hypoxemia, hypotension, and hypocapnia - are representative adverse real-time events caused by surgery complications and are a significant cause of anesthesia-related complications (Barak et al. Sci. World Journal, 2015; Curley et al. Crit. Care Med., 2010). Predicting these events in advance has been considered a promising approach to enable proactive intervention (Lundberg et al. Nature BME 2018).\\n\\nIn order to address the reviewer\\u2019s great point, we created a simulated \\u201cunsupervised\\u201d setting - when predicting each event, we excluded the corresponding physiological signal from our features. For example, we assumed that SaO2 is not recorded when predicting hypoxemia. Under this setting, we must rely on the remaining signals to predict hypoxemia. This setting is a more unsupervised evaluation in the sense that our outcome is not derived from a signal we create an embedding for. As our results show (Section 6.3; Figure 9), PHASE\\u2019s outperformance is consistent in this setting for hypocapnia and hypotension. For hypoxemia, all representations perform poorly because predicting hypoxemia heavily relies on SaO2, leaving little signal for the remaining features.\\n\\nFinally, to further address the reviewer\\u2019s comments, we have softened claims of PHASE being unsupervised and instead called our LSTM models \\u201cpartially supervised\\u201d throughout the entirety of the manuscript. We use \\u201cpartially supervised\\u201d to mean LSTMs trained with prediction tasks related to the final downstream prediction. Furthermore, we have refined the discussion in Sections 4.2.1 and 4.2.2 to emphasize that completely unsupervised LSTMs (e.g., autoencoders) are insufficient for downstream \\u201chypo\\u201d predictions, which are clinically important perioperative outcomes. In fact, on our datasets, we found that closeness of the LSTM prediction tasks to the ultimate downstream prediction tasks is beneficial to performance as well as transference. To reflect this change in the message of our paper, we have added this point to our conclusion as well (Section 5 Paragraph 2).\"}", "{\"title\": \"Transfer learning for physiological signals in the OR and ICU\", \"review\": \"The authors present a new method for learning unsupervised embeddings of physiological signals (e.g. time series data) in a healthcare setting. The primary motivation of their paper is transfer learning - the embeddings created by their approach are able to generalize to other hospitals and healthcare settings.\\n\\nOverall I did like this paper. I found it to be easy to read, well motivated, and addressing an important problem in the healthcare domain. 
As a researcher in this area, I can attest that we are all using our own \\\"siloed\\\" data and do not generally have access to large pre-trained models. I hope that others will produce these kinds of models for the community to use. The authors do not explicitly state that they plan to release their code and pre-trained models, but I sincerely hope that is their intent. If they do not plan to do this, then the impact of this work is dramatically reduced. \\n\\nHowever, I do have a few concerns about the paper, listed below:\\n\\n- It might not be fair to truly call this an unsupervised model. The labels used for evaluation are thresholds on the signals themselves (e.g. SaO2 < 92%), so the \\\"unsupervised\\\" model actually receives some form of supervision, at least using the current evaluation method. Using a truly different prediction task not directly based on the physiological signals (e.g. mortality, complication during surgery, etc.) would provide a cleaner example of unsupervised embeddings that are useful for transfer learning.\\n\\n- Differences between PHASE and EMA are statistically significant but unlikely to be clinically meaningful - the largest absolute difference in AP is 0.04, and most are much smaller than this. It's unclear if the performance gains enjoyed by PHASE would meaningfully change clinical decision making in any significant way.\\n\\n- I appreciate the use of XGBoost due to its impressive Kaggle performance, but it strikes me as odd that the authors did not try to fine-tune their base model, as that is standard practice for transfer learning. The successes they point to in CV and NLP all use a fine-tuning approach, so the evaluation seems incomplete without a performance assessment of fine-tuning the base model.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Well-motivated, well-written, but some issues with the experiments.\", \"review\": \"Summary of the paper:\\nThis paper proposes PHASE, a framework to learn embeddings for physiological signals from medical records, which can be used in downstream prediction tasks, possibly across domains (i.e. different patient distributions). The authors employ separate LSTMs for each signal channel that are trained to predict the minimum value of the signal in a fixed future time window (5 minutes in this paper). After training the LSTMs, the learned signal embeddings are fed to gradient boosted trees for a specific prediction task (e.g. predicting whether hypoxemia will occur in 5 minutes). Once the LSTMs are trained, they can be re-used for another dataset; the LSTMs are fixed, and generate embeddings that are fed to new trainable gradient boosted trees for performing a similar task. The authors also combine existing attribution methods (DeepSHAP and Independent TreeSHAP) to provide some explanation of PHASE. The authors use three different datasets to test PHASE's prediction performance, transferability of the embeddings, and interpretation.\\n\\nPros:\\n- The paper is well-motivated, well-organized and clearly written. The reading experience was smooth.\\n- Given the importance of physiological signals in ICU settings, transferable embeddings can be an important technique in practice.\\n- As the authors claim, I am not aware of any notable prior work on transferable physiological signal embeddings. The authors tackle a relatively unexplored territory.\\n\\nIssues:\\n- The authors claim PHASE learns signal embeddings that are transferable. However, the authors train the embeddings to predict the minimum value within the next five time steps, because the downstream tasks are all predicting whether a certain signal goes below some threshold (\\\"hypo\\\"xemia, \\\"hypo\\\"capnia, \\\"hypo\\\"tension). This means the authors designed the embedding learning process with a priori knowledge of the downstream tasks, which significantly weakens their claim that PHASE learns transferable embeddings. Word embeddings trained on Wikipedia, or ConvNets trained on ImageNet, are not designed to be used in a specific type of downstream task. What PHASE demonstrates is basically that \\\"hypo\\\"xxxx predictions can be accurately made by pre-training the embeddings to predict a very relevant task.\\n- The authors claim that transferred PHASE embeddings significantly outperform EMA or Raw. But I wouldn't call a 0.005-0.02 AP improvement \\\"significant\\\". Model 12 in Figure 3 shows better performance than models 2 and 4, but the gap is not that large.\\n- More importantly, the fact that model 10 and model 12 show similar performance is not very surprising. The two hospitals are in the same city, only miles away. Naturally the distribution of the patients would not be too different. Given this, claiming that PHASE embeddings are transferable does not have strong grounding.\\n- The claim for transferable embeddings is further weakened by Figure 4. Model 1^p in Figure 4 clearly performs worse than Raw, which means embeddings learned from a significantly different setting (hospital P) are actually making it harder for XGB than simply looking at raw signals. If PHASE were learning robust embeddings, then the learned embeddings should at least not hurt the performance of XGB.\\n- Evaluating the interpretation of the model is weak. All the authors did was pick four examples and provide a qualitative explanation. And they do not even describe whether this interpretation is from model 9 or 10. It would have been much better if at least one medical expert took a look at more than a few examples. In the current form, we cannot be sure if the model is using the SaO2 signal in a medically meaningful way. Also, if this is the interpretation of model 10 or 12, then we should look at the attributions for other signals as well.\\n- Lack of description of the experiment setup. The authors do not describe how they pre-trained the LSTMs to obtain Min^h, Auto^h and Hypox^h, which significantly hurts reproducibility. Also I couldn't find any description regarding train/test splits or cross validation, or the size of the LSTM cells.\\n- More description is necessary as to how Raw was used to train XGB. Was the entire sequence of 15 signals fed to XGB?\\n- The y-axis of Figure 5 is not on the same scale. This makes it hard to intuitively understand the change of SaO2.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"This paper presents an approach to produce embeddings for physiological signals that are interpretable.\", \"review\": \"The authors claim contributions in three areas:\\n1) Learning representations on physiological signals. The proposed approach uses LSTMs with a loss function that aims at predicting the next five minutes of the physiological signals. Based on their experiments, using this criterion outperforms \\n LSTM autoencoder approaches that are tuned to reconstruct the original signals. The description of this work needs more details. It would be good to have clarity on these loss functions and also on the architecture of the LSTM autoencoder that is claimed here. Is it a standard seq2seq model? Is it something else?\\n\\n2) They use the hidden state of the LSTMs as a representation of the input signals. From this representation, they have set up a set of supervised/predictive tasks to measure the efficacy of the representation. For this, they used gradient boosting machines. \\n\\n3) They propose a way to estimate interpretability by tracking the impact of the input data on the predictions using a model-agnostic approach based on Shapley values. I have found this part of the paper particularly obscure. I recommend shedding some light on the structure of this model that generates these Shapley values. \\n\\nThe experimental result section also needs work in my opinion. First of all, the authors may want to better describe the data used. How many patients are in this set? How was the data partitioned for training, testing, validation? Any hyper-parameter tuning? I have found the \\u201ctransference\\u201d arguments a bit weak. First of all, the physical distance between hospitals should not be mentioned as a way to compare \\u201chospitals\\u201d. How did the authors select these features shown on Figure 2? MIMIC has more features than this. Why were these additional features discarded? Is the data coming from the same type of operating rooms in the case of hospitals 0 and 1? I am somewhat skeptical about the transfer of embeddings learned in an ICU setting to an OR setting. It would be great to provide details on the type of patients that are being monitored. \\n\\nIt is quite hard to argue from what\\u2019s presented in 4.3.3 that the proposed approach is interpretable. Can the authors explain how a visual inspection of Figure 5 \\u201cmakes sense\\u201d as stated in the paper? What is the point that\\u2019s being made here? Any reason why more conventional attention mechanisms have not been looked at for interpretability?\\n\\nOverall, I have found the problem addressed here interesting. However, I think that the paper needs work, both on the presentation of the methodology and also on the presentation of more convincing experimental arguments.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
SkGH2oRcYX
DEEP ADVERSARIAL FORWARD MODEL
[ "Morgan Funtowicz", "Tomi Silander", "Arnaud Sors", "Julien Perez" ]
Learning world dynamics has recently been investigated as a way to make reinforcement learning (RL) algorithms more sample efficient and interpretable. In this paper, we propose to capture an environment's dynamics with a novel forward model that leverages recent works on adversarial learning and visual control. Such a model estimates future observations conditioned on the current ones and other input variables, such as actions taken by an RL agent. We focus on image generation, which is a particularly challenging topic, but our method can be adapted to other modalities. More precisely, our forward model is trained to produce realistic observations of the future, while a discriminator model is trained to distinguish between real images and the model’s predictions of the future. This approach removes the need to define an explicit loss function for the forward model, which is how this class of problems is currently solved. As a consequence, our learning protocol does not have to rely on an explicit distance such as the Euclidean distance, which tends to produce unsatisfactory predictions. To illustrate our method, empirical qualitative and quantitative results are presented on a real driving scenario, along with qualitative results on the Atari game Frostbite.
[ "forward model", "adversarial learning" ]
https://openreview.net/pdf?id=SkGH2oRcYX
https://openreview.net/forum?id=SkGH2oRcYX
ICLR.cc/2019/Conference
2019
{ "note_id": [ "HkeDLTW-lV", "Byl3zi5K2X", "BklJKsfYnQ", "BygIV9GUnQ" ], "note_type": [ "meta_review", "official_review", "official_review", "official_review" ], "note_created": [ 1544785230682, 1541151507529, 1541118839469, 1540921902228 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper710/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper710/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper710/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper710/AnonReviewer1" ] ], "structured_content_str": [ "{\"metareview\": \"The paper presents an action-conditioned video prediction method that combines losses from the prior literature, such as perceptual, adversarial, and InfoGAN-type losses. The reviewers point out the lack of novelty in the formulation, as well as the lack of experiments that would verify its usefulness in model-based RL. There is no rebuttal, thus no grounds for discussion or acceptance.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"novelty not well justified\"}", "{\"title\": \"Review for Deep Adversarial Forward Model\", \"review\": \"Summary: Model-based RL methods that work on pixel-based environments tend to use forward models trained with a pixel-wise loss. Rather than using a pixel-wise loss for an action-conditioned video prediction model (\\\"Forward Model\\\"), they use an adversarial loss combined with a mutual-information loss (from InfoGAN) and a content loss (based on differences in convnet features of a VGG network, rather than pixels). They run experiments on video-action sequences collected from an Atari game (Frostbite), and on a Udacity driving dataset.\\n\\nPros: The introduction and related work section is very well written, and the motivation for why one should try an adversarial loss for forward models is clear.\\n\\nWhile I think this work has potential, this paper is clearly not ready for publication, and below are a few suggestions on what I think the authors need to do to improve the work:\\n\\n(1) The authors emphasize novelty, and being \\\"first\\\", a few times in the paper, but fail to mention the large body of existing work on video prediction (e.g. [1]), much of which also used triplet or adversarial losses. Sure, those works focus on video prediction, while this work focuses on building a \\\"forward model\\\" and is supposed to be for model-based RL, but this work has not performed any model-based RL experiments, so from my point of view, it is a video-prediction model conditioned on an action input. Regardless, I believe the approach and results should be compared to existing work on video prediction, and similarities and differences to existing approaches should be highlighted. Adding an action-conditioned element to existing video-prediction techniques is also fairly simple.\\n\\n(2) From reading the intro/related work section, this work is clearly motivated in the direction of model-based RL, and the authors have already used this model for Frostbite. If this method is useful for model-based RL, I would expect to see experimental results for RL, at least for Frostbite (rather than just the training loss in Table 1). Rather than focusing on saying this method is the first to use a triplet loss, or the first to use an adversarial loss for forward models, I am much more interested in seeing a forward model that works well for RL tasks, since that's the point, right?\\n\\nAlthough the work is promising, I can only give it a score of 4 at the moment. If the author fixes the writing to include a detailed discussion of the video prediction literature, with good quantitative and qualitative comparisons to existing methods, that is worth 1 extra point. If the author has good results on using this forward model on environments that have previously used older forward models (such as the Atari environments in [2] or CarRacing/VizDoom in [3]), and presents those results in a satisfactory way, that may increase my score by another 1-2 points depending on the depth of the experiments. Currently the paper is only < 7 pages, so I believe there is room for more substance.\\n\\nMinor points:\\n- in the related work section, it should be f_{theta} not f_theta\\n\\n[1] Denton et al., \\\"Unsupervised Learning of Disentangled Representations from Video\\\" (NIPS 2017). https://arxiv.org/abs/1705.10915\\n[2] https://arxiv.org/abs/1704.02254\\n[3] https://arxiv.org/abs/1803.10122\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Straightforward application of existing techniques to forward modeling; experiments & writing could be improved\", \"review\": \"This paper proposes to train a forward model used in reinforcement learning (RL) with task-independent losses. The idea is to use an adversarial loss, an InfoGAN loss, and a perceptual loss to replace the task-specific losses in RL.\\n\\nHowever, the experiments did not show any benefits for the RL tasks. While it is possible that the improved prediction in terms of the Euclidean distance could lead to better results for RL, it is better to verify this directly. \\n\\nMany style transfer methods can be modified to solve the problem considered in the paper. Some works on conditional GANs can also be employed. However, no baselines are compared in the experiments. \\n\\nThe notations in Section 3 change from one sub-section to another. It is hard to obtain a coherent understanding of the proposed approach. \\n\\nOverall, the paper identifies a key component of RL, forward modeling, and aims to improve the solution to that component. However, the proposed approach is a straightforward application of existing techniques to this problem. Both the writing and the experiments could be strengthened, per the suggestions above.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Interesting but not novel; could be better evaluated.\", \"review\": \"This paper describes an approach for training conditional future frame prediction models, where the conditioning is with respect to the current frame and additional inputs - specifically actions performed in a reinforcement learning (RL) setting.\\n\\nThe authors suggest that one can predict future frames from a vector comprised of an observation encoding and an action. To train the model, they suggest using a linear combination of three different losses: (1) an adversarial loss that encourages the generated sample to look similar to training data, (2) an InfoGAN-inspired loss that is supposed to maximise mutual information between the conditioning (e.g. action) and the generated sample, and (3) a content loss, taken to be the mean-squared error of the prediction and ground-truth in the VGG feature space.\\n\\nThe major contribution of this work seems to be using these three losses in conjunction, while doing conditional frame prediction at the same time. While interesting, there exist very similar approaches that also use adversarial losses [1] as well as approaches using different means to reach the same goal [2, 3]. None of these are mentioned in the text, nor evaluated against. It is true that [1] is not action-conditional, but adding actions as conditioning could be a simple extension.\\n\\nThe experimental section consists of an ablation study, which evaluates the importance of different components of the loss, and a qualitative study of model predictions. With no comparison to the state of the art (e.g. [1, 3]), it is hard to gauge how valuable this particular approach is. \\nThe qualitative evaluation starts with \\u00a74.4\\u00b61 \\u201cwe follow the customary GAN literature to include some qualitative results for illustration\\u201d, as if there were no other reason for including samples than to follow the custom. Since the paper is about action-conditional prediction, it would be interesting to see predictions conditioned on the same initial sequence but different actions, which are not present, however. Moreover, this work is developed in the context of RL applications, and since prior art [4] has shown that better predictive models do not necessarily lead to better RL results, it would be interesting to evaluate the proposed approach against baselines in an RL setting.\\n\\nThe paper is clearly written, but some claims in the text are not supported by any citations (e.g. \\u00a71\\u00b62 \\u201cMore recently, several papers have shown that forward modelling\\u2026\\u201d without a citation). Some claims are misleading (e.g. \\u00a71\\u00b63 says that by using adversarial training we don\\u2019t need to use task-specific losses and it does not put constraints on input modality. While true, using an MSE loss is equally general). Some other claims are not supported at all or may not be true (e.g. \\u00a73.2\\u00b61 \\u201cResNet \\u2026 aims at compressing the information in the raw observation\\u201d - to the best of my knowledge, there is no evidence for this).\\n\\nTo conclude, the suggested approach is not novel, the experimental evaluation is lacking, and the text contains a number of unsupported statements. I recommend rejecting this paper.\\n\\n[1] Lee, A.X., Zhang, R., Ebert, F., Abbeel, P., Finn, C., & Levine, S. (2018). Stochastic Adversarial Video Prediction. CoRR, abs/1804.01523.\\n[2] Eslami, S.M., Rezende, D.J., Besse, F., Viola, F., Morcos, A.S., Garnelo, M., Ruderman, A., Rusu, A.A., Danihelka, I., Gregor, K., Reichert, D.P., Buesing, L., Weber, T., Vinyals, O., Rosenbaum, D., Rabinowitz, N.C., King, H., Hillier, C., Botvinick, M.M., Wierstra, D., Kavukcuoglu, K., & Hassabis, D. (2018). Neural scene representation and rendering. Science, 360, 1204-1210.\\n[3] Denton, E.L., & Fergus, R. (2018). Stochastic Video Generation with a Learned Prior. ICML.\\n[4] Buesing, L., Weber, T., Racani\\u00e8re, S., Eslami, S.M., Rezende, D.J., Reichert, D.P., Viola, F., Besse, F., Gregor, K., Hassabis, D., & Wierstra, D. (2018). Learning and Querying Fast Generative Models for Reinforcement Learning. 
CoRR, abs/1802.03006.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
Bklr3j0cKX
Learning deep representations by mutual information estimation and maximization
[ "R Devon Hjelm", "Alex Fedorov", "Samuel Lavoie-Marchildon", "Karan Grewal", "Phil Bachman", "Adam Trischler", "Yoshua Bengio" ]
This work investigates unsupervised learning of representations by maximizing mutual information between an input and the output of a deep neural network encoder. Importantly, we show that structure matters: incorporating knowledge about locality in the input into the objective can significantly improve a representation's suitability for downstream tasks. We further control characteristics of the representation by matching to a prior distribution adversarially. Our method, which we call Deep InfoMax (DIM), outperforms a number of popular unsupervised learning methods and compares favorably with fully-supervised learning on several classification tasks with some standard architectures. DIM opens new avenues for unsupervised learning of representations and is an important step towards flexible formulations of representation learning objectives for specific end-goals.
[ "representation learning", "unsupervised learning", "deep learning" ]
https://openreview.net/pdf?id=Bklr3j0cKX
https://openreview.net/forum?id=Bklr3j0cKX
ICLR.cc/2019/Conference
2019
{ "note_id": [ "HyxfyUFyxN", "SJxEJLX3CQ", "rJekflpQCX", "SJxOvyaQCm", "SkxJeJTX07", "ryxmvC2mC7", "B1lMGywi6Q", "SJxqzmcVp7", "rkgBfaIahQ", "BkxA0Kt3nQ", "B1gEEtgAjm", "rygYR8UMoQ", "rklhZYMJom" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review", "official_comment", "comment" ], "note_created": [ 1544685018097, 1543415259900, 1542864902778, 1542864735840, 1542864614716, 1542864474707, 1542315786329, 1541870354161, 1541397772874, 1541343701711, 1540389163707, 1539626704791, 1539414276466 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper709/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper709/Authors" ], [ "ICLR.cc/2019/Conference/Paper709/Authors" ], [ "ICLR.cc/2019/Conference/Paper709/Authors" ], [ "ICLR.cc/2019/Conference/Paper709/Authors" ], [ "ICLR.cc/2019/Conference/Paper709/Authors" ], [ "ICLR.cc/2019/Conference/Paper709/Authors" ], [ "ICLR.cc/2019/Conference/Paper709/Authors" ], [ "ICLR.cc/2019/Conference/Paper709/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper709/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper709/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper709/Authors" ], [ "(anonymous)" ] ], "structured_content_str": [ "{\"metareview\": \"This paper proposes a new unsupervised learning approach based on maximizing the mutual information between the input and the representation. The results are strong across several image datasets. Essentially all of the reviewers' concerns were directly addressed in revisions of the paper, including additional experiments. The only weakness is that only image datasets were experimented with; however, the image-based experiments and comparisons are extensive. The reviewers and I all agree that the paper should be accepted, and I think it should be considered for an oral presentation.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Oral)\", \"title\": \"meta review\"}", "{\"title\": \"On SOTA comparisons\", \"comment\": \"Thank you for your updated review. We actually had an internal debate about how best to phrase this, as we don't want to overclaim anything. Your suggested edit is better, and we will change the sentence at the next revision opportunity.\"}", "{\"title\": \"Thank you for your thorough review.\", \"comment\": \"We\\u2019re delighted that this approach excites you, and hopefully the comments above and the revision address your previous and latest concerns.\\n- On baselines: See \\\"On architectures and baselines\\\" and \\\"Comparisons to CPC\\\" above.\\n- Overall spin: We never meant to introduce the prior as a means of addressing trivial solutions to the first bullet point. Rather, the prior term is meant to impose constraints on the marginal distribution of the representation. Disentanglement, for example, is an important property in many fields (neuroscience or RL, for instance), and prior matching is a common method for this (e.g., ICA).\\n- Ablation studies: See Figure 10, last subfigure in the revision, for the ablation study you requested. The prior term has only a small effect on classification accuracy, yet has a strong effect on dependence (it decreases it), according to the NDM measure. If you feel this should be included in the main text, we can add it before the final revision deadline.\\n- On the role of the global term: it is true that the global term alone can exhibit some degenerate behavior, and this is especially apparent in classification results. However, its use depends on what the end-goal of the representation is. For example, a combined global/local version of DIM improves both reconstruction and mutual information estimates considerably over one or the other (Table 4 in the revision). We feel that the global term can still be useful, but it does seem like the global objective without the local objective is not useful.\\n- On the DV representation: our initial experiments showed very poor DV performance, but this changed recently when we adopted the strategy of using a very large number of negative samples as in NCE. However, this approach performs only comparably to or worse than using the JSD (Tables 1 and 2 in the revision), supporting our claim that the JSD is better for this task. In addition, we added DV to Figure 9, which shows that DV performance decays quickly as fewer images are used in negative sampling.\"}", "{\"title\": \"Thank you for your review:\", \"comment\": \"Key points:\\n- Image only: As the structural assumptions are important to the MI maximization task of DIM, we wanted to do an in-depth analysis and comparison in this setting. The core ideas of DIM transfer very easily, however, and we anticipate these ideas being successful in the NLP, graph, and RL settings, for example.\\n\\nMinor comments:\\n- Trivial solutions: This is true (see discussion with Reviewer 1), and obviously we need a bottleneck or noise in the global variable. One potential solution to this is presented in our occlusion experiments (Table 5 in the revision), where some local features are masked out from computation of the global objective.\\n- Using DIM with supervised learning: It sounds reasonable to use DIM directly as a regularizer for supervised learning, and our fine-tuning experiments for STL10 support this. However, we have not tried this experiment specifically.\\n- C and X: C_i is the feature map location that corresponds to the receptive field X_i.\"}", "{\"title\": \"Official rebuttal\", \"comment\": \"Thank you for your detailed review, and we hope that our revisions address your concerns.\\n\\nKey points:\\n- No comparison to autoencoders/beta-VAE/etc: See our discussions above under \\u201cNew baselines\\u201d. We\\u2019ve now added these comparisons for classification results.\\n- DIM vs CPC: see our previous comments on differences between CPC and DIM, as well as usage of the softmax-type \\u201cNCE\\u201d.\\n- Comparison to CPC: See discussion above, \\u201cComparisons to CPC\\u201d.\\n- Weak performance compared to older methods on Cifar10 (e.g. Coates et al., 2011): See discussions above, \\u201cOn architectures and baselines\\u201d. We have added some more details w.r.t. other models in Section 4.2, in \\u201cclassification comparisons\\u201d. Also see \\\"Comparisons to CPC\\\" for improved results on CIFAR10.\\n- NCE versus JSD: It would be difficult to conclude that NCE is uniformly superior. While NCE tends to be superior with a large number of negative samples, the differences diminish with larger datasets (Table 2). In addition, JSD outperforms NCE as you reduce the number of images used as negative samples (Figure 9). This will be a factor when choosing the right loss, as more negative samples mean more computation / more memory in order to compute the softmax.\\n- Sensitivity of the beta term: There was an error in the ranges presented in Figure 9 (accidentally cropped). The last subfigure shows that there is relative insensitivity to gamma (prior term), and much more sensitivity to beta (local term). The performance variation is only ~1% across gamma values, which is not enough to change the conclusions of baseline comparisons.\\n\\nWe modified the text to improve clarity w.r.t. comments from all reviewers. Many of the experiments we put into the Appendix were related to questions we had about the model / representation, and we excluded them from the main text precisely because they do not relate directly to the main story. However, we chose to keep them in the Appendix as we found them interesting and informative.\\n\\nMinor comments:\\n- On mutual information and constraints: See the updated version of Section 2, next-to-last paragraph.\\n- On local definition: We have modified the text in the first paragraph to help define the \\u201clocal\\u201d MI objective earlier.\\n- On SOM: We modified this sentence to read \\\"generally lack the representational capacity of deep neural networks\\\".\\n- On reconstruction and MI / VAEs: There was an error in Equation (1), which has now been fixed.\\n- On the JSD and the mutual information: This is an important point, and we added a discussion in Appendix A.1 to show that the JSD between the joint and the product of marginals is related to PMI, as well as some empirical analysis under a discrete setting.\\n- Stochasticity: We have tried dropout as a form of stochasticity, and this does not significantly change classification performance, though it is reasonable to posit this might affect the encoder\\u2019s ability to match the marginal output to a given prior.\\n- NDM vs MINE: NDM is small as the prior term is adversarial and is encouraging the aggregated posterior to match the prior. Small NDM indicates more independence / disentanglement, which is the desired effect (see Figure 12 for the study with beta VAE). DIM encourages the MINE measure to be large, though a combined global / local objective works best (Table 3). There is no straightforward direct relationship between disentanglement and mutual information.\\n- Trivial solutions: Trivial solutions are a possibility, and surely this risk increases as the size of the global vector increases, though we never ran across this issue in our experiments (the dimension of 64 was chosen somewhat arbitrarily and to match other latent space sizes, such as those found in GANs. Our limited experiments with larger global vectors had no issues). The experiment you describe is nearly identical to our occlusion experiments (Table 5), which do indeed improve classification performance. It is reasonable to posit that other occlusion-type tasks would modify the representation in desirable ways.\"}", "{\"title\": \"Thank you for your helpful feedback\", \"comment\": \"We thank the reviewers for providing productive comments and critiques. We believe this input has improved the quality of our work. We first address key shared concerns, and then respond to specific points from individual reviews.\\n\\nOn architectures and baselines:\\nOur baselines and architectures were chosen to provide a level comparison across methods, rather than to maximize performance of our method. We tried to stay true to common / popular architectures from papers on unsupervised representation learning -- namely, DCGAN- and Alexnet-type encoders. We did not perform significant hyper-optimization on these architectures. For the classification results, our method and all baselines were trained in the same setting with the same architecture. The CIFAR10 supervised results are poor compared to SOTA results that rely on data augmentation and more sophisticated architectures. We did not intend to mislead. We modified Section 4.2 to help readers correctly interpret comparisons with supervised results. To our knowledge, our STL-10 results are SOTA for the unsupervised setting.\\n\\nNew baselines:\\nWe have included new baselines to address concerns from Reviewer 1: CPC, beta VAE with low beta, and an unregularized autoencoder. See Tables 1, 2, and 3 in the revision. We did not implement NICE or real NVP as these involve specialized architectures. Following the same settings as our existing baselines, DIM(L) significantly outperformed all new baselines in classification results. The overall effect of beta in beta VAE is unremarkable. We report results for beta=0.5, which performed best, but also tested beta in {0.01, 0.1, 0.2, 0.5}.\\n\\nComparisons to CPC:\\nWe spent considerable time implementing a CPC baseline, and had difficulty getting results that were significantly better than even BiGAN in our test setting. To achieve strong results with CPC, we needed to use an encoder architecture closer to that in the CPC paper. Specifically, we extract each local feature from a patch cropped from the full image. The patches form a 7x7 grid and have 50% overlap between neighboring patches. With this architecture, DIM(L) outperforms CPC on CIFAR10 using a ResNet-type encoder for the cropped patches. When classifying based on the full 7x7 grid of local features, DIM(L) achieves 80.9% accuracy and CPC achieves 77.5%. When strided crops were used with data augmentation on STL10, DIM(L) and CPC performed comparably, both achieving ~77% without fine-tuning through the Alexnet encoder. When we used a version of DIM with multiple global representations using a single convolutional layer, DIM got over 78%. Some of these differences could be architectural, so DIM and CPC are at worst comparable in this setting, but we can conclude that the complex, strictly ordered autoregression in CPC is unnecessary. We have added a paragraph to Section 4.2, in \\u201cclassification comparisons\\u201d, to discuss these comparisons.\"}", "{\"title\": \"Followup on the loss function we and CPC call \\\"NCE\\\"\", \"comment\": \"While searching for more prior work based on different versions of the original \\\"binary\\\" form of NCE, we found an explicit presentation of the \\\"multinomial\\\" NCE used in CPC and DIM.\\n \\nThe loss presented in CPC is less novel than we previously thought. The multinomial version of NCE is precisely described in Section 3 of [1]. A rigorous analysis of the relation between binary and multinomial NCE was also recently published in [2, page 3], which was submitted for review prior to CPC's appearance on arXiv. 
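\\n\\nFor concreteness, here is a minimal, self-contained sketch contrasting the two estimator families discussed in this thread. This is our own illustration rather than code from either paper, and the score-matrix convention is a hypothetical one introduced only for this example: scores[i, j] is a critic score for pairing global feature i with local feature j, with diagonal entries holding the positive (jointly drawn) pairs and off-diagonal entries acting as negatives.\\n\\nimport torch\\nimport torch.nn.functional as F\\n\\ndef jsd_mi_loss(scores):\\n    # Binary (JSD-style) objective: score positives and negatives separately\\n    # with a softplus loss; minimizing this maximizes the JSD-based bound.\\n    n = scores.size(0)\\n    pos = torch.diagonal(scores)\\n    neg = scores[~torch.eye(n, dtype=torch.bool)]\\n    return F.softplus(-pos).mean() + F.softplus(neg).mean()\\n\\ndef multinomial_nce_loss(scores):\\n    # \\\"Multinomial\\\" NCE (the softmax-type loss used in CPC): each row is a\\n    # classification problem whose correct class is the diagonal positive;\\n    # all other columns serve as noise samples.\\n    return F.cross_entropy(scores, torch.arange(scores.size(0)))\\n\\nBoth losses consume the same score matrix; the difference lies in how the negatives enter the objective, which matches the observation above that the choice between JSD and NCE is largely a question of the number and treatment of negative samples.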
\\n\\n[1] \\\"Exploring the Limits of Language Modeling\\\" (Jozefowicz et al., 2016)\\n[2] \\\"Noise Contrastive Estimation and Negative Sampling for Conditional Models: Consistency and Statistical Efficiency\\\" (Ma and Collins, EMNLP 2018).\"}", "{\"title\": \"Similarities and differences between DIM and CPC and the use of the term \\\"NCE\\\"\", \"comment\": \"We will provide a complete rebuttal soon, but first we address some concerns about our use of the terms DIM/CPC/NCE, etc.\\n\\nDIM(L) and CPC have many similarities, but they are not the same. The key difference between CPC and DIM is the strict way in which CPC structures its predictions, as illustrated in Figure 1 of [1]. CPC processes local features sequentially (fixed-order autoregressive style) to build a partial \\u201csummary feature\\u201d, then makes separate predictions about several specific local features that weren\\u2019t included in the summary feature. \\n\\nFor DIM (without occlusions), the summary feature is a function of all local features, and this \\u201cglobal\\u201d feature predicts all of those features simultaneously in a single step, rather than forming separate predictions for a few specific features as in CPC. A consequence of this difference is that DIM is more easily able to perform prediction across all local inputs, as the predictor feature (global) is allowed to be a function of the predicted features (local). DIM with occlusions shares more similarities with CPC, as it mixes self-prediction for the observed local features with orderless autoregression for the occluded local features (see [6] for further discussion of ordered vs orderless autoregression).\\n\\nUsing Noise Contrastive Estimation (NCE) to estimate and maximize mutual information was first proposed in [1], and we credit them in the manuscript (and we will further emphasize this in the revision). While there are a variety of NCE-based losses [2, 3, 4], they all revolve around training a classifier to distinguish between samples from the intractable target distribution and a proposal noise distribution. E.g., [5] uses NCE based on an unbalanced binary classification task, and the loss in [1] is a direct extension of this approach. While novel to [1], we do not consider this NCE-based loss the defining characteristic of CPC, which could instead use, e.g., the DV-based estimator proposed in [7]. The authors of [1] specifically mention this as a reasonable alternative. Due to significant differences in which mutual informations they choose to estimate and maximize, we think it would be ungenerous to consider our method equivalent to CPC whenever we use this estimator.\\n\\n[1] Oord, Aaron van den, Yazhe Li, and Oriol Vinyals. \\\"Representation learning with contrastive predictive coding.\\\" arXiv preprint arXiv:1807.03748 (2018).\\n[2] Gutmann, Michael, and Aapo Hyv\\u00e4rinen. \\\"Noise-contrastive estimation: A new estimation principle for unnormalized statistical models.\\\" Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics. 2010.\\n[3] Gutmann, Michael U., and Aapo Hyv\\u00e4rinen. \\\"Noise-contrastive estimation of unnormalized statistical models, with applications to natural image statistics.\\\" Journal of Machine Learning Research 13.Feb (2012): 307-361.\\n[4] Mnih, Andriy, and Yee Whye Teh. \\\"A fast and simple algorithm for training neural probabilistic language models.\\\" arXiv preprint arXiv:1206.6426 (2012).\\n[5] Mikolov, Tomas, et al. \\\"Distributed representations of words and phrases and their compositionality.\\\" Advances in Neural Information Processing Systems. 2013.\\n[6] Benigno Uria, Marc-Alexandre Cote, Karol Gregor, Iain Murray, and Hugo Larochelle. \\u201cNeural Autoregressive Distribution Estimation.\\u201d arXiv preprint arXiv:1605.02226 (2016).\\n[7] Mohamed Ishmael Belghazi, Aristide Baratin, Sai Rajeshwar, Sherjil Ozair, Yoshua Bengio, Aaron Courville, Devon Hjelm. \\\"Mutual Information Neural Estimation.\\\" Proceedings of the 35th International Conference on Machine Learning, PMLR 80:531-540, 2018.\"}", "{\"title\": \"Interesting take on representation learning, but text needs improvement\", \"review\": \"This paper proposes Deep InfoMax (DIM) for learning representations by maximizing the mutual information between the input and a deep representation. By structuring the network and objectives to encode input locality or priors on the representation, DIM learns features that are useful for downstream tasks without relying on reconstruction or a generative model. DIM is evaluated on a number of standard image datasets and shown to learn features that outperform prior approaches based on autoencoders at classification.\\n\\nRepresentation learning without generative models is an interesting research direction, and this paper represents a nice contribution toward this goal. The experiments demonstrate wins over some autoencoder baselines, but the reported numbers are far worse than old unsupervised feature learning results on e.g. CIFAR-10. There are also a few technical inaccuracies and an insufficient discussion of prior work (CPC). I don't think this paper should be accepted in its current state, but could be persuaded if the authors address my concerns.\\n\\nStrengths:\\n- Interesting new objectives for representation learning based on increasing the JS divergence between joint and product distributions\\n- Good set of ablation experiments looking at the local vs global approach and layer-dependence of classification accuracy\\n- Large set of experiments on image datasets with different evaluation metrics for comparing representations\\n\\nWeaknesses:\\n- No comparison to autoencoding approaches that explicitly maximize information in the latent variable, e.g. InfoVAE, beta-VAE with small beta, an autoencoder with no regularization, invertible models like real NVP that throw out no information. Additionally, the results on CIFAR-10 are worse than a carefully tuned single-layer feature extractor (k-means is 75%+, see Coates et al., 2011).\\n- Based on Table 9, it looks like DIM is very sensitive to hyperparameters like gamma for classification. Please discuss how you selected hyperparameters and whether you performed a similar scale sweep for your baselines.\\n- The comparison with and discussion of CPC is lacking. CPC outperforms JSD in almost all settings, and CPC also proposed a \\\"local\\\" approach to information maximization. I do not agree with renaming CPC to NCE and calling it DIM(L) (NCE) as the CPC and NCE loss are not the same. Please elaborate on the similarities and differences!\\n- The clarity of the text could be improved, with more space in the main text devoted to analyzing the results. Right now the paper has an overwhelming number of experiments that don't fit concisely together (e.g. entirely new generative model experiments in the appendix).\\n\\nMinor comments:\\n- As noted by a commenter, it is known that MI maximization without constraints is insufficient for learning good representations. Please cite and discuss.\\n- Define local/global earlier in the paper (intro?). I found it hard to follow the first time.\\n- Why can't SOMs represent complex relationships?\\n- \\\"models with reconstruction-type objectives provide some guarantees on the amount of information encoded\\\": what do you mean by this? VAEs have issues with posterior collapse where the latents are ignored, but they have a reconstruction term in the objective.\\n- \\\"JS should behave similarly as the DV-based objective\\\" - do you have any evidence (empirical or theoretical) to back up this statement? As you're maximizing JSD and not KL, it's not clear that DIM can be thought of as maximizing MI.\\n- Have you tried stochastic encoders? This would make matching to a prior much easier and prevent the introduction of another discriminator.\\n- I'm surprised NDM is much smaller than MINE given that your encoder is deterministic and thus shouldn't throw out any information. Do you have an explanation for this gap?\\n- there's a trivial solution to local DIM where the global feature can directly memorize everything about the local features, as the global feature depends on *all* local features, including the one you're trying to maximize information with. Have you considered masking each individual local feature before computing the global feature to avoid this trivial solution?\\n\\n-----------------------\\n\\nUpdate: Apologies for the slow response. The new version with more baselines, comparisons to CPC, discussion of NCE, and comparisons between JS and MI greatly improves the paper! I've increased my score (5 -> 7) to reflect the improved clarity and experiments.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Mutual information-based representation learning with additional tricks for performance gain.\", \"review\": \"This paper presents a representation learning approach based on mutual information maximization.\\nThe authors propose the use of local structures and distribution matching for better acquisition of representations (especially) for images.\\n\\nStrong points of the paper are:\\n- This gives a principled design of the objective function based on the mutual information between the input data point and the output representation.\\n- The performance is gained by incorporating local structures and matching the representation distribution to a certain target (called a prior).\\n\\nA weak point I found was:\\nThe local structure and evaluation are specialized for the classification task of images. \\n\\nQuestions and comments.\\n* Local mutual information in (6) may trivially be maximized if the summarizer f (E(x) = f \\\\circ C(x) with \\\\psi omitted for brevity) concatenates all local features into the global one.\\nHow was f implemented? Did you compare this concatenation approach?\\n* Can we add DIM as a regularizer to the objective of a downstream task? 
\\nIt would be very useful if combining an objective of classification/regression or reinforcement learning with the proposed (8) is able to improve the performance of the given task.\\n* C^(i)_\\\\psi(X) in (6), but X^(i) in (8): are they the same thing?\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Possibly important paper.\", \"review\": \"Revision 2: The new comparisons with CPC are very helpful. Most of my other comments are addressed in the response and paper revision. I am still uncomfortable with the sentence \\\"Our method ... compares favorably with fully-supervised learning on several classification tasks in the settings studied.\\\" This strongly suggests to me that you are claiming to be competitive with SOTA supervised methods. The paper does not contain supervised results for the resnet-50 architecture. I would recommend that this sentence should either be dropped from the abstract or have the phrase \\\"in the settings studied\\\" replaced by \\\"for an alexnet architecture\\\". If you have supervised results for resnet-50 they should be added to table 3 and the abstract could be adjusted to that. I apologize that this is coming after the update deadline (I have been traveling). The authors should simply consider the reaction of the community to over-claiming. Because of the new comparisons with CPC on resnet-50 I am upping my score. My confidence is low only because the real significance can only be judged over time.\", \"revision_1\": \"This is a revision of my earlier review. My overly-excited earlier rating was based on tables 1 and 2 and the claim to have unsupervised features that are competitive with fully-supervised features. (I also am subject to an a-priori bias in favor of mutual information methods.) I took the authors word for their claim and submitted the review without investigating existing results on CIFAR10. It seems that tables 1 and 2 are presenting extremely weak fully supervised baselines. If DIM(L) can indeed produce features that are competitive with state of the art fully supervised features, the result is extremely important. But this claim seems misrepresented in the paper.\", \"original_review\": \"There is a lot of material in this paper and I respect this groups\\nhigh research-to-publication ratio. However, it might be nice to have\\nthe paper more focused on the subset of ideas that seem to matter.\\n\\nMy biggest comment is that the top level spin seems wrong.\\nSpecifically, the paper focuses on the two bullets on page 3 ---\\nmutual information and statistical constraints. Here mutual\\ninformation is interpreted as the information between the input and\\noutput of a feature encoder. Clearly this has a trivial solution\\nwhere the input equals the output so the second bullet --- statistical\\nconstraints --- are required. But the empirical content of the paper\\nstrongly undermines these top level bullets. Setting the training\\nobjective to be the a balance of MI between input and output under a\\nstatistical consrtraint leads to DIM(G) which, according the results in\\nthe paper, is an empirical disaster. DIM(L) is the main result and\\nsomething else seems to be going on there (more later). Furthermore,\\nthe empirical results suggest that the second bullet --- statistical\\nconstraints --- is of very little value for DIM(L). The key ablation\\nstudy here seems to be missing from the paper. 
Appendix A.4 states\nthat \"a small amount of the [statistical constraint] helps improve\nclassification results when used with the [local information\nobjective].\" No quantitative ablation number is given. Other measures\nof the statistical constraint seem to simply measure to what extent\nthe constraint has been successfully enforced. But the results\nsuggest that even successfully enforcing the constraint is of little,\nif any, value for the ability of the features to be effective in\nprediction. So, it seems to me, the paper is really just about the\nlocal information objective.\n\nThe real powerhouse of the paper --- the local information objective\n--- seems related to mutual information predictive coding as\nformalized in the recent paper from DeepMind by van den Oord et al.\nand also an earlier arxiv paper by McAllester on information-theoretic\nco-training. In these other papers one assumes a signal x_1, ... x_T\nand tries to extract low dimensional features F(x_t) such that F(x_1),\n..., F(x_t) carries large mutual information with F(x_{t+1}). The\nlocal objective of this paper takes a signal x_1, ..., x_k (nXn\nsubimages) and extracts local features F(x_1), ... F(x_k) and a global\nfeature Y(F(x_1), ..., F(x_k)) such that Y carries large mutual\ninformation with each of the features F(x_i). These seem different\nbut related. The first seems more \"on line\" while the second seems\nmore \"batch\" but both seem to be getting at the same thing, especially\nwhen Y is low dimensional.\n\nAnother comment about top level spin involves the Donsker-Varadhan\nrepresentation of KL divergence (equation (2) in the paper). The\npaper states that this is not used in the experiments. This suggests\nthat it was tried and failed. If so, it would be good to report this.\nAnother contribution of the paper seems to be that the mutual\ninformation estimators (4) and (5) dominate (2) in practice. This\nseems important.", "rating": "9: Top 15% of accepted papers, strong accept", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}", "{\"title\": \"DIM and regularization\", \"comment\": \"Thank you for the references. The works cited essentially do the global version of DIM, but with discrete representations rather than continuous. Solutions for \"global\" infomax become degenerate, which motivates the use of regularization in the encoder. Using regularization such as those used in the referenced works (weight decay in [1] and data augmentation [2]) is essential for these approaches to work. This problem also affects us, and this probably is the reason for the poor performance of \"global DIM\" with deterministic input->representation mappings.\n\nWe find that the regularization used in [2] is far more relevant to our work, as it \"regularizes\" the model by making it more robust to data augmentation / sensible transformation at the input space. This is similar in spirit to what we do in the occlusion experiments, where augmentation is done by removing part of the input when computing the global vector. Overall, [2] is essentially equivalent to adding data augmentation to the global version of DIM in the discrete setting. While the goal of the local version of DIM is to improve generalization by spatial consistency across features, the connection to data augmentation in [2] is not as clear-cut.
We do agree that [2] is highly relatable to our work and will add it to the related works on the topic of \"leveraging known structure\" / data augmentation.\"}", "{\"comment\": \"It has already been pointed out that InfoMax alone is not enough to learn useful representations [1][2]. [1][2] apply regularization to resolve this problem, and your method can also be regarded as (a different kind of) regularization.\n\n[1] Gomes, R., Krause, A., and Perona, P. Discriminative clustering by regularized information maximization. In NIPS, 2010.\n[2] Hu, W., Miyato, T., Tokui, S., Matsumoto, E., and Sugiyama, M. Learning discrete representations via information maximizing self-augmented training. In ICML, 2017.\", \"title\": \"Missing reference on InfoMax-based unsupervised learning\"}" ] }
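For concreteness, the local InfoMax objective debated in the exchange above can be sketched in a few lines. This is an illustrative sketch only: the dot-product critic, the softplus form of the Jensen-Shannon bound, and the batch-shuffled negatives are assumptions on my part, not the authors' implementation of Eq. (6).

```python
# Hypothetical sketch of a JSD-style local InfoMax score; not the paper's code.
import torch
import torch.nn.functional as F

def local_jsd_infomax(global_feat, local_feats):
    """global_feat: (B, D); local_feats: (B, K, D), K local patches per image."""
    B, K, _ = local_feats.shape
    # Dot-product critic: score every (global, local) pair.
    pos = torch.einsum('bd,bkd->bk', global_feat, local_feats)       # matching pairs
    all_pairs = torch.einsum('cd,bkd->cbk', global_feat, local_feats)
    # Softplus form of the Jensen-Shannon bound: E_p[-sp(-T)] - E_n[sp(T)].
    pos_term = -F.softplus(-pos).mean()
    # Globals paired with other images' local features act as negatives.
    mask = 1.0 - torch.eye(B, device=global_feat.device).unsqueeze(-1)
    neg_term = (F.softplus(all_pairs) * mask).sum() / (mask.sum() * K)
    return pos_term - neg_term  # maximize this quantity during training
```

Maximizing this score is the JSD-based surrogate for MI that the first review questions; swapping the softplus terms for the Donsker-Varadhan form would recover the KL-based estimator of Eq. (2), which the reviews note was not used in the experiments.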
HkGSniC9FQ
An Analysis of Composite Neural Network Performance from Function Composition Perspective
[ "Ming-Chuan Yang", "Meng Chang Chen" ]
This work investigates the performance of a composite neural network, which is composed of pre-trained neural network models and non-instantiated neural network models, connected to form a rooted directed graph. A pre-trained neural network model is generally a well trained neural network model targeted for a specific function. The advantages of adopting such a pre-trained model in a composite neural network are two folds. One is to benefit from other's intelligence and diligence and the other is saving the efforts in data preparation and resources and time in training. However, the overall performance of composite neural network is still not clear. In this work, we prove that a composite neural network, with high probability, performs better than any of its pre-trained components under certain assumptions. In addition, if an extra pre-trained component is added to a composite network, with high probability the overall performance will be improved. In the empirical evaluations, distinctively different applications support the above findings.
[ "composite neural network", "analysis", "work", "neural network models", "overall performance", "high probability", "function composition", "function composition perspective", "performance" ]
https://openreview.net/pdf?id=HkGSniC9FQ
https://openreview.net/forum?id=HkGSniC9FQ
ICLR.cc/2019/Conference
2019
{ "note_id": [ "ByeGlJp1eN", "rJe7YK4gaX", "SyeedwNlpX", "r1e5AB4xaX", "r1lt6QF02X", "r1xSK7vq2Q", "Hkeeh48IhX" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544699625887, 1541585274904, 1541584744445, 1541584338486, 1541473217409, 1541202812867, 1540936871769 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper707/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper707/Authors" ], [ "ICLR.cc/2019/Conference/Paper707/Authors" ], [ "ICLR.cc/2019/Conference/Paper707/Authors" ], [ "ICLR.cc/2019/Conference/Paper707/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper707/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper707/AnonReviewer2" ] ], "structured_content_str": [ "{\"metareview\": \"Dear authors,\\n\\nAll reviewers pointed out the fact that your result is about the expressivity of the big network rather than its accuracy, a result which is already known for the literature.\\n\\nI encourage you to carefully read all reviews should you wish to resubmit this work to a future conference.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Limited contribution\"}", "{\"title\": \"A Reply to Reviewer2\", \"comment\": \"1. Thank you for comments. Actually, we considered one or more pre-trained neural network in the paper.\\n\\n2. Please pardon our non- scientific/mathematical tone that we just tried to emphasize of arrival of pre-trained neural network. \\n\\n3. Yes, in simple wording, it is the main claim of this paper. Many people intuitively think so, but so far no work solves this problem. On the other hand, according to our survey (the last second paragraph of Introduction), many empirical studies point out that pre-trained models are often harmful. That\\u2019s the motivation of this work.\\n\\n4. As you mentioned, \\u201cadding more features can be statistically problematic\\u201d, while Reviewer 3 said \\u201cthis is a very straight forward result \\u2026 we can of course represent more objects\\u201d. The different comments shows the experts do not have consensus in the effect of adding objects/features and that was the motivation of this work to study the conditions of performance improvement.\\n\\n5. Pre-trained components are useful and valuable, especially it is provided by reputable individuals or organizations, such as ResNet50 provided in Keras. We believe the pre-trained components will become popular soon. \\nFurthermore, the performance of adopting pre-training is unclear in the literature.\", \"we_quote_from_some_papers_for_the_evidence_that_the_performance_of_adopting_pre_training_is_unclear\": \"In [1]: \\u201d Even so, relatively little is known about the behavior of pretraining with datasets that are multiple orders of magnitude larger.\\u201d\\nIn [2]: \\u201cthere remain open questions about the performance of pretrained distributed word representations and their interaction with weight initialization and other hyperparameters.\\u201d\\n\\n6. The condition \\u201clinearly independent\\u201d is given to assure the result is theoretically sound, but as all we know that the output of several neural networks are hardly \\u201clinearly dependent\\u201d. So, the proposed theory is generally applicable to most neural networks.\\n\\n7. To generate convex hull from several vectors, the weights must be positive and summing to 1. The weights in Example 1 are not satisfying these two conditions. 
(In particular, \\u201cw_1=3 and w_2=-1\\u201d is not the convex combination.) Besides, the since the weights in a pre-trained model are frozen, we can see it as a black box or a function. That is why we denote x_1x_2 as f_3 and so on.\\n\\n8. In our pdf file, the X is shown as \\u201cX = {(0, 0), (0 1), (1, 0), (1, 0)}\\u201d. We have no idea why commas become semicolons. We apologize for all the other typos. We will correct them and also clarify the obscure statements.\", \"reference\": \"[1] D. Mahajan, R. Girshick, V. Ramanathan, K. He, M. Paluri, Y. Li, A. Bharambe, and L. van der Maaten. \\u201cExploring the Limits of Weakly Supervised Pretraining,\\u201d ECCV2018.\\n[2] I. Cases, M.-T. Luong, and C. Potts, \\u201cOn the effective use of pretraining for natural language inference\\u201d, arXiv:1710.02076\"}", "{\"title\": \"A Reply to Reviewer1\", \"comment\": \"Thank you for comments. Most people intuitively know with more object, more representation can be obtained. But according to our survey (the last second paragraph of Introduction), many empirical studies point out that pre-trained models are on average harmful. Besides, so far no work studies this issue and that is why we wanted to give a rigorous analysis of this issue.\\u3000In this paper, we consider pre-trained neural network module with all its weights frozen, without any fine-tuning, while the composite network is trained, which will become an important issue soon.\\n\\nFurthermore, the performance of adopting pre-training is unclear in the literature. We quote from some papers for the evidence that the performance of adopting pre-training is unclear:\\nIn [1]: \\u201d Even so, relatively little is known about the behavior of pretraining with datasets that are multiple orders of magnitude larger.\\u201d\\nIn [2]: \\u201cthere remain open questions about the performance of pretrained distributed word representations and their interaction with weight initialization and other hyperparameters.\\u201d\", \"reference\": \"[1] D. Mahajan, R. Girshick, V. Ramanathan, K. He, M. Paluri, Y. Li, A. Bharambe, and L. van der Maaten. \\u201cExploring the Limits of Weakly Supervised Pretraining,\\u201d ECCV2018.\\n[2] I. Cases, M.-T. Luong, and C. Potts, \\u201cOn the effective use of pretraining for natural language inference\\u201d, arXiv:1710.02076\"}", "{\"title\": \"A Reply to Reviewer3\", \"comment\": \"Thank you for comments.\\n\\n1. In fact, the paper proposes not just \\u201ca simple linear mixture of the output\\u201d, rather, the paper also considers various activation functions, such as sigmoid and tanh. In our experiment shown in Table 2, the notation \\u03c3 is the sigmoid. \\nThe condition \\u201clinearly independent\\u201d is given to assure the result is theoretically sound, but as all we know that the outputs of several neural networks on a large dataset are hardly \\u201clinearly dependent\\u201d. The proposed theory is generally applicable to most neural networks.\\n\\n2a. Most people know intuitively that add more neural network components may enhance the performance in classification and regression result, but so far, we have not seen work directly pointing to this problem. On the other hand, according to our survey (please find it in the last second paragraph of Introduction in the paper), many empirical studies point out that pre-trained models are on average harmful. That is what we believe the contribution of this paper.\\n\\n2b. 
We also quote from some papers for the evidence that the performance of adopting pre-training is unclear:\nIn [1]: \u201cEven so, relatively little is known about the behavior of pretraining with datasets that are multiple orders of magnitude larger.\u201d\nIn [2]: \u201cthere remain open questions about the performance of pretrained distributed word representations and their interaction with weight initialization and other hyperparameters.\u201d\n\n3. Yes, indeed, we spent a tremendous amount of time (months) conducting the experiments on our poor server with 4 GPU cards (NVIDIA 1040), and we apologize for all the writing problems in the submission and will correct them.\", \"reference\": \"[1] D. Mahajan, R. Girshick, V. Ramanathan, K. He, M. Paluri, Y. Li, A. Bharambe, and L. van der Maaten. \u201cExploring the Limits of Weakly Supervised Pretraining,\u201d ECCV2018.\n[2] I. Cases, M.-T. Luong, and C. Potts, \u201cOn the effective use of pretraining for natural language inference\u201d, arXiv:1710.02076\"}", "{\"title\": \"Review\", \"review\": \"The paper considers the problem of building a composite network from several pre-trained networks and whether it is possible to ensure that the final output has better accuracy than any of its components.\n\nThe analysis done in the paper is that of a simple linear mixture of the outputs produced by each component, and then showing that if the outputs of the components are linearly independent, then you can essentially find a better ensemble. This is a natural and straightforward statement with a straightforward proof. It is unclear to me what theoretical value the analysis of the paper adds. Further, the linear independence assumption in the paper seems too strong for the results to be of value. \n\nFurther, the paper seems very hastily written, with inconsistent notation throughout making the paper very hard to read. Especially the superscript and the subscript on x have been jumbled up throughout the paper. I recommend rejection and encourage the authors to first clean up notation to make it readable.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}", "{\"title\": \"The result seems straightforward\", \"review\": \"This paper studies composite neural network performance from the function composition perspective. In theorems 1, 2 and 3, the authors essentially prove that as the number of basis functions (pre-trained components) increases (satisfying the LIC condition), more vectors/objects can be represented by the basis.\n\nTo me, this is a very straightforward result. As the basis increases while the LIC condition is satisfied, we can of course represent more objects (the new component is one of them). I don't see any novelty here.
The result is straightforward, and this should be a clear rejection.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Not ready for publication\", \"review\": \"The paper aims at justifying the performance gain that is acquired by the use of \\\"composite\\\" neural networks (e.g., composed of a pre-trained neural network and additional layers that will be trained for the new task).\\n\\nI found the paper lacking in terms of writing and in terms of clarity in expressing scientific/mathematical ideas especially for a theory paper.\", \"example_from_the_abstract\": \"\\\"The advantages of adopting such a pre-trained model in a composite neural network are two folds. One is to benefit from other\\u2019s intelligence and diligence, and the other is saving the efforts in data preparation and resources\\nand time in training\\\"\\n\\nThe main results of the paper (Theorem 1,2,3) are of the following nature: if you use more features (i.e., \\\"components\\\") in the input of a network then you have \\\"more information\\\", and this cannot be bad. Here are the corresponding claims in the Abstract:\\n\\n\\\"we prove that a composite neural network, with high probability, performs better than any of its pre-trained components under certain assumptions.\\\"\\n\\n\\\"if an extra pre-trained component is added to a composite network, with high probability the overall performance will be improved.\\\"\\n\\nHowever, this argument seems to be just about expressiveness; adding more features can be statistically problematic. \\n\\nFurthermore, why is it specific to pre-trained components? Essentially the theorems are about adding any features.\\n\\nFinally, the assumption that the pre-trained components are linearly independent is invalid and makes the whole analysis somewhat simplistic.\\n\\n\\nThe motivating Example 1 just shows that the convex hull of a class of hypotheses can include more hypotheses than the class itself. I don't see any connection between this and the use of pre-training.\", \"other_examples_unclear_statements_from_the_intro\": \"\\\"One of distinctive features of the complicated applications is their applicable data sources are boundless. Consequently, their solutions need frequent revisions.\\\"\\n\\n\\\"Although neural networks can approximate arbitrary functions as close as possible (Hornik, 1991), the major reason for not existing such competent neural networks for those complicated applications is their problems are hardly fully understood and their applicable data sources cannot be identified all at once.\\\"\", \"there_are_many_typos_in_the_paper_including_this_one_about_x_for_the_xor_function\": \"\\\"Assume there is a set of locations indexed as X = {(0; 0); (0; 1); (1; 0); (1; 0)} with the corresponding values Y = (0; 1; 1; 0). Obviously, the observed function is the XOR\\\"\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
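The Example 1 dispute above is easy to check numerically: XOR does lie in the linear span of frozen components f_1(x)=x_1, f_2(x)=x_2, f_3(x)=x_1*x_2, but only with a weight vector that is not a convex combination. The weights (1, 1, -2) below are my own illustration of this point, not the paper's Example 1.

```python
# Worked check: XOR as a linear (non-convex) combination of frozen components.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]

def composite(x1, x2, w=(1.0, 1.0, -2.0)):
    f1, f2, f3 = x1, x2, x1 * x2   # "pre-trained" components, weights frozen
    return w[0] * f1 + w[1] * f2 + w[2] * f3

for x1, x2 in X:
    assert composite(x1, x2) == (x1 ^ x2)  # reproduces XOR on all four inputs
```

A convex combination cannot work here: with non-negative weights summing to 1, any combination of these components evaluates to 1 at (1, 1), while XOR(1, 1) = 0. This is the sense in which the reviewer's convex-hull reading and the authors' linear-combination claim are about different sets.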
BylBns0qtX
On Learning Heteroscedastic Noise Models within Differentiable Bayes Filters
[ "Alina Kloss", "Jeannette Bohg" ]
In many robotic applications, it is crucial to maintain a belief about the state of a system, like the location of a robot or the pose of an object. These state estimates serve as input for planning and decision making and provide feedback during task execution. Recursive Bayesian Filtering algorithms address the state estimation problem, but they require a model of the process dynamics and the sensory observations as well as noise estimates that quantify the accuracy of these models. Recently, multiple works have demonstrated that the process and sensor models can be learned by end-to-end training through differentiable versions of Recursive Filtering methods. However, even if the predictive models are known, finding suitable noise models remains challenging. Therefore, many practical applications rely on very simplistic noise models. Our hypothesis is that end-to-end training through differentiable Bayesian Filters enables us to learn more complex heteroscedastic noise models for the system dynamics. We evaluate learning such models with different types of filtering algorithms and on two different robotic tasks. Our experiments show that especially for sampling-based filters like the Particle Filter, learning heteroscedastic noise models can drastically improve the tracking performance in comparison to using constant noise models.
[ "bayesian filtering", "heteroscedastic noise", "deep learning" ]
https://openreview.net/pdf?id=BylBns0qtX
https://openreview.net/forum?id=BylBns0qtX
ICLR.cc/2019/Conference
2019
{ "note_id": [ "BklsDJHXgE", "H1x3H8msRX", "B1enjzWvCm", "SyxX8G-w0m", "rkgsGzWwRX", "rJl-GybDAQ", "H1xwa0lvC7", "B1gBt34n3X", "SkxvJDW937", "Bkxqg9aD27" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544929123075, 1543349827690, 1543078564451, 1543078474840, 1543078419075, 1543077641076, 1543077566557, 1541323901116, 1541179103272, 1541032433903 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper706/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper706/Authors" ], [ "ICLR.cc/2019/Conference/Paper706/Authors" ], [ "ICLR.cc/2019/Conference/Paper706/Authors" ], [ "ICLR.cc/2019/Conference/Paper706/Authors" ], [ "ICLR.cc/2019/Conference/Paper706/Authors" ], [ "ICLR.cc/2019/Conference/Paper706/Authors" ], [ "ICLR.cc/2019/Conference/Paper706/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper706/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper706/AnonReviewer1" ] ], "structured_content_str": [ "{\"metareview\": \"This paper shows experiments in favor of learning and using heteroscedastic noise models for differentiable Bayes filter. Reviewers agree that this is interesting and also very useful for the community. However, they have also found plenty of issues with the presentation, execution and evaluations shown in the paper. Post rebuttal, one of the reviewer increased their score, but the other has reduced the score. Overall, the reviewers are in agreement that more work is required before this work can be accepted.\\n\\nSome of existing work on variational inference has not been included which, I agree, is problematic. Simple methods have been compared but then why these methods were chosen and not the other ones, is not completely clear. The paper definitely can improve on this aspect, clearly discussing relationships to many existing methods and then picking important methods to clearly bring some useful insights about learning heteroscedastic noise. Such insights are currently missing in the paper.\\n\\nReviewers have given many useful feedback in their review, and I believe this can be helpful for the authors to improve their work. In its current form, the paper is not ready to be accepted and I recommend rejection. I encourage the authors to resubmit this work.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Interesting but not good enough.\"}", "{\"title\": \"Comment on the revised version\", \"comment\": \"We want to again thank our reviewers for their helpful and kind feedback. This comment provides a short summary of our contributions and the changes we made in the revised version of the paper.\", \"contributions\": \"In this work, we analyzed the advantages of learning heteroscedastic models of observation and especially process noise through differentiable bayesian filters. \\nFor this, we evaluated training the noise models through 4 different filtering algorithms: Differentiable versions of Extended Kalman Filter and Particle Filter had already been proposed in related work (Haarnoja et al., 2016, Jonschkowski et al., 2018, Karkus et al., 2018), but we also added differentiable versions of two Unscented Kalman Filter variants. In addition to comparing within the different filters, we also evaluated learning the noise models on two different tasks: The Kitti Visual Odometry task has a low-dimensional state and smooth dynamics. 
In contrast, the planar pushing task has discontinuous dynamics and is known to follow a heteroscedastic noise model. It also has a much higher-dimensional state than any other task evaluated in related work.\", \"summary_of_revisions\": [\"Minor changes following the questions of our reviewers to clarify things or add information\", \"Following the suggestion of Reviewer 1, we added a section about variational methods for learning in state space models to the related works section. They offer an alternative approach to learning as compared to our method of backpropagation through differentiable filtering algorithms. Experiments that compare both approaches would be very interesting, but could not be carried out within the rebuttal period.\", \"We updated the results for the Kitti experiments to use the full test set available. To improve the comparability of the obtained results with related work (Haarnoja et al., 2016, Jonschkowski et al., 2018), we changed the experimental setting to allow for finetuning of the perception network during training. This improved the overall results on the full test set, but did not change the conclusions drawn from the experiment. We also rewrote the discussion of the results to better explain the differences in performance between the different filtering algorithms.\"], \"references\": \"Tuomas Haarnoja, Anurag Ajay, Sergey Levine, and Pieter Abbeel. Backprop KF: Learning discriminative deterministic state estimators. In Advances in Neural Information Processing Systems. 2016\\n\\nRico Jonschkowski, Divyam Rastogi, and Oliver Brock. Differentiable particle filters: End-to-end learning with algorithmic priors. In Proceedings of Robotics: Science and Systems, Pittsburgh, USA, 2018.\\n\\nPeter Karkus, David Hsu, and Wee Sun Lee. Particle filter networks: End-to-end probabilistic localization from visual observations. 2018\"}", "{\"title\": \"Response (3/3)\", \"comment\": [\"Section 6:\", \"\\\"Large outliers in the prediction of the preprocessing networks were not associated with higher observation noise.\\\" I don't see on what presented results these conclusions were drawn, as this is the first time the word \\\"outlier\\\" is mentioned in the paper. Outliers seem indeed important, as they contradict the typical assumptions e.g. of Gaussian noise, so it would be useful to clarify how the proposed techniques handle such outliers.\"], \"reply\": \"Agreed, we tried to make it clearer what we meant by outliers.\\nIn this case, outliers mostly meant \\\"unusually bad predictions\\\", especially of the object position in the pushing task. An important point here is that on the pushing task there is no structural explanation for the bad predictions (such as for example occlusions). Therefore we do not think that they actually violate a Gaussian assumption about the observation noise.\", \"on_the_question_of_how_the_method_handles_outliers\": \"The idea behind using a heteroscedastic noise model is that it allows us to assign different levels of noise to different inputs. For example, if the object is occluded in the image, a high observation noise can be predicted. This flags the observations in this timestep as unreliable, such that the filters rely more on the process model prediction.\\nSuch outliers would indeed violate a global Gaussian assumption about the observation noise. To alleviate this, our method instead learns input-dependent \\\"local\\\" noise distributions. This allows it to capture e.g.
the noise in the unoccluded case with one distribution and the prediction errors in the case of occlusion with another one.\", \"references\": \"Rico Jonschkowski, Divyam Rastogi, and Oliver Brock. Differentiable particle filters: End-to-end learning with algorithmic priors. In Proceedings of Robotics: Science and Systems, Pittsburgh, USA, 2018.\\n\\nAlina Kloss, Stefan Schaal, and Jeannette Bohg. Combining learned and analytical models for\\npredicting action effects. arXiv preprint arXiv:1710.04102, 2017.\\n\\nSubham Sahoo, Christoph Lampert, and Georg Martius. Learning Equations for Extrapolation and Control. In Proceedings of the 35th International Conference on Machine Learning, PMLR 80:4442-4450, 2018.\"}", "{\"title\": \"Response (2/3)\", \"comment\": \"* Section 4.1: [...] How is z learned, via supervised learning (what is the target value for z)?\\nOr is z some latent representation that is jointly optimized with the filters? [..] \\nSo if I understand correctly, the function o for z = o(D) is thus learned offline w.r.t. some designed observation variables for which GT is available (from manual annotations?).\", \"reply\": \"True, there is also no reason why the (MC)UKF or PF should have any advantage on the kitti task: For one, the process model is not \\\"heavily\\\" nonlinear, such that the EKF is a good choice. \\nThe main problem here is that the sampling and the sigma points generate additional uncertainty about the state estimate. Usually, this would be resolved by observations, such that more weight can be given to the particles/sigma points that are closer to the true state. But in the visual odometry task, there are no observations of heading and position. Varying those parts of the state thus only adds uncertainty, but does not help. We updated the results section to better explain this.\\n\\nHowever, (Jonschkowski et al., 2018) obtained much better results with their version of the particle filter. The main difference from our version is that their perception model directly predicts likelihoods from the observations and the particles. This approach seems to allow for a better weighting of the particles. (We chose to use the same perception model as for the other filters to allow for better comparability between them)\"}", "{\"title\": \"Response (1/3)\", \"comment\": \"Thanks a lot for the positive and detailed review. The suggested experiments about weighting\\nthe different loss terms are a very interesting direction which we would like to explore further. However, due to the rather long training time of such models, we cannot include such experiments within the rebuttal period.\", \"replies_to_your_questions\": [\"p6. Footnote [...] So, are the current results on a single fold? Will the numbers in the tables, or the conclusions change after this review?\"], \"reply\": \"True, it is mostly an ablation study of concepts that were already proposed in other work but not evaluated in isolation. However, in contrast to previous work, we actually evaluate the heteroscedastic noise model on a task that is known to follow heteroscedastic process noise (pushing). In addition, we proposed the two differentiable UKF variants and did a comparison of the different filtering techniques on two different tasks.\\n\\nConcerning the question of whether all could be learned jointly: For the kitti task, (Jonschkowski et al., 2018) demonstrated that it is possible to learn everything jointly from scratch.
However, they reported better results when pretraining the models first and then finetuning through the filter.\nWe did not try it on the pushing task, which we expect to be more difficult to learn from scratch, as the state and observations have more dimensions. In general, the results found in e.g.\n(Kloss et al., 2017) and (Sahoo et al., 2018) also suggest that learning process models might not be desirable if a suitable analytical model is available, due to the limited generalization abilities of neural networks. \n\n* Is it possible to add priors on Q and R parameters for Bayesian treatment of learning model parameters? I can imagine that priors can guide the optimization to either adjust more of the Q or more of the R variance to improve the likelihood.\"}", "{\"title\": \"Response\", \"comment\": \"Thanks a lot for the helpful review. We agree that the suggested baseline experiment would be very useful and will try to add it in future versions. We can unfortunately not include it within the rebuttal period.\", \"replies_to_your_questions\": \"* In the UKF the Julier paper of 1997 also notes a heuristic solution for ensuring positive \ndefiniteness of the estimated covariance matrix if lambda is negative. Was this tried?\", \"reply\": \"We chose the number of test particles to be the same as was used in (Jonschkowski et al., 2018) to ensure comparability on the kitti task. \nAlthough we did not attempt timing experiments, the test time did not seem to increase much\nbetween 100 and 1000 particles, as long as the computations for each particle can still be run in parallel. This is of course dependent on the GPU in use. Due to this parallelism, there also seems to be no big difference between the computation times for the different filtering methods.\", \"references\": \"Rico Jonschkowski, Divyam Rastogi, and Oliver Brock. Differentiable particle filters: End-to-end learning with algorithmic priors. In Proceedings of Robotics: Science and Systems, Pittsburgh, USA, 2018.\"}", "{\"title\": \"Response\", \"comment\": \"Thanks a lot for the positive and encouraging feedback on our work! We highly appreciate the pointer\nto the variational Bayes methods and agree that not mentioning them was an oversight on our part. We have updated the related works section accordingly. \nIn future work, it would be very interesting to see how our approach compares to a variational method. Running such experiments is however not possible during the rebuttal period.\"}", "{\"title\": \"Nice study which (sadly) ignores large parts of the related work\", \"review\": \"The method revisits Bayes filters. It evaluates the benefit of training the observation and process noise models, while keeping all other models fixed. Experimentally, a clear performance boost is verified if heteroscedastic noise is used.\n\nFirst, I want to applaud the effort put into the study. I think it is very beneficial for the community to revisit classic algorithms and evaluate them in a broader and more recent context. I certainly will revisit this article and point colleagues to it. \n\nThe paper is well-written and the experiments seem to be well done. The review of the relevant models is adequate, although space-filling, since the methodology is not at the core of ICLR.
I however consider it highly relevant for the future of the field.\n\nHowever, there is a major flaw: the variational state-space model literature is completely ignored. I consider this blank spot unacceptable. Especially, the models proposed have already explored heteroskedastic noise models in contexts where state-space models and posterior approximations were fully trained. It is just that an ablation study was never done.\n\nI am very torn, as I like the paper in general but think that the recognition of the variational SSM literature needs to be added, and not having it in here would foster a separation of two \"micro communities\".\n\nHere is an incomplete list of articles, which can serve as starting points for a more thorough literature review.\n\n- Archer, E., Park, I. M., Buesing, L., Cunningham, J., & Paninski, L.\n(2015). Black box variational inference for state space models. arXiv preprint arXiv:1511.07367.\n- Fraccaro, M., S\u00f8nderby, S. K., Paquet, U., & Winther, O. (2016). Sequential neural models with stochastic layers. In Advances in neural information processing systems (pp. 2199-2207).\n- Karl, M., Soelch, M., Bayer, J., & van der Smagt, P. (2016). Deep variational Bayes filters: Unsupervised learning of state space models from raw data. ICLR 2017.", "rating": "6: Marginally above acceptance threshold", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}", "{\"title\": \"Small novelty with insufficient novelty\", \"review\": \"This paper presents a method to learn and use state- and observation-dependent noise in traditional Bayesian filtering algorithms. For observation noise, the approach consists of constructing a neural network model which takes as input the raw observation data and produces a compact representation and an associated (diagonal) covariance. Similarly for state process noise, a network predicts the (diagonal) covariance of the temporal process given the current state.\n\nThe paper notes that these noise models can be trained end-to-end by instantiating an (approximate) Bayesian filter. In particular, they explore the use of a Kalman Filter, Extended Kalman Filter, (Monte Carlo and regular) Unscented Kalman Filter and a Particle Filter.\n\nThe technique is applied to two different tasks, visual odometry on the KITTI dataset and a \"planar pushing\" task. The results show that the addition of a learned noise model made no significant difference on the KITTI dataset, with the EKF without learning performing as well as any of the other variations. The planar pushing task has a higher dimensional state space and more challenging noise dynamics. In that case some gains are seen with learning.\n\nOverall the contribution of this paper seems small and the experimental results insufficient. The observation that gradient-based training can be done through a Bayesian filter, as the paper pointed out, was developed elsewhere. Extending that to a more complex noise model seems like a rather small contribution. Indeed, the observational noise component was not found to have a significant or consistent impact and hence only the process noise is particularly notable. Further, at least one obvious and important baseline was missing. Specifically, process noise models could be trained independently by simply maximizing the likelihood of the next predicted state.
It's not clear that there's a significant benefit to training the model end-to-end in this case. There may well be, but that is something that should be demonstrated.\n\nA number of other, smaller issues:\n - Eq (4) should be written as a matrix inverse, not a fraction.\n - In the UKF the Julier paper of 1997 also notes a heuristic solution for ensuring positive definiteness of the estimated covariance matrix if lambda is negative. Was this tried?\n - How was the number of particles selected for the PF at test time? In particular, how did the computation time between the methods compare?", "rating": "4: Ok but not good enough - rejection", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}", "{\"title\": \"Good exploration of optimizing Bayesian filter noise variance through back propagation, but with incomplete results\", \"review\": \"This is a well-written paper which proposes to learn heteroscedastic noise models from data by optimizing the prediction likelihood end-to-end through differentiable Bayesian Filters. In addition to existing Bayesian filters, the paper also proposes two different versions of the [differentiable] Unscented Kalman Filter. Performance of the different filters and noise models is evaluated on two real-world robotic problems: Visual Odometry and visual tracking of an object pushed by the robot.\nWhile the general idea of learning the noise variances through backpropagation is a straightforward extension of existing work on differential Bayesian filters, the questions that the paper explores are important to make end-to-end learning of Bayesian filters more common. The results will help future research select the correct differential filter for their use case, and give insight into potential benefits (or lack thereof) of learning heteroscedastic or homoscedastic process noise, and/or observation noise.\nA downside is that the paper does not further explore how to weigh different loss terms which are apparently important to successfully train such models. Also unfortunate is the footnote which states that the current results are incomplete and will be updated, hence as a reviewer I am not sure which results and conclusions are valid right now.\", \"pros\": [\"clearly written\", \"useful experiments for those seeking to select a differential Bayesian filter, and learning (heteroscedastic) noise from data.\", \"experiments on real-world use cases rather than toy problems\"], \"cons\": [\"Incomplete experiments according to footnote, thus results and conclusions might change after this review.\", \"Unclear what the effect of the selected process / observation model is on the learned noise\"], \"below_are_more_detailed_comments_and_questions\": [\"p6. Footnote: \\\"due to time constraints, ..., results will be updated\\\" Is this acceptable? I have never seen such a notice when reviewing. So, are the current results on a single fold? Will the numbers in the tables, or the conclusions change after this review?\", \"If I understand correctly, the paper 'only' focuses on learning the heteroscedastic noise variance, but assumes that the deterministic non-linear parts of the process and observation models are fixed.
I did not find this very clearly stated in the paper, though at least the Appendix explicitly states the functions used for the process models.\", \"I would have liked to see in the paper more explanation on how the process and observation models were selected and validated in the experiments, since I expect that the validity of these functions affects the learned noise variances. Since the noise needs to account for the inaccuracies in the deterministic models, would the choice for these functions not impact your conclusions? And, would it or would it not be possible to learn both these deterministic models and the noise jointly from the training data?\", \"Is it possible to add priors on Q and R parameters for Bayesian treatment of learning model parameters? I can imagine that priors can guide the optimization to either adjust more of the Q or more of the R variance to improve the likelihood.\", \"Section 1:\", \"\\\"Our experiments show that ... \\\" This may be a matter of taste, but I did not expect to see the main conclusions already in the introduction. They should appear in the abstract to help out the quick reader. In the introduction, it appears as if you are talking about some separate preliminary experiments, on which you base some conclusions that will be used in the remainder of this paper.\", \"Section 3:\", \"So, mostly empirical study, since heteroscedastic noise models were already used?\", \"\\\"Previous work evaluated ... \\\" please add citations\", \"Section 4.1:\", \"\\\"train a discriminative neural network o with parameters wo to preprocess the raw sensory data D and thus create a more compact representation of the observations z = o(D;wo).\\\" At this point in the paper, I don't understand this. How is z learned, via supervised learning (what is the target value for z)? Or is z some latent representation that is jointly optimized with the filters? This only became somewhat clearer in Sec. 5.2 on p.8 where it states that \\\"We ... train a neural network to extract the position of the object, the contact point and normal as well as ...\\\". So if I understand correctly, the function o for z = o(D) is thus learned offline w.r.t. some designed observation variables for which GT is available (from manual annotations?).\", \"Section 4.2:\", \"\\\"we predict a separate Qi for every sigma point and then compute Q as the weighted mean\\\" \\u2192 So, separate parameters w_g for each sigma point i, or is a single learned non-linear function applied to all points?\", \"Section 4.3:\", \"Equation 14: inconsistent use of boldface script: should use bold sigma_t, and bold l_t?\", \"\\\"In practice, we found that during learning ... by only increasing the predicted variance\\\" \\u2192 This is an interesting observation, which I would have liked to see explored more. I understand that term (ii) is needed to guide the learning process, but in the end wouldn't we want to optimize the actual likelihood? So, could you (after the loss with (ii) converged) reduce \\\\lambda_2 to zero to properly optimize only the log likelihood without guidance from a good initial state? Or is it not possible to reliably optimize the likelihood via back-propagation at all for some reason?\", \"Section 5.1.1\", \"\\\"...
of varying length (from 270 to over 4500 steps) ...\" it would be good to mention the fps, to understand what real-world time horizons 50 / 100 frames correspond to.\", \"Section 5.1.2:\", \"Table 1: How are the parameters of the filters in the \\\"no learning\\\" column obtained? Are these tuned in some other way, or taken from existing implementations? Also, can you clarify if the 'no learning' parameters served as the initial condition for the learning approaches?\", \"Table 1, first row column Q+R: \\\"0.2\\\" \\u2192 Is there a missing zero here, i.e. \\\"0.20\\\"? Otherwise, the precision of reported results in this table is not consistent. Hard to say: is the mean of R+Q 0.2, and slightly lower than R+Qh, or could it be as high as 0.24?\", \"\\\"learning a heteroscedastic process noise model leads to big improvements and makes the filters competitive with the EKF\\\". Results for EKF still appear significantly better than the novel UKF, and even the PF (especially rotational error).\", \"Section 6:\", \"\\\"Large outliers in the prediction of the preprocessing networks were not associated with higher observation noise.\\\" I don't see on what presented results these conclusions were drawn, as this is the first time the word \\\"outlier\\\" is mentioned in the paper. Outliers seem indeed important, as they contradict the typical assumptions e.g. of Gaussian noise, so it would be useful to clarify how the proposed techniques handle such outliers.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
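A compact sketch of the recurring technical point in this thread, a network predicting an input-dependent diagonal noise covariance trained through a Gaussian likelihood, may help fix ideas. The architecture, the log-variance parameterization, and the loss below are my assumptions for illustration, not the paper's model or its Eq. (14).

```python
# Hypothetical heteroscedastic noise head; not the paper's implementation.
import torch
import torch.nn as nn

class DiagonalNoiseHead(nn.Module):
    def __init__(self, in_dim, state_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, x):
        # Predict log-variances; exponentiating keeps the diagonal of Q (or R)
        # strictly positive without any clipping heuristics.
        return torch.exp(self.net(x))

def gaussian_nll(pred, target, var):
    # 0.5 * [ err^2 / var + log(var) ], summed over state dimensions.
    err2 = (pred - target) ** 2
    return 0.5 * (err2 / var + torch.log(var)).sum(dim=-1).mean()
```

The log(var) term penalizes indiscriminately inflated variances, which is the failure mode probed by the review's question about loss term (ii); the guiding term the authors add on top is separate from this sketch.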
S1eB3sRqtm
Exploring Curvature Noise in Large-Batch Stochastic Optimization
[ "Yeming Wen", "Kevin Luk", "Maxime Gazeau", "Guodong Zhang", "Harris Chan", "Jimmy Ba" ]
Using stochastic gradient descent (SGD) with large batch-sizes to train deep neural networks is an increasingly popular technique. By doing so, one can improve parallelization by scaling to multiple workers (GPUs) and hence leading to significant reductions in training time. Unfortunately, a major drawback is the so-called generalization gap: large-batch training typically leads to a degradation in generalization performance of the model as compared to small-batch training. In this paper, we propose to correct this generalization gap by adding diagonal Fisher curvature noise to large-batch gradient updates. We provide a theoretical analysis of our method in the convex quadratic setting. Our empirical study with state-of-the-art deep learning models shows that our method not only improves the generalization performance in large-batch training but furthermore, does so in a way where the training convergence remains desirable and the training duration is not elongated. We additionally connect our method to recent works on loss surface landscape in the experimental section.
[ "optimization", "large-batch training", "generalization", "noise covariance" ]
https://openreview.net/pdf?id=S1eB3sRqtm
https://openreview.net/forum?id=S1eB3sRqtm
ICLR.cc/2019/Conference
2019
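To fix ideas before the discussion below, the update described in the abstract can be sketched as follows. Using squared per-example gradients as the diagonal empirical-Fisher estimate and the scaling constant `alpha` are assumptions for illustration, not the paper's exact recipe.

```python
# Hypothetical sketch of a large-batch step with diagonal-Fisher gradient noise.
import numpy as np

def noisy_large_batch_step(w, per_example_grads, lr=0.1, alpha=1.0, rng=None):
    """w: (D,) parameters; per_example_grads: (B, D), one gradient per example."""
    rng = np.random.default_rng() if rng is None else rng
    g = per_example_grads.mean(axis=0)                   # large-batch gradient
    diag_fisher = (per_example_grads ** 2).mean(axis=0)  # diag of empirical Fisher
    noise = rng.standard_normal(g.shape) * np.sqrt(alpha * diag_fisher)
    return w - lr * (g + noise)
```

The point debated in the thread is that this per-coordinate noise roughly matches the small-batch gradient variance (Fig 3b) while, unlike full-Fisher noise, not slowing optimization (Fig 3c).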
{ "note_id": [ "ryg83cWWeE", "BkeBSvna1E", "SJgEZScp14", "BJedjk56yE", "HyeuKhg814", "BylA5I6SyE", "SyeuC9hH1N", "BJxR7rTV1V", "SylgkRc4J4", "HJlZqUPEkV", "HJlD8AIEyE", "HJeyrOZyyV", "SkeavTvnRX", "rke-BB7qAX", "rklLxHQcAQ", "SyxV0EQqAQ", "B1lHKVmqCQ", "SJehhkWGpX", "B1gCwCJzaQ", "HJlX9S0bpm", "rJlPbhPypm", "HJxOq24cnX", "ryg_FtoF2m", "HJx0hZqPnm" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544784557694, 1544566589474, 1544557819544, 1544556447878, 1544060031557, 1544046229883, 1544043216080, 1543980325984, 1543970264183, 1543956104899, 1543953998555, 1543604279233, 1543433572845, 1543284024610, 1543283950390, 1543283916386, 1543283836619, 1541701555708, 1541697126088, 1541690763428, 1541532671350, 1541192847673, 1541155200314, 1541018038083 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper705/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper705/Authors" ], [ "ICLR.cc/2019/Conference/Paper705/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper705/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper705/Authors" ], [ "ICLR.cc/2019/Conference/Paper705/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper705/Authors" ], [ "ICLR.cc/2019/Conference/Paper705/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper705/Authors" ], [ "ICLR.cc/2019/Conference/Paper705/Authors" ], [ "ICLR.cc/2019/Conference/Paper705/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper705/Authors" ], [ "ICLR.cc/2019/Conference/Paper705/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper705/Authors" ], [ "ICLR.cc/2019/Conference/Paper705/Authors" ], [ "ICLR.cc/2019/Conference/Paper705/Authors" ], [ "ICLR.cc/2019/Conference/Paper705/Authors" ], [ "ICLR.cc/2019/Conference/Paper705/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper705/Authors" ], [ "ICLR.cc/2019/Conference/Paper705/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper705/Authors" ], [ "ICLR.cc/2019/Conference/Paper705/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper705/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper705/AnonReviewer2" ] ], "structured_content_str": [ "{\"metareview\": \"Dear authors,\\n\\nYour proposition of adding a noise scaling with the diagonal of the gradient covariance to the updates as a middle-ground between the identity and the full covariance is interesting and tackles the timely question of the links between optimization and generalization.\\n\\nHowever, the reviewers had concerns about the experiments that did not reveal to which extent each trick had an influence.\\nI would like to add that, even though the term Fisher is used for both the true Fisher and tne empirical one, these two matrices encore very different kind of information. In particular, the latter is only defined when there is a dataset. 
Hence, your case study (section 3.2) which uses the true Fisher does not apply to the empirical Fisher.\\n\\nI encourage the authors to pursue this direction but to update the experimental section in order to highlight the impact of each technique used.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Issues with the experiments and case study not corresponding to the actual algorithm\"}", "{\"title\": \"Further response to Reviewer 3\", \"comment\": \"We agree that the optimization can't be used alone to choose the step-size. All we want to say is that optimization is one of the criteria for choosing the step size, especially in the case that a difference in the training loss can be observed.\\n\\nBut how to choose the best step-size is orthogonal to our paper. Our main contribution is to point out that not only the variance but also the covariance structure matters in the optimization convergence by giving the following example: the gradients of LB + diag Fisher, LB + full Fisher and SB have roughly the same variance (see Fig 3b). However, in Fig 3c, the convergence of LB + diagonal Fisher is much faster than LB + full Fisher. In fact, it is roughly equal to the convergence of LB, which has a much smaller variance. We performed theoretical analysis in the convex case, conducted careful experiments and found that LB + diagonal Fisher also gives a better generalization performance.\"}", "{\"title\": \"Re: Further response to Reviewer 3\", \"comment\": \"Under large-batch SGD, as shown in Figure 1 in Hoffer et al.'s paper [1], although different batch sizes could achieve similar training loss, it could be observed that the generalization gap still exists. Therefore, I still have doubts about how the optimization convergence could be used as a criterion for step-size selection. According to the author, SGD with different batch sizes would be selected to have similar step-sizes since their training losses under the same number of epochs look similar.\\n\\n[1] Hoffer, Elad, Itay Hubara, and Daniel Soudry. \\\"Train longer, generalize better: closing the generalization gap in large batch training of neural networks.\\\" Advances in Neural Information Processing Systems. 2017.\"}", "{\"title\": \"ImageNet\", \"comment\": \"I would just like to comment on the ImageNet requirement: the resources required to perform extensive experiments on ImageNet are out of reach for many authors and as such cannot be made a requirement for a submission. Although I agree it would be nice to get results on larger datasets, it is unfair to penalize a work for not including them.\"}", "{\"title\": \"Further response to Reviewer 3\", \"comment\": \"Thanks for the response.\\n\\nI agree that more comprehensive experiments are needed if the paper just focuses on the large batch generalization aspect. However, the main contribution of the paper is the surprising role of variance we mentioned in the summary of contribution. Generalization improvement is a bonus of the curvature noise, where we gave a detailed discussion. We will revise the paper in the camera-ready version to make these points clearer in the writing. Additionally, running batch size 4K with ResNet44 or VGG16 on ImageNet is really beyond our capability (In [1], to fit batch size 8192 with ResNet50, they use 256 GPUs and 50Gbit of network bandwidth).\\n\\nA better optimization scheme could mean either a lower train loss or faster convergence.
Slower convergence in the epoch budget (our setting) could lead to a higher train loss. So it should be considered a criterion for step-size selection.\\n\\n[1]: Priya Goyal, et al. \\\"Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour.\\\" arXiv preprint arXiv:1706.02677 (2017)\"}", "{\"title\": \"Re: Further response to Reviewer 3\", \"comment\": \"Thanks for the reply.\\nI agree with the author that generalization theory is currently missing, and that is the reason why I would expect a more comprehensive empirical study of the proposed method. \\nAs for experiments in [1], more optimization step updates imply better generalization, which is also observed in other literature such as [2]. However, I don't see the reasoning for how step size could be chosen according to the optimization behavior of the algorithm. And may I know what is meant by \\\"better optimization scheme\\\"? If it means optimization convergence, I do not see how a faster optimization convergence leads to better generalization for deep learning.\\n\\n[2] Hoffer, Elad, Itay Hubara, and Daniel Soudry. \\\"Train longer, generalize better: closing the generalization gap in large batch training of neural networks.\\\" Advances in Neural Information Processing Systems. 2017.\"}", "{\"title\": \"Further response to Reviewer 3\", \"comment\": \"We thank the reviewer for the quick response. Regarding the generalization analysis, we agree that it is not a complete picture. However, within the current uniform stability framework that we work with in Appendix B, there is little or nothing more we can do. It is not possible to explicitly measure the difference of the determinant of the diffusion matrix $\\\\Sigma_S(t)$ between taking full Fisher and diagonal Fisher. In that case, we would like to ask what type of theoretical analysis the reviewer wishes to see. It would certainly be interesting to provide rigorous guarantees for LB + diagonal Fisher and LB + full Fisher in the non-convex deep learning setting. However, we believe that this would be far too ambitious a task and the state of current deep learning theory offers no tools for us to do so. Additionally, the focus of the paper is the optimization difference among choices of curvature noise.\\n\\nIn [1], all plots are presented as out-of-sample-error (generalization) vs. steps. We can see that the model degradation of LB can be explained by not having enough optimization updates, which suggests a better optimization scheme is at least an important sign of better generalization.\\n\\n[1]: Shallue, Christopher J., et al. \\\"Measuring the Effects of Data Parallelism on Neural Network Training.\\\" arXiv preprint arXiv:1811.03600 (2018)\"}", "{\"title\": \"Further response to Reviewer 3\", \"comment\": \"We thank the reviewer for the timely response.\\n\\nRegarding using diagonal Fisher for generalization, we first explain the stability argument given in Appendix B in greater detail. The generalization bound is given by Eqn.
11, where the right-hand side is given by a Hellinger distance between two probability distributions. Since we are working over the convex-quadratic setting, we have Ornstein-Uhlenbeck processes and the Hellinger distance is given by Eqn. 13. As t goes to infinity, the terms of importance are the determinants of the diffusion matrices $\\\\Sigma_S(t)$. On page 14 of our paper, we gave the formula for the determinant of $\\\\Sigma_S(t)$ for the cases where the noise is either identity or full Fisher. For full Fisher, we see that there is less dependence on the data (more robust), and for identity, we have more dependence on the data (given by the denominator term), and so we choose diagonal Fisher as a middle ground. That explains why diagonal Fisher gives better generalization than LB.\\n\\nFurthermore, from a heuristic standpoint, we provided experiments on the maximum eigenvalue of the Hessian and the marginal variance of the gradients. In Figure 3b), we find that the marginal variance of the gradients for LB + diag F is much larger than LB and similar to SB/LB + full Fisher. In Figure 3c), we find that the maximum eigenvalue of LB + diag F is smaller than the baseline LB + GBN. While the connection is not completely explicit, the fact that such heuristics are correlated with generalization performance has been discussed extensively in the literature; for example in [1, 2, 3, 4]. \\n\\nFrom [5], the time-to-target-validation (generalization) is explained purely by optimization considerations. So optimization performance is definitely an important criterion for step-size selection.\\n\\n[1] Chaudhari, Pratik, et al. \\\"Entropy-sgd: Biasing gradient descent into wide valleys.\\\" arXiv preprint arXiv:1611.01838 (2016)\\n[2] Chaudhari, Pratik, and Stefano Soatto. \\\"Stochastic gradient descent performs variational inference, converges to limit cycles for deep networks.\\\" 2018 Information Theory and Applications Workshop (ITA). IEEE, 2018.\\n[3] Keskar, Nitish Shirish, et al. \\\"On large-batch training for deep learning: Generalization gap and sharp minima.\\\" arXiv preprint arXiv:1609.04836 (2016).\\n[4] Sagun, Levent, et al. \\\"Empirical Analysis of the Hessian of Over-Parametrized Neural Networks.\\\" arXiv preprint arXiv:1706.04454 (2017).\\n[5] Shallue, Christopher J., et al. \\\"Measuring the Effects of Data Parallelism on Neural Network Training.\\\" arXiv preprint arXiv:1811.03600 (2018)\"}", "{\"title\": \"Results without linear scaling learning rate.\", \"comment\": \"We added the results with square-root learning-rate scaling in Appendix E. We also reproduce them here.\\n\\nDataset | Network | SB | LB+GBN | Diag-F+GBN |\\nCifar10 | VGG16 | 93.25 | 91.6 | **92.9** |\\nCifar100 | VGG16 | 72.83 | 69.1 | **71.5** |\\nCifar10 | Resnet44 | 93.42 | 91.7 | **92.6** |\\nCifar100 | Resnet44x2 | 75.55 | 72.8 | **73.6** |\"}", "{\"title\": \"Re: Further response to Reviewer 3\", \"comment\": \"The generalization analysis in Appendix B gives a generalization bound using uniform stability; however, it remains to be explained why diagonal Fisher should be used, from the generalization perspective (which is also mentioned in the paper).\\nAnd for learning rate scaling, I do not really see why the optimization performance serves as a rule for choosing the step-size for good generalization. 
\\nExperiment-wise, as the theory part does not explicitly justify the use of diagonal Fisher information for generalization, experiments on standard ImageNet would be necessary to demonstrate its superior performance.\"}", "{\"title\": \"Further response to Reviewer 3\", \"comment\": \"We thank the reviewer for pointing out more potential concerns. We address them as follows:\\n\\n- Motivation for using diagonal Fisher to improve generalization\\n\\nIn the convex-quadratic setting, we provided a theoretical analysis in Appendix B of how choosing different noise covariance structures in Eqn. 2 impacts the generalization error. The choices of noise covariance equal to full Fisher and equal to identity are extreme cases, and diagonal Fisher represents a \\u201cmiddle ground\\u201d. The Hellinger distance is much smaller when diagonal Fisher is chosen compared to identity, leading to a tighter generalization bound in Eqn. 11. \\n \\nMoreover, the experiments show that taking the diagonal preserves some generalization benefits of full Fisher. An example of this is reported in Table 1, where we trained a ResNet-44 model on CIFAR-10: the validation accuracy for full Fisher was 93.22 while diagonal Fisher was 92.72 (whereas the baseline LB + GBN is 91.92).\\n\\n- Results without linear scaling\\n\\nWe have included extra experiments with square-root scaling in Appendix E. Indeed, in the experiments, full Fisher has the same behavior as SB and no LR scaling is required for it. However, for diagonal Fisher, since the optimization property is closer to LB, the learning rate needs to be scaled (either linear or square root) to get optimal performance. \\n\\n- ImageNet experiments\\n\\nWell-designed experiments on MNIST and CIFAR-10/100 have a good track record of generalizing to large datasets such as ImageNet. In the recent systematic studies of LB training in [1], many of the experimental results are qualitatively consistent between different datasets (ranging from MNIST to Open Images) and architectures (ranging from Fully Connected to ResNet-50) within epoch-budget regimes.\\n\\nWe want to reiterate that the central focus of our paper is not an empirical study on closing the generalization gap of LB. Rather, we discovered that adding diagonal Fisher gradient noise to LB gives significantly better optimization performance than full Fisher (and also SB) while they roughly share the same gradient variance, and that it improves the generalization of LB. We gave a full justification in the convex-quadratic setting.\\n\\nWe thank the reviewer again for responding in such a timely manner. Please let us know if there are more concerns with any other parts of the paper. \\n\\n[1]: Shallue, Christopher J., et al. \\\"Measuring the Effects of Data Parallelism on Neural Network Training.\\\" arXiv preprint arXiv:1811.03600 (2018)\"}", "{\"title\": \"Re: Response to Reviewer3\", \"comment\": \"Thanks for the reply. I still have some concerns after reading the response.\\n1. I can see the motivation for using diagonal Fisher noise from the optimization perspective; however, it remains unclear to me how it helps generalization more than full Fisher noise. And we know that in deep learning, optimization does not guarantee generalization in general. The convergence theory part mostly only cares about optimization, so I would be more interested in seeing the motivation from the generalization perspective for diagonal Fisher noise.\\n2. As with Reviewer 2, I would hope to see experiments without linear scaling. 
As the proposed Fisher noise is meant to account for the noise component, it is expected to have similar behavior to small-batch SGD. \\n3. Also, I feel that ImageNet is a standard dataset for studying the generalization of large-batch training, as also reported in the recent literature (Hoffer et al. 17, Goyal et al. 17). Therefore, it is of much interest as the paper is proposing an empirical approach for closing the generalization gap.\"}", "{\"title\": \"Response to Reviewer2\", \"comment\": \"We would like to thank the reviewer for their feedback and suggestions to improve this paper. We address the reviewer\\u2019s concerns:\\n\\n-\\u201cAs a caveat, I think the authors should also point out that the convergence rate would be best when C is set to 0 in the result of the theorem. This implies no noise is used during SGD updates. However, this would imply the regularization effect from the noise will also vanish which would lead to poor generalization.\\u201d\\n\\nThe reviewer completely understands our point here. We do thank the reviewer for raising this point, and in the latest version of our paper, we added this remark right after Theorem 3.1. \\n\\n-\\u201cthe learning rate is linearly scaled proportional to the large batch-size with a warmup scheme similar to Goyal et al (2017) and ghost batch normalization is used similar to Hoffer et al (2017). The former two tricks have individually been shown on their own to close the generalization gap for large batch-size training on large datasets like ImageNet.\\u201d\\n\\nThe learning rate scheduling in Goyal et al, 2017 (linear scaling w.r.t. batch-size) and Hoffer et al, 2017 (square-root scaling w.r.t. batch-size) indeed helps generalization in their experiments. However, the recent paper [1] shows that there is no optimal learning-rate scaling scheme. As long as the learning rate scheduling is consistent for all LB methods, we can accurately measure the gain in performance from utilizing diagonal Fisher. In the latest version of our paper, we added square-root scaling in Table 3 of Appendix E for completeness purposes. \\n\\n-\\u201cFinally, the accuracy numbers for the proposed method is only marginally better than the baseline where isotropic noise is added to the large batch-size gradient.\\u201d\\n\\nWe thank the reviewer for bringing this confusion in the writing to our attention. In Table 1, Isotropic + GBN is not exactly the baseline with isotropic Gaussian noise. In the main body of the text, what isotropic means is isotropic Gaussian noise scaled by the trace of the Fisher matrix. We apologize for this lack of clarity and have revised the writing in the most recent version. What this means is that in Eqn. 2 of the paper, we choose the covariance matrix D to be the square root of the trace of Fisher and not the identity matrix. In fact, this is an important point in the paper: the choice of a covariance matrix D in Eqn. 2 has strong implications for both optimization and generalization performance. The reported numbers in Table 1 demonstrate that if we choose D to be Tr(F), we improve over the baseline (taken from Hoffer et al, 2017) but do not match our proposed method in the last column. \\n\\nWe add that if we choose the covariance matrix D to be identity (isotropic Gaussian noise), then the performance is significantly worse. \\n\\n-\\u201cConcerning expected gradient over joint distribution on dataset is approximately zero\\u201d\\n\\nWe thank the reviewer for raising this point, and we have changed the writing accordingly to remove the previous confusion. 
\\n\\nAgain, we appreciate the insightful comments made by this reviewer. In light of the changes that we made as suggested by the reviewer, as well as the contributions in our meta-review, we would appreciate it if the reviewer could increase their score. \\n\\n[1]: Shallue, Christopher J., et al. \\\"Measuring the Effects of Data Parallelism on Neural Network Training.\\\" arXiv preprint arXiv:1811.03600 (2018)\"}", "{\"title\": \"Response to Reviewer1\", \"comment\": \"We would like to thank the reviewer for the positive comments regarding our work.\\n\\n\\u201cThere is a typo in the next line of Eq. (2): \\\\nabla_{M_L} (\\\\theta_k)} -> \\\\nabla_{M_L} L(\\\\theta_k)}\\u201d\\nWe have corrected this typo in the latest version of our paper.\\n\\nGiven the reviewer\\u2019s positive assessment of our paper, and in consideration of the novel contributions of this paper given in our meta-review, we would very much appreciate it if the reviewer could increase their score.\"}", "{\"title\": \"Response to Reviewer3\", \"comment\": \"We would like to thank the reviewer for the comments and suggestions to improve this paper. We address the reviewer\\u2019s concerns:\\n\\n-\\u201cThe idea of exploring the curvature information in the noise in SGD has been studied in (Hoffer et al. 2017). The difference between this approach and the proposed method in the paper is the use of diagonal fisher instead of the empirical fisher.\\u201d\\n\\nHoffer et al, 2017 discussed intrinsic curvature noise in SGD (as do many other recent papers such as Smith et al, 2017, Chaudhari et al, 2017, etc.). The proposed solutions in all these papers did not explicitly implement any form of empirical Fisher gradient noise. The main approach implemented in Hoffer et al, 2017 is the use of Ghost-Batch Normalization (GBN). GBN, similar to usual BN, should be thought of as an architectural modification rather than as incorporating curvature noise information. In addition, the training procedure is elongated for LB. However, extending the training regime for LB is against the very goal of using LB in the first place. In contrast, we show that using diagonal Fisher noise preserves the desirable convergence performance of LB training per parameter update and significantly improves the generalization performance of LB without training longer.\\n\\n- \\u201cAlthough there is convergence analysis provided under convex quadratic setting, I feel that the motivation behind using diagonal fisher for faster convergence is not clear to me, although in the experiment part, the comparison of some of the statistics of diagonal fisher appear similar to the small batch SGD. The intuition of using diagonal fisher for faster convergence in generalization performance is still missing from my perspective.\\u201d\\n\\nThe intuition can be understood in the following way. In Fig 1, we notice that the empirical full Fisher update is orthogonal to the loss curvature. Thus, adding full Fisher noise to the gradients gives a large perturbation in the high curvature direction, which leads to a higher expected training loss. In comparison, taking only the diagonal results in a smaller perturbation in the high curvature direction, which leads to a smaller expected training loss. \\n\\nThis is quantified in Theorem 3.1 for the convex quadratic setting. The overall convergence rate of the bound is O(1/k), but the constant is Tr(C^TAC). The difference between using full Fisher (C=\\\\sqrt{A}) and diagonal Fisher (C=\\\\sqrt{diag A}) is exactly the difference between their Frobenius norms. 
We show that this carries over to the deep learning setting: in Figure 3a), we show that the Frobenius norm of full Fisher is much larger than that of diagonal Fisher. Finally, in Fig 3c), we showed that FB (full-batch) + diagonal Fisher attains much faster training than FB + full Fisher, which verifies the above-mentioned statement.\\n\\n-\\u201cIn the convergence analysis, as there is a difference between the full fisher and diagonal fisher in the Tr(C\\u2019AC) term. It would be interesting to see the effect of how this term play on convergence rate, and also how this term scale with batch size. But this is more of a minor issue as we are mostly caring about its generalization performance which is different from optimization error convergence.\\u201d\\n\\nThe difference between the Frobenius norm of the Fisher matrix and the diagonal Fisher matrix is independent of the batch-sizes involved. That being said, the batch-sizes are used in the coefficient \\\\sqrt{N-M}{NM} before the diagonal Fisher term in Algorithm 1. \\n\\n-\\u201cThe experiments are conducted on MNIST and CIFAR-10/100, which I feel is a bit insufficient for a paper that deals with generalization gap in large batch. As in large batch training, we care more about bigger dataset such as ImageNet, and hence I would expect results reported on various models on ImageNet.\\u201d \\n\\nThe reason that all of our experiments were conducted on smaller models and datasets such as MNIST and CIFAR-10/100 is constraints on computing resources. We did not have the computing power or budget to run experiments on ImageNet. However, we feel that the empirical analysis given in our paper addresses key research questions concerning both the convergence and the generalization of LB training. \\n\\n-\\u201cAnother interesting thing to show would be the generalization error over epochs for different methods, which could give a more detailed characterization of the behavior of different methods.\\u201d \\nPlease see Appendix E in the latest version of our paper. \\n\\nIn light of the changes that we made as suggested by the reviewer, as well as the contributions in our meta-review, we would appreciate it if the reviewer could reconsider their score.\"}", "{\"title\": \"Summary of contributions and changes.\", \"comment\": \"Summary of central contributions:\\n\\n-The role of variance: Variance-reduction techniques are commonly used in machine learning to improve convergence. However, convergence is related not only to the variance but also to the covariance structure. In this paper, we found a surprising, counter-intuitive phenomenon. In particular, the following three methods: the gradients of LB + diag Fisher, LB + full Fisher and SB have roughly the same variance (see Fig 3b). However, in Fig 3c, the convergence of LB + diagonal Fisher is much faster than LB + full Fisher. In fact, it is roughly equal to the convergence of LB. Moreover, over the convex-quadratic setting, we give a theoretical convergence guarantee for FB (full-batch) + diagonal Fisher and FB + full Fisher in terms of their Frobenius norms. We demonstrate that the Frobenius norm analysis carries over to deep neural networks in our experiments.\\n\\n-In a recent comprehensive study on LB training [1], the authors showed that epoch-budget training favors the SB regime and is disadvantageous for LB. Moreover, the work of [2] shows that LB\\u2019s generalization performance can be remedied by extending the number of epochs trained. 
In this paper, we demonstrated that using diagonal Fisher noise allows us to improve LB generalization performance within epoch-budget training (while, in addition, retaining LB\\u2019s fast convergence per iteration). \\n\\n-Analyzing optimization and generalization together: In this paper, we examined both the optimization and the generalization performance of LB training in unison rather than separately. Recent works (for example, [3]) indicate that optimization and generalization in deep learning cannot be decoupled. \\n\\n[1]: Shallue, Christopher J., et al. \\\"Measuring the Effects of Data Parallelism on Neural Network Training.\\\" arXiv preprint arXiv:1811.03600 (2018) \\n[2]: Hoffer, Elad, Itay Hubara, and Daniel Soudry. \\\"Train longer, generalize better: closing the generalization gap in large batch training of neural networks.\\\" Advances in Neural Information Processing Systems. 2017.\\n[3]: Neyshabur, Behnam, et al. \\\"Exploring generalization in deep learning.\\\" Advances in Neural Information Processing Systems. 2017.\\n\\n============================================================================================\", \"summary_of_changes\": \"-Revised Section 3.3 \\n-Minor rewriting of Section 4.3 as suggested by Reviewer 2 to remove previous confusion about the experimental setup\\n-Added a paragraph at the end of Theorem 3.1 discussing how taking C=0 impacts optimization and generalization. This was suggested by Reviewer 2\\n-Added Table 3 in the Appendix using the square-root scaling scheme. This was requested by Reviewer 2\\n-Added Figure 4 in Appendix E showing the generalization performance of different regimes with respect to the number of epochs. This was requested by Reviewer 3\"}", "{\"title\": \"Re: About the linear scaling scheme and gbn\", \"comment\": \"I agree that GBN and the extended training regime are against the goal of training faster using a large batch size. But at least it does show, from a research point of view, that the generalization gap can be closed when using a large batch-size. However, I find the baseline reported in Hoffer et al to be slightly weak compared to what I have seen in my experience. So I believe the gap is still present even when using their tricks, and the problem is not completely solved yet.\\n\\nI will wait for your experiments using diagonal Fisher without the linear learning rate scheme.\"}", "{\"title\": \"About the linear scaling scheme and gbn\", \"comment\": \"In Table 1, where we reported the baseline (LB+GBN), we also used the linear scaling scheme. It shows that the linear scaling scheme can't close the generalization gap on its own, at least without the other tricks used in (Goyal et al 2017). The only difference between the baseline in our work and the (LB+GBN) in (Hoffer et al 2017) is the learning rate scheme (linear scaling vs. square root scaling). Hence, the gain from the linear learning rate scaling can be observed from the improvement of our numbers over theirs. But we still think it is interesting to see how diagonal Fisher works without the linear learning rate scaling scheme. We can report it in the upcoming revision.\\n\\nAs for GBN, it was shown in (Hoffer et al 2017) that it needs more iterations to close the generalization gap on its own, which is against the motivation of LB training. 
We reported the numbers for (LB+GBN+noise) because it produces the best results without training longer, but we think the gain from curvature noise is still clear and, more importantly, consistent with the convergence analysis.\\n\\nAnother detail is that the isotropic noise in Table 1 is scaled by the Fisher trace norm, so it is still curvature noise because it depends on the network parameters.\"}", "{\"title\": \"Re: Clarifying experimental setup in Sec. 4.3\", \"comment\": \"I see. But there were still two other tricks used-- the linear scaling scheme for the learning rate and ghost batch-normalization, and the former has been shown to close the generalization gap on its own (Goyal et al 2017). Could you also report the improvement using diagonal Fisher conditioning but without the linear learning rate scaling scheme (maybe on CIFAR-100)? I think it should not take long to run. According to the claim, the Fisher noise that is being added to the large batch gradient should lead to similar generalization as the base case, while improving convergence as pointed out by the theory. So I am curious about where most of the gain is coming from-- the linear learning rate scaling scheme or the proposed Fisher noise.\"}", "{\"title\": \"Clarifying experimental setup in Sec. 4.3\", \"comment\": \"We apologize for the confusion in the writing in Section 4.3 and the issues that stemmed from it. We clarify our experimental setup here. All of the results concerning LB=4096 (the last three columns presented in Table 1) are strictly LB during the entire training process. We did not use SB=128 for the first 50 epochs of training.\\n\\nWe did perform an initial experiment where we used SB for the first 50 epochs and LB for 150 epochs to suggest that the noise is relevant in the early stages of training. The results of this particular experiment are reported as \\u201cBatchChange\\u201d in Appendix F. We reiterate that this is not a preferred solution; it is not strictly LB training during the entire process and hence it sacrifices the benefits of LB training in the early stages of training. \\n\\nIn upcoming versions of this paper, we will modify Section 4.3 along with other sections of the paper to improve clarity and presentation. We will address your other comments and concerns in later rebuttals.\"}", "{\"title\": \"curvature noise for large batch training in DNNs\", \"review\": \"In this paper, the authors propose a method to close the generalization gap that arises in training DNNs with large batches. The authors reason about the effectiveness of small-batch SGD training by looking at the curvature structure of the noise. Instead of using the na\\u00efve empirical Fisher matrix, the authors propose to use diagonal Fisher noise for large-batch SGD training of DNNs. The proposed method is shown empirically to achieve both comparable generalization and a training speedup compared to small-batch training. A convergence analysis is provided for the proposed method under a convex quadratic setting.\\n\\nThe idea of exploring the curvature information in the noise in SGD has been studied in (Hoffer et al. 2017). The difference between this approach and the proposed method in the paper is the use of diagonal Fisher instead of the empirical Fisher. 
Although there is a convergence analysis provided under a convex quadratic setting, I feel that the motivation behind using diagonal Fisher for faster convergence is not clear to me, although in the experimental part, some of the statistics of diagonal Fisher appear similar to those of small-batch SGD. The intuition of using diagonal Fisher for faster convergence in generalization performance is still missing from my perspective. \\n\\nIn the convergence analysis, there is a difference between full Fisher and diagonal Fisher in the Tr(C\\u2019AC) term. It would be interesting to see the effect this term has on the convergence rate, and also how this term scales with batch size. But this is more of a minor issue, as we mostly care about the generalization performance, which is different from optimization error convergence. \\n\\nIn the experiments section, the authors claim that the noise structure is only important for the first 50 epochs. But it would be better if the authors could show experimental results using the same training method throughout the experiment. The experiments are conducted on MNIST and CIFAR-10/100, which I feel is a bit insufficient for a paper that deals with the generalization gap in large-batch training. In large-batch training, we care more about bigger datasets such as ImageNet, and hence I would expect results reported for various models on ImageNet. Another interesting thing to show would be the generalization error over epochs for the different methods, which could give a more detailed characterization of their behavior.\\n\\nOverall, I feel the motivation and intuition behind the proposed method are not clear enough, and the experimental studies are not sufficient for understanding the behavior of the proposed method as an empirical paper.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Algorithm derivation is reasonable and interesting\", \"review\": \"Summary:\\nThis paper proposes a method which improves the generalization performance of large-batch SGD by adding diagonal Fisher matrix noise.\\nIn the theoretical analysis, it is shown that gradient descent with the diagonal noise is faster than with the full-matrix noise on positive-quadratic problems.\\nMoreover, the effectiveness of the method is verified in several experiments.\\n\\nComments:\\nThe idea of the proposed method is based on the following observations and assumptions:\\n- Stochastic gradient methods with small batches can be regarded as a gradient method with Fisher matrix noise.\\n- The generalization ability is comparable between the diagonal Fisher and the full Fisher matrix.\\n- A gradient method with diagonal Fisher is faster than one with the full Fisher matrix.\\nThis conjecture is theoretically validated for the case of quadratic problems.\\n\\nIn short, the algorithm derivation seems to be reasonable and the derived algorithm is executable.\\nMoreover, the experiments are well conducted and the results are also good.\\n\\nMinor comment:\\n- There is a typo in the next line of Eq. 
(2):\\n\\\\nabla_{M_L} (\\\\theta_k)} -> \\\\nabla_{M_L} L(\\\\theta_k)}\\n\\nIn addition, the notation \\\"l_i\\\" is not defined at this point.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Interesting and insightful theory but weak experimental setup\", \"review\": \"It has previously been observed that training deep networks using large batch-sizes leads to a larger generalization gap compared to the gap when training with a relatively small batch-size. This paper proposes to add noise sampled from the diagonal \\\"empirical\\\" Fisher matrix to the large batch gradient as a method for closing the generalization gap. The authors motivate the use of the empirical Fisher for sampling noise by arguing that the covariance of gradients from small batch-sizes can be seen as approximately equal to a scaled version of the Fisher matrix. It is then pointed out that using the Fisher matrix directly to sample noise could in principle close the generalization gap but would lead to slow convergence similar to SGD with a small batch-size. The authors then claim that the convergence speed is better when noise is sampled from the diagonal Fisher matrix instead of the full Fisher matrix. This claim is proven in theory for a convex quadratic loss surface, and experiments are conducted to empirically verify this claim both in the quadratic setting and for realistic deep networks. Finally, an efficient method for sampling noise from the diagonal empirical Fisher matrix is proposed.\\n\\nComments:\\nI think the paper is very well written and the results are presented clearly. In terms of novelty, I found the argument about convergence using diagonal Fisher being faster compared with full Fisher quite interesting, and its application for large batch training to be insightful. \\n\\nAs a minor comment, for motivating theorem 3.1, it is pointed out by the authors that the diagonal Fisher acts as an approximation of the full Fisher and hence their regularization effects should be similar while convergence should be faster for diagonal Fisher. As a caveat, I think the authors should also point out that the convergence rate would be best when C is set to 0 in the result of the theorem. This implies no noise is used during SGD updates. However, this would imply the regularization effect from the noise will also vanish which would lead to poor generalization. \\n\\n\\nHowever, there is a crucial detail that makes the main argument of the paper weak. In the main experiments in section 4.3, for the proposed large batch training method, the authors mention that they use a small batch-size of 128 for the first 50 epochs similar to Smith et al (2017) and then switch to the large batch-size of 4096, at which point the learning rate is linearly scaled proportional to the large batch-size with a warmup scheme similar to Goyal et al (2017) and ghost batch normalization is used similar to Hoffer et al (2017). The former two tricks have individually been shown on their own to close the generalization gap for large batch-size training on large datasets like ImageNet. This paper combines these tricks and adds noise sampled from the diagonal Fisher matrix on top when switching to the large batch-size after epoch 50, and reports experiments on smaller datasets-- MNIST, Fashion MNIST and the CIFAR datasets. 
Finally, the accuracy numbers for the proposed method are only marginally better than the baseline where isotropic noise is added to the large batch-size gradient. For these reasons, I do not consider the proposed method a significant improvement over existing techniques for closing the generalization gap for large batch training.\\n\\nThere is also a statement in the paper that is problematic but can be fixed by re-writing. In the paper, the empirical Fisher matrix, as termed by the authors, refers to the Fisher matrix where the target values in the dataset are used as the output of the model, rather than sampling it from the model itself as done for computing the true Fisher matrix. This empirical (diagonal) Fisher matrix is used to sample noise which is added to the large batch gradient in the proposed method. It is mentioned that the covariance of the noise in small batch SGD is exactly the same as the empirical Fisher matrix. This claim is premised on the argument that the expected gradient (over the dataset) is unconditionally roughly 0, i.e., throughout the training. This is absolutely false. If this were the case, gradient descent (using the full dataset) would not be able to find minima, and this is far from the truth. Even if we compare the scale of the expected gradient to the mini-batch gradient (for a small batch-size), the scale of these two gradients at any point during training (using say small batch-size SGD) is of the same order. I state the latter from my personal experience. The authors can verify this as well.\\n\\nOverall, while I found the theoretical argument of the paper to be mostly interesting, I was disappointed by the experimental details as they make the gains from the proposed method questionable when considered in isolation from the existing methods that close the generalization gap for large batch training.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}" ] }
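To make the update rule debated in this thread concrete, here is a minimal numpy sketch of one large-batch SGD step with additive diagonal-Fisher noise. This is not the authors' code: the function name is ours, and the noise scale sqrt(1/n_small - 1/n_large) is one reading of the \sqrt{(N-M)/(NM)} coefficient quoted from Algorithm 1 in the rebuttal above.

```python
import numpy as np

def lb_step_with_diag_fisher_noise(w, per_example_grads, lr, n_small, n_large, rng):
    # Large-batch gradient and the diagonal of the empirical Fisher,
    # both estimated from per-example gradients of the current batch.
    g = per_example_grads.mean(axis=0)
    diag_fisher = (per_example_grads ** 2).mean(axis=0)
    # Scale the injected noise so its variance mimics the extra gradient
    # noise of small-batch SGD (an assumed reading of the coefficient).
    scale = np.sqrt(max(1.0 / n_small - 1.0 / n_large, 0.0))
    noise = rng.normal(size=g.shape) * np.sqrt(diag_fisher) * scale
    return w - lr * (g + noise)

rng = np.random.default_rng(0)
w = np.zeros(10)
grads = rng.normal(size=(4096, 10))  # stand-in per-example gradients
w = lb_step_with_diag_fisher_noise(w, grads, lr=0.1, n_small=128, n_large=4096, rng=rng)
```

Replacing `diag_fisher` with the full covariance of the per-example gradients would give the full-Fisher variant that the rebuttal reports converging more slowly.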
S1lVniC5Y7
From Nodes to Networks: Evolving Recurrent Neural Networks
[ "Aditya Rawal", "Jason Liang", "Risto Miikkulainen" ]
Gated recurrent networks such as those composed of Long Short-Term Memory (LSTM) nodes have recently been used to improve state of the art in many sequential processing tasks such as speech recognition and machine translation. However, the basic structure of the LSTM node is essentially the same as when it was first conceived 25 years ago. Recently, evolutionary and reinforcement learning mechanisms have been employed to create new variations of this structure. This paper proposes a new method, evolution of a tree-based encoding of the gated memory nodes, and shows that it makes it possible to explore new variations more effectively than other methods. The method discovers nodes with multiple recurrent paths and multiple memory cells, which lead to significant improvement in the standard language modeling benchmark task. Remarkably, this node did not perform well in another task, music modeling, but it was possible to evolve a different node that did, demonstrating that the approach discovers customized structure for each task. The paper also shows how the search process can be speeded up by training an LSTM network to estimate performance of candidate structures, and by encouraging exploration of novel solutions. Thus, evolutionary design of complex neural network structures promises to improve performance of deep learning architectures beyond human ability to do so.
[ "Recurrent neural networks", "evolutionary algorithms", "genetic programming" ]
https://openreview.net/pdf?id=S1lVniC5Y7
https://openreview.net/forum?id=S1lVniC5Y7
ICLR.cc/2019/Conference
2019
{ "note_id": [ "S1Wpco0el4", "HkgjYAaj3Q", "HylpXeaq3Q", "S1ge9p1bhX" ], "note_type": [ "meta_review", "official_review", "official_review", "official_review" ], "note_created": [ 1544772501008, 1541295746628, 1541226533417, 1540582792258 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper704/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper704/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper704/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper704/AnonReviewer2" ] ], "structured_content_str": [ "{\"metareview\": \"In this work, the authors explore using genetic programming to search over network architectures. The reviewers noted that the proposed approach is simple and fast. However, the reviewers expressed concerns about the experimental validation (e.g., experiments were conducted on small tasks; issues with comparisons (cf. feedback from Reviewer2)), and the fact that the method were not compared against various baseline methods related to architecture search.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Work could be strengthened with additional experimental validation\"}", "{\"title\": \"An interesting idea but experiments and writeup need improvement.\", \"review\": \"A genetic algorithm is used to do an evolutionary architecture search to find better tree-like architectures with multiple memory cells and recurrent paths. To speed up search, an LSTM based seq2seq framework is also developed that can predict the final performance of the child model based on partial training results.\\n\\nThe algorithms and intuitions based on novelty search are interesting and there are improvements over baseline NAS model with the same architecture search space. \\n\\nAlthough, the experiments are not compared against latest architectures and best results. For example on PTB, there are new architectures such as those created by ENAS that result in much lower perplexity than best reported in Table 1, for the same parameter size. While you have mentioned ENAS in the related work, the lack of a comparison makes it hard to evaluate the true benefit if this work compared with existing literature. \\n\\nThere is no clear abolition study for the Meta-LSTM idea. Figure 4 provides some insights but it'd be good if some experiments were done to show clear wins over baseline methods that do not employ performance prediction.\\n\\nThere are many typos and missing reference in the paper that needs to be fixed.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Interesting but not enough\", \"review\": \"This paper explores evolutionary optimization for LSTM architecture search. To better explore the search space, authors used tree-based encoding and Genetic Programing (GP) with homologous crossover, tree distance metric, etc. The search process is pretty simple and fast. However, there is a lack of experiments and analysis to show the effectiveness of the search algorithm and of the architecture founded by the approach.\", \"remarks\": [\"The contents provided in this paper is not enough to be convinced that this is a better approach for RNN architecture search and for sequence modeling tasks.\", \"This paper requires more comparisons and analysis.\", \"Experiments on Penn Tree Bank\", \"The dataset on both experiments are pretty small to know the effect of the new architecture they found. 
More experiments on larger datasets, e.g., WikiText-2, will be needed.\", \"In the paper \\\"On the state of the art of evaluation in neural language models\\\", Melis et al., 2018 reported improvements using the classic LSTM over other variations of the LSTM. They intensively compared the performance of the classic LSTM, NAS, and RHN (Recurrent Highway Network), as the authors did. Melis et al. reported that an LSTM (with depth 1) can already achieve a test perplexity of 59.6 with 10M parameters and 59.5 with 24M parameters.\", \"Could you analyze what is new in the found LSTM architecture compared to the classic LSTM and NAS? Figures 5 and 6 do not make very clear how the final architectures differ and how the important/useful nodes change for different tasks.\", \"Recently, a number of architecture search algorithms have been introduced, but there is only one comparison in this direction (Zoph&Le16). It is important to compare this approach with other architecture search methods.\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Few contributions to architecture search, limited comparison to relevant work\", \"review\": \"The authors apply (tree-based) genetic programming (GP) to RNN search, or more specifically RNNs with memory cells, with the foremost example of this being the LSTM. GP provides a structured search that seems appropriate for designing NN modules, and has previously been applied successfully to evolving CNNs. However, the authors fail to mention that (tree-based) GP has been applied to evolving RNN topologies as far back as 2 decades ago, with even multiple cells in a single RNN unit [1]. The selection of more advanced techniques is good though - use of Modi for allowing multiple outputs, and neat-GP for more effective search (though a reference to the \\\"hall of fame\\\" [2] is lacking).\\n\\nThe authors claim that their method finds more complex, better performing structures than NAS, but allow their method to find architectures with more depth (max 15 vs. the max 10 of NAS), so this is an unfair comparison. It may be the case that GP scales better than the RL-based NAS method, but this is an unfair comparison as the max depth of NAS is not in principle limited to 10.\\n\\nThe second contribution of allowing heterogeneity in the layers of the network is rather minimal, but OK. Certainly, GP probably would have an advantage when searching at this level, as compared to other methods (like NAS). Performance prediction in architecture search has been done before, as noted by the authors (but see also [3]), so the particular form of training an LSTM on partial validation curves is also a minor contribution. Thirdly, concepts of archives have been in use for a long time [2], and the comparison to novelty search, which optimises for a hand-engineered novelty criterion, reaches beyond what is necessary. There are methods based on archives, such as MAP-Elites [4], which would make for a fairer comparison. However, I realise that novelty search is better known in the wider ML community, so from that perspective it is reasonable to keep this comparison in as well.\\n\\nFinally, it is not surprising that GP applied to searching for an architecture for one task does not transfer well to another task - this is not specific to GP but to ML methods in general, or more specifically to any priors used and the training/testing scheme. 
That said, prior work has explicitly discussed problems with generalisation in GP [5].\\n\\n[1] Esparcia-Alcazar, A. I., & Sharman, K. (1997). Evolving recurrent neural network architectures by genetic programming. Genetic Programming, 89-94.\\n[2] Rosin, C. D., & Belew, R. K. (1995, July). Methods for Competitive Co-Evolution: Finding Opponents Worth Beating. In ICGA (pp. 373-381).\\n[3] Zhou, Y., & Diamos, G. (2018). Neural Architect: A Multi-objective Neural Architecture Search with Performance Prediction. In SysML.\\n[4] Mouret, J. B., & Clune, J. (2015). Illuminating search spaces by mapping elites. arXiv preprint arXiv:1504.04909.\\n[5] Kushchu, I. (2002). An evaluation of evolutionary generalisation in genetic programming. Artificial Intelligence Review, 18(1), 3-14.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
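For readers unfamiliar with the tree-based encoding these reviews refer to, the toy sketch below shows the flavor of the representation and of a subtree-replacement mutation. The operator set, leaf set, and probabilities are illustrative assumptions only; the paper's actual search additionally uses Modi outputs, homologous crossover, and neat-GP speciation.

```python
import random

OPS = {"add": 2, "mul": 2, "tanh": 1, "sigmoid": 1, "relu": 1}  # op -> arity
LEAVES = ["x_t", "h_prev", "c_prev"]  # inputs to a gated memory node

def random_tree(depth):
    # Grow a random expression tree encoding a candidate node computation.
    if depth == 0 or random.random() < 0.3:
        return random.choice(LEAVES)
    op = random.choice(list(OPS))
    return [op] + [random_tree(depth - 1) for _ in range(OPS[op])]

def mutate(tree, depth=3):
    # Replace a randomly chosen subtree with a freshly grown one.
    if not isinstance(tree, list) or random.random() < 0.2:
        return random_tree(depth)
    tree = tree[:]  # copy so the parent genome is untouched
    i = random.randrange(1, len(tree))
    tree[i] = mutate(tree[i], max(depth - 1, 0))
    return tree

random.seed(0)
parent = random_tree(4)
print(parent)
print(mutate(parent))
```

Fitness evaluation (training each decoded RNN, or querying a performance predictor such as the Meta-LSTM discussed above) is the expensive part that the tree representation itself does not address.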
r1gNni0qtm
Generalized Tensor Models for Recurrent Neural Networks
[ "Valentin Khrulkov", "Oleksii Hrinchuk", "Ivan Oseledets" ]
Recurrent Neural Networks (RNNs) are very successful at solving challenging problems with sequential data. However, this observed efficiency is not yet entirely explained by theory. It is known that a certain class of multiplicative RNNs enjoys the property of depth efficiency --- a shallow network of exponentially large width is necessary to realize the same score function as computed by such an RNN. Such networks, however, are not very often applied to real life tasks. In this work, we attempt to reduce the gap between theory and practice by extending the theoretical analysis to RNNs which employ various nonlinearities, such as Rectified Linear Unit (ReLU), and show that they also benefit from properties of universality and depth efficiency. Our theoretical results are verified by a series of extensive computational experiments.
[ "expressive power", "recurrent neural networks", "Tensor-Train decomposition" ]
https://openreview.net/pdf?id=r1gNni0qtm
https://openreview.net/forum?id=r1gNni0qtm
ICLR.cc/2019/Conference
2019
{ "note_id": [ "H1x-ppuGg4", "Byxr4biHJV", "r1xImMNH1V", "rJeRns-xy4", "r1ldXFHyJ4", "rklZ6VBkyE", "BJeyD19227", "Sylvejvq27", "HklQ01sssm" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544879544704, 1544036652857, 1544008222378, 1543670709780, 1543620896145, 1543619768608, 1541345110635, 1541204718556, 1540235211333 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper703/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper703/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper703/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper703/Authors" ], [ "ICLR.cc/2019/Conference/Paper703/Authors" ], [ "ICLR.cc/2019/Conference/Paper703/Authors" ], [ "ICLR.cc/2019/Conference/Paper703/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper703/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper703/AnonReviewer1" ] ], "structured_content_str": [ "{\"metareview\": [\"AR1 finds that extension of the previously presented ICLR'18 paper are interesting and sufficient due to the provided analysis of universality and depth efficiency. AR2 is concerned with the lack of any concrete toy example between the proposed architecture and RNNs. Kindly make an effort to add such a basic step-by-step illustration for a simple chosen architecture e.g. in the supplementary material. AR3 is the most critical (the analysis TT-RNN based on the product non-linearity done before, particular case of rectifier non-linearity is used, etc.)\", \"Despite the authors cannot guarantee the existence of corresponding weight tensor W in less trivial cases, the overall analysis is very interesting and it is the starting point for further modeling. Thus, AC advocates acceptance of this paper. The review scores do not indicate this can be an oral paper, e.g. it currently is unlikely to be in top few percent of accepted papers. Nonetheless, this is a valuable and solid work.\", \"Moreover, for the camera-ready paper, kindly refresh your list of citations as a mere 1 page of citations feels rather too conservative. This makes the background of the paper and related work obscure to average reader unfamiliar with this topic, tensors, tensor outer products etc. There are numerous works on tensor decompositions that can be acknowledged:\", \"Multilinear Analysis of Image Ensembles: TensorFaces by Vasilescou et al.\", \"Multilinear Projection for Face Recognition via Canonical Decomposition by Vasilescou et al.\", \"Tensor decompositions for learning latent variable models by Anandkumar et al.\", \"Fast and guaranteed tensor decomposition via sketching by Anandkumar et al.\", \"One good example of the use of the outer product (sums over rank one outer products of higher-order) is paper from 2013. They perform higher-order pooling on encoded feature vectors (although this seems to be the shallow setting) similar to Eq. 2 and 3 (this submission):\", \"Higher-order occurrence pooling on mid-and low-level features: Visual concept detection by Koniusz et al. (e.g. equations equations 49 and 50 or 1, 16 and 17 realize Eq. 
3 and 13 in this submission)\", \"Higher-Order Occurrence Pooling for Bags-of-Words: Visual Concept Detection (similar follow-up work)\"], \"other_related_papers_include\": [\"Long-term Forecasting using Tensor-Train RNNs by Anandkumar et al.\", \"Tensor Regression Networks with various Low-Rank Tensor Approximations by Cao et al.\", \"Of course, the authors are encouraged to cite even more related works.\"], \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Solid work + references should be extended.\"}", "{\"title\": \"Response\", \"comment\": \"I thank the Authors for providing their responses to my questions.\"}", "{\"title\": \"response\", \"comment\": \"Thank you for your response. I believe the paper could be improved and would be more interesting if some analysis of general nonlinearities were provided.\"}", "{\"title\": \"Data organization & object representation, mapping between RNNs and generalized tensor models\", \"comment\": \"Thank you for pointing us to the relevant papers on tensor decompositions for imaging; we have already updated the related work section to include them. Also, please see the answers to the raised questions.\\n\\n1. In our analysis, the data is initially represented as a matrix (a collection of vectors). This representation is later translated into a rank-1 canonical tensor \\\\Phi. In its simplest (multiplicative) form, the model can be written as <W, \\\\Phi>, where W is the weight tensor. As was shown in [1], the TT decomposition of W leads to the recurrent architecture. In our analysis, we move further from this representation directly to RNNs with ReLU nonlinearity, and show that this model corresponds to the ReLU-TT decomposition of the \\u201cgrid tensor\\u201d, containing the values of the model sampled on a certain tensor product grid.\\n\\n2. In order to map a vanilla RNN to our model we need to redefine two things:\\n1) Trainable weights. In the case of a vanilla RNN without any nonlinearity and with hidden state equation h_t = A*h_{t-1} + B*x_t + c, the weights are the two matrices A and B and the vector c. In the case of our model, the weights are 3-dimensional TT-cores G_t (equation 11) or a single TT-core G (equation 12) if we set all of them to be equal.\\n2) Equation for the hidden state. Equations 10, 12, and 16 provide us with concrete formulas for calculating hidden states for the different models. We just need to plug them in instead of the corresponding formula for the vanilla RNN.\\nTo implement our models in practice we took the original code of the TensorFlow RNN Cell, redefined the trainable weights, and changed the expression for the hidden state computation.\\n\\n[1] V. Khrulkov, A. Novikov, I. Oseledets. Expressive Power of Recurrent Neural Networks. In International Conference on Learning Representations, 2018.\"}", "{\"title\": \"Tensor models based on Tensor Ring, clarification on \\\"logits\\\" term, implementation details\", \"comment\": \"Thank you for such a detailed and instructive analysis of our work! We hope that the following comments will help to resolve your questions.\\n\\n1. The approximation function is l(X), and it approximates functions F: \\\\mathbb{R}^{N \\\\times T} \\\\rightarrow \\\\mathbb{R}. An example of such a function F is the probability that a given sequence X corresponds to a particular class.\\n\\n2. The connection with the Tensor Ring format is an appealing direction for future research. We have not investigated this connection yet, but here are some thoughts on your comments. 
In particular, incorporating the circular dimensional permutation invariance property into neural networks may provide insights into how to design new types of RNNs. It would also be interesting to look at the distribution of lower bounds on the rank of TR-based tensor models (in analogy to Figure 3), as evenly distributed rank bounds may give better results than the less balanced TT.\\n\\n3. It is indeed the case \\u2014 we will overfit if we increase the number of parameters. However, the purpose of Figure 2 is to show that in the case of ReLU nonlinearity, TT-based generalized tensor models are more expressive than CP-based models. We believe that Figure 2 is a good illustration of this phenomenon even within the current range of parameter counts.\\n\\n4. By \\u201clogits\\u201d we mean the immediate outputs of the last hidden layer before applying a nonlinearity. This term is adopted from classification tasks, where a neural network usually outputs \\u201clogits\\u201d and the following softmax nonlinearity transforms them into valid class probabilities. Note that the theory supports an arbitrary number of classes: we just need to approximate the logit for each class with our model and then apply a sigmoid nonlinearity to get a valid classifier.\\n\\n5. TT-based RNN implementation details\\n(i) We experimented with non-overlapping patches of size 7x7 for MNIST and of size 8x8 for CIFAR-10.\\n(ii) From the analogy between vanilla RNNs and our model, M is closely related to the dimensionality of the hidden state in an RNN, which is typically set to tens or hundreds. In our experiments we used M=32 for visual datasets and M=50 for IMDB (as it exhibits longer temporal dependencies).\\n(iii) We used the same weight tensor W for all classes, and in order to get 10 probabilities we added a fully-connected layer right after the output of the tensor network and used a softmax activation function to obtain valid probabilities.\\n(iv) We used ReLU in the preprocessing feature map f_\\\\theta(x).\\n(v) Both the matrix A and the vector b are treated as an additional fully-connected layer and are learned together with the TT-cores via backpropagation.\"}", "{\"title\": \"General nonlinearities discussion, loss of conformity between tensor networks and weight tensors, template vectors\", \"comment\": \"Thank you for your comments! Please see the answers to your questions below.\\n\\n1. The detailed proof we provide in the paper indeed refers to the ReLU nonlinearity only. Due to the constructive nature of our proof, it is not easily generalized to an arbitrary associative and commutative binary operator, and we highly doubt that it will work in general. However, even without solid theoretical justification, we can implement generalized tensor networks with various nonlinearities and compare them empirically, which we do in Section B of the appendix. As we can see, the right choice of nonlinearity for a particular dataset may lead to a boost in performance, and it will be an interesting direction of research to analyze these choices more rigorously from both theoretical and practical viewpoints.\\n\\n2. Before introducing the concept of generalized tensor networks, we had a full correspondence between the score functions of scalar product form (equation 2) and the score functions of tensor decomposition form (equations 6 and 8). It ensures that tensor networks are universal function approximators and allows us to focus on other important properties, such as expressivity. 
However, after replacing the outer product with a different operator and declaring the expressions from equations 14 and 16 to be the score functions, we can no longer state that they can be represented in the form of equation 2. Specifically, we can no longer guarantee the existence of a corresponding weight tensor W (and, thus, universality), the existence of which was trivial in the case of standard tensor networks with multiplicative nonlinearity.\\n\\n3. The use of template vectors is motivated by the discussion in [1]. In order to be able to achieve zero classification error for the model under analysis, the data has to satisfy two assumptions: the label has to be completely determined by the instance, and the input vectors may be quantized into one of the M templates. The assumption that natural images possess these properties is based on various empirical studies. For example, it was shown in [2] that small image patches of sizes 2x2, 4x4, 8x8, 16x16, and so on, can be effectively modeled by a GMM of size 64. We believe that similar properties also hold for sequential data appearing in NLP tasks; however, this assumption requires further investigation.\\n\\n[1] N. Cohen, O. Sharir, A. Shashua. On the Expressive Power of Deep Learning: A Tensor Analysis. In Conference on Learning Theory, pp. 698 - 728, 2016.\\n[2] D. Zoran, Y. Weiss. Natural Images, Gaussian Mixtures and Dead Leaves. In Advances in Neural Information Processing Systems, pp. 1745 - 1753, 2012.\"}", "{\"title\": \"The paper analyzes the connection between RNNs and TT decomposition by incorporating nonlinearity. The theoretical results are very interesting while novelty is limited.\", \"review\": \"This paper extends the work on TT-RNN [Khrulkov et al., 2018] to further analyze the connection between RNNs and TT decomposition by incorporating generalized nonlinearity, i.e., ReLU, into the network architectures. Specifically, the authors theoretically study the influence of generalized nonlinearity on the expressive power of TTD-based RNNs; both theoretical results and empirical validation show that the generalized TTD-based RNN is superior to the CP-based shallow network in terms of depth efficiency.\\n\\nPros:\\n1. This work is theoretically solid and extends the analysis of TT-RNN to the case of generalized nonlinearity, i.e. ReLU.\\n\\n2. The paper is well written and organized.\\n\\nCons:\\n1. The contribution and novelty of this paper are incremental and somewhat limited, since the analysis of TT-RNN based on the product nonlinearity already exists, which diminishes the contribution of this paper.\\n\\n2. The analysis is mainly on the particular case of the rectifier nonlinearity. I wonder if nonlinearities other than the ReLU hold similar properties? A proof or discussion of general nonlinearities is missing.\\n\\nOther comments:\\n1. The authors said that the replacement of the standard outer product with its generalized version leads to the loss of conformity between tensor networks and weight tensors; the authors should clarify this in a bit more detail.\\n\\n2. The theoretical analysis relies on the grid tensor and restricts the inputs to template vectors. It is not explained why to use and how to choose those template vectors in practice.\\n\\n3. 
A small typo: In Figure 2, \\u2018m\\u2019 should be \\u2018M\\u2019\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Good paper that would benefit from a meaningful and concrete toy example.\", \"review\": \"This paper would benefit from a meaningful and concrete toy example.\\n\\nThe toy example from section 3.1 eq(3) amounts to stating that the Kronecker product of a set of eigenparts is equal to the PCA eigenpixels for a set of complete images, or more generally that the Kronecker product of a set of image-part feature tensors is equal to the complete image-feature tensor (assuming no pixel overlap). Sure. What does that buy? Hierarchical Tucker (including Tensor Train) does indeed compute the standard Tucker mode representation of a tensor in an efficient manner using a set of sequential SVDs rather than using a single SVD per mode. Is there anything else? Depending on how the data is organized into a data tensor, the object representation and its properties can differ dramatically. Section 3.1 needs further clarification.\", \"questions\": \"1. What is the data tensor organization and what tensor decomposition model are you using? Tucker but implemented as a TT? \\nWhat is the resulting object representation?\\n2. In the context of the toy example, please give a concrete mapping between your tensor decomposition (object representation) and an RNN.\\n\\nThe rest of the paper is a lot of mathematical manipulation which looks correct.\\n\\n\\nPlease reference the first papers to employ tensor decompositions for imaging.\\n\\nM. A. O. Vasilescu, D. Terzopoulos, \\\"Multilinear Analysis of Image Ensembles: TensorFaces,\\\" Proc. 7th European Conference on Computer Vision (ECCV'02), Copenhagen, Denmark, May, 2002, in Computer Vision -- ECCV 2002, Lecture Notes in Computer Science, Vol. 2350, A. Heyden et al. (Eds.), Springer-Verlag, Berlin, 2002, 447-460. \\n\\nM.A.O. Vasilescu, \\\"Multilinear Projection for Face Recognition via Canonical Decomposition\\\", In Proc. Face and Gesture Conf. (FG'11), 476-483.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Good incremental theoretical paper with supporting experimental results. The presentation could be improved (see comments).\", \"review\": \"The authors extend the theoretical results of a paper previously presented in the last edition of ICLR (2018), where it was demonstrated that Recurrent Neural Networks can be interpreted as a tensor network decomposition based on the Tensor-Train (TT, Oseledets et al, 2011).\\nWhile previous results covered the multiplicative nonlinearity only, the contribution of the current paper is the extension of the analysis of universality and depth efficiency (Cohen et al, 2016) to different nonlinearities, for example ReLU (Rectified Linear Unit), which is very important from the practical point of view.\\nThe paper is well written and has a good structure. However, I found that some deep concepts are not well introduced, and maybe other more trivial results are discussed with unnecessary details. The following comments could help the authors to improve the quality of presentation of their paper:\\n-\\tSection 3.1 (Score Functions and Feature Tensor) is a bit short and difficult to read. 
\\no\\tMaybe, a more motivating introduction could be included in order to justify the definition of score functions (eq. 2). \\no\\tIt would be also nice to state that, according to eq. (3), the feature tensor is a rank-1 tensor. \\no\\tI would suggest moving the definition of outer product to the Appendix, since most readers know it very well.\\no\\tIt is said that eq. 2 possesses the universal approximation property (it can approximate any function with any prescribed precision given sufficiently large M). It is not clear which is the approximation function.\\n-\\tA Connection with Tensor-Ring (TR) format, if possible, could be helpful: It is known that TR format (Zhao et al, 2016, arXiv:1606.05535), which is obtained by connecting the first and last units in a TT model, helps to alleviate the requirement of large ranks in the first and last the core tensors of a TT model reaching to a decomposition with an evenly distributed rank bounds. I think, it would be interesting to make a connection of RNN to TR because the assumption of R_i < R for all i becomes more natural. I would like to see at least some comment from the authors about the applicability of TR in the context of analysis of RNN, if possible. Maybe, also the initial hidden state defined in page 5 can be avoided if TR is used instead of TT.\\n-\\tFig 2 shows that Test accuracy of a shallow network (CP based) is lower and increases with the number of parameters approaching to the one for RNN (TT based). It would be necessary to show the results for an extended range in the number of parameters, for example, by plotting the results up to 10^6. It is expected that, at some point, the effect of overfitting start decreasing the test accuracy.\\n- When scores functions are presented (eq. 2) it is written the term \\\"logits\\\" between brackets. Could you please clarify why this term is introduced here? Usually, logit of a probability p is defined as L(p)=p/(1-p). What is the usage of this term in this work? \\n- I think the theory is presented for a model with the two-classes only but used for multiple classes in the experimental sections. It should be necessary to make some comment about this in the paper.\\n- Details about how the RNN based on TT is applied must be added. More specifically, the authors should provide answers to clarify the following questions: \\n(i) Are patches overlapped or non-overlapped? \\n(ii) What value of M is used? and is there any general rule for this choice? \\n(iii) How the classification in the 10-classes is obtained? Are you using a softmax function in the last layer? Are you using one weight tensor W_c per class (c=1,2,...,10). Please provide these technical details. \\n(iv) Please, specify which nonlinear activation sigma is used in the feature map f_\\\\theta(x).\\n(v) How many feature maps are used? and, Are the matrix A and vector b learned from training dataset or only the TT-cores need to be learned?\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
r1fE3sAcYQ
Overcoming Multi-model Forgetting
[ "Yassine Benyahia*", "Kaicheng Yu*", "Kamil Bennani-Smires", "Martin Jaggi", "Anthony Davison", "Mathieu Salzmann", "Claudiu Musat" ]
We identify a phenomenon, which we refer to as *multi-model forgetting*, that occurs when sequentially training multiple deep networks with partially-shared parameters; the performance of previously-trained models degrades as one optimizes a subsequent one, due to the overwriting of shared parameters. To overcome this, we introduce a statistically-justified weight plasticity loss that regularizes the learning of a model's shared parameters according to their importance for the previous models, and demonstrate its effectiveness when training two models sequentially and for neural architecture search. Adding weight plasticity in neural architecture search preserves the best models to the end of the search and yields improved results in both natural language processing and computer vision tasks.
[ "multi-model forgetting", "deep learning", "machine learning", "multi-model training", "neural architecture search" ]
https://openreview.net/pdf?id=r1fE3sAcYQ
https://openreview.net/forum?id=r1fE3sAcYQ
ICLR.cc/2019/Conference
2019
{ "note_id": [ "BklA2K-elE", "rJe_jDnHC7", "HJe5Bv3BAm", "rkeiZwhr0X", "ByeOqI3r07", "rkg-raP03Q", "H1e4uaHj2X", "rygpqz4uhQ", "r1eaAIv6F7", "H1xJ3lgTF7" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review", "official_comment", "comment" ], "note_created": [ 1544718774329, 1542993824311, 1542993729909, 1542993667253, 1542993552002, 1541467449261, 1541262700262, 1541059221415, 1538254548800, 1538224295495 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper702/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper702/Authors" ], [ "ICLR.cc/2019/Conference/Paper702/Authors" ], [ "ICLR.cc/2019/Conference/Paper702/Authors" ], [ "ICLR.cc/2019/Conference/Paper702/Authors" ], [ "ICLR.cc/2019/Conference/Paper702/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper702/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper702/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper702/Authors" ], [ "~Vihari_Piratla1" ] ], "structured_content_str": [ "{\"metareview\": \"\", \"pros\": [\"nicely written paper\", \"clear and precise with a derivation of the loss function\"], \"cons\": \"novelty/impact:\\nI think all the reviewers acknowledge that you are doing something different in the neural brainwashing (NB) problem than is done in the typical catastropic forgetting (CF) setting. You have one dataset and a set of models with shared weights; the CF setting has one model and trains on different datasets/tasks. But whereas solving the CF problem would solve a major problem of continual machine learning, the value of solving the NB problem is harder to assess from this paper... The main application seems to be improving neural architecture search. At the meta-level, the techniques used to derive the main loss are already well known and the result similar to EWC, so they don't add a lot from the analysis perspective. I think it would be very helpful to revise the paper to show a range of applications that could benefit from solving the NB problem and that the technique you propose applies more broadly.\", \"confidence\": \"3: The area chair is somewhat confident\", \"recommendation\": \"Reject\", \"title\": \"good work but more is needed to have impact\"}", "{\"title\": \"Revised paper uploaded\", \"comment\": [\"We thank the reviewers for their valuable comments and for taking the time to review our paper. We have uploaded a revised version that addresses the reviewers\\u2019 main concerns. In particular\", \"We have renamed brainwashing \\u201cmulti-model forgetting\\u201d to account for the fact that the literature has become more liberal in using the term \\u201cforgetting\\u201d;\", \"We have clarified the novelty of our approach. Specifically, one could not simply heuristically modify the EWC loss to fit our multi-model setting. Our derivation shows that an additional term encoding the interactions between two models arises from our scenario;\", \"We have added a discussion of (Xu & Zhu, NIPS 2018);\", \"We have incorporated novel experiments based on the NAO strategy of Luo et al., NIPS 2018. 
These experiments demonstrate the benefits of our WPL loss in another neural architecture search approach, and show that WPL improves over the path dropout strategy of Bender et al., ICML 2018.\", \"Altogether, we believe that these modifications significantly strengthen our paper and further highlight the generality of our approach.\"]}", "{\"title\": \"Response to Reviewer #1\", \"comment\": \"Thank you for your time and positive feedback! We believe that addressing your comments has led to a stronger version of our paper.\\n\\n>Relation to [1] Bender et al., ICML 2018\\n\\nThank you for pointing us to this interesting work. [1] indeed highlights the problems arising from training a one-shot model corresponding to multiple architectures with shared parameters, and circumvents them by randomly dropping paths during training. However, this differs significantly from our work, where we derive a mathematical solution to address this problem. In fact, both solutions can be used jointly, and this is what is done in our experiments. Indeed, in all our architecture search experiments, ENAS relies on path dropout with a probability of 0.5, both when incorporating WPL and when not. Therefore, our experiments show that our approach can further improve the results of the strategy used in [1]. We have revised our paper so as to explain our use of path dropout and give proper credit to [1].\\n\\n\\n> Relation to [2] Luo et al., NIPS 2018 (NAO)\\n\\nWe became aware of NAO shortly after the ICLR deadline. In essence, the contribution of this work is to replace the reinforcement learning portion of ENAS with a gradient-based auto-encoder. This can still suffer from multi-model forgetting, and is thus orthogonal to our work. To demonstrate this, we incorporated WPL in the NAO framework and re-ran our RNN experiments with this new search method. The details of these experiments are provided in the appendix of our revised paper. These experiments illustrate the effectiveness of our approach with respect to both [1] and [2]. In short, we observed that:\\n- the use of WPL reduces multi-model forgetting in NAO, as in ENAS, for various dropout rates;\\n- while increasing the dropout rate indeed limits the multi-model forgetting effect, the resulting model consistently benefits from using WPL.\\nWe believe that these experiments confirm that our paper addresses an important issue, occurring in many neural architecture search strategies that use shared model representations. This further strengthens our contribution.\\n\\n\\n> Relatively low performance of ENAS on CIFAR-10\\n\\nTo implement our approach, we used a publicly available PyTorch implementation of ENAS for the RNN case that we further developed. For CIFAR-10, we extended this implementation to the CNN case. The choice to reimplement ENAS was motivated by the simplicity and flexibility of PyTorch. \\n\\nTo evaluate the final cells obtained by ENAS-WPL and ENAS, we trained them in a fair setting, without any hyperparameter tuning. The mismatch in scores is solely due to a difference in hyperparameter tuning, both for search and training from scratch, since the final training in ENAS is highly optimized. However, we believe this not to be a real issue, since our point is truly to demonstrate the benefits of accounting for multi-model forgetting, which our experiments do.\"}", "{\"title\": \"Response to Reviewer #3\", \"comment\": \"We thank the reviewer for their useful comments and for taking the time to review our paper. 
Below, we address their main concerns and have revised our paper accordingly.\\n\\n>Branding: Brainwashing vs forgetting\\n\\nIn the literature, \\u201cforgetting\\u201d traditionally refers to the scenario where one aims to train a single model on two different datasets. By contrast, we aim to train multiple models on a single dataset, which motivated our use of the term \\u201cbrainwashing\\u201d. We agree with the reviewer, however, that the term \\u201cforgetting\\u201d has recently started being used in a looser sense. Therefore, we have revised our paper to refer to our approach as \\u201cmulti-model forgetting\\u201d.\\n\\n\\n>Novelty over EWC\\n\\nThere is clear technical novelty in our paper, which stems from the fact that, while EWC aims to maximize the posterior probability p(\\\\theta | D1, D2), we maximize p(\\\\theta_1, \\\\theta_2, \\\\theta_s | D), where \\\\theta_1 and \\\\theta_2 denote the parameters specific to each model and \\\\theta_s those shared by both models. Heuristically modifying the EWC loss to fit our two-model scenario would be mathematically unjustified, and we therefore had to derive the equations for our formalism so as to reach the WPL loss.\\n\\nOur derivation led to a new term in Equation (3), v^T \\\\Omega v, which encodes the interaction between the two models. This term will never appear in EWC, nor in any single-model forgetting formulation. The fact that WPL looks similar to the EWC loss is then only due to our use of a Laplace approximation of this term with the diagonal Fisher information matrix as covariance. However, other approximations, such as a Laplace one with a full covariance matrix, will lead to loss functions that differ fundamentally from the EWC one. We have clarified this in the revised paper and believe our mathematical formulation of the parameter sharing scenario and its general solution in Equation (3) to be solid technical contributions.\\n\\n\\n>Relation to Xu & Zhu, NIPS 2018.\\n\\nThis paper addresses a fundamentally different problem from the one we tackle. In essence, given Model A trained on Dataset A, Xu & Zhu, NIPS 2018, use an NAS-like strategy to train Model B on a different Dataset B. While Model B shares some parameters with Model A, absolutely no forgetting occurs, because the parameters of Model A are fixed. As such, this work does not address forgetting, but rather aims to compensate for the sub-optimality of Model A\\u2019s parameters for Dataset B via NAS. While interesting, this idea is orthogonal to ours. In fact, this method could benefit from relying on WPL when searching for the best Model B. We thank the reviewer for pointing us to this work, which we now discuss in our revised paper.\"}", "{\"title\": \"Response to Reviewer #2\", \"comment\": \"We thank the reviewer for taking the time to review our paper. Below, we address the main concern.\\n\\n> Incremental advances\\n\\nOur work is not incremental. While weight sharing is popular, multi-model forgetting has neither been explicitly acknowledged nor carefully studied. There is clear technical novelty in our paper, which stems from the fact that, while EWC aims to maximize the posterior probability p(\\\\theta | D1, D2), we maximize p(\\\\theta_1, \\\\theta_2, \\\\theta_s | D), where \\\\theta_1 and \\\\theta_2 denote the parameters specific to each model and \\\\theta_s those shared by both models. 
Heuristically modifying the EWC loss to fit our two-model scenario would be mathematically unjustified, and we therefore had to derive the equations for our formalism so as to reach the WPL loss.\\n\\nOur derivation led to a new term in Equation (3), v^T \\\\Omega v, which encodes the interaction between the two models. This term will never appear in EWC, nor in any single-model forgetting formulation. The fact that WPL looks similar to the EWC loss is then only due to our use of a Laplace approximation of this term with the diagonal Fisher information matrix as covariance. However, other approximations, such as a Laplace one with a full covariance matrix, will lead to loss functions that differ fundamentally from the EWC one. We have clarified this in the revised paper and believe our mathematical formulation of the parameter sharing scenario and its general solution in Equation (3) to be solid technical contributions.\\n\\nFurthermore, we believe that our new experiments using WPL in the NAO framework of Luo et al., NIPS 2018, further confirm that our paper addresses an important issue that occurs in many neural architecture search strategies that use shared model representations.\"}", "{\"title\": \"The technique in this paper feels more or less identical to the ideas from Kirkpatrick et al (catastrophic forgetting). The difference seems to be one of application (different tasks vs same task), and as such feels like an incremental advance.\", \"review\": \"There is certainly additional novelty in that this paper focuses on models performing same/identical tasks (compared to the results from the catastrophic forgetting paper), and because this model more clearly delineates the parameters that are shared across the models, vs those that are not. But both of those advances feel incremental.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}", "{\"title\": \"Not sufficiently novel\", \"review\": [\"This \\\"neural brainwashing\\\" is catastrophic forgetting. Technically speaking, this is catastrophic forgetting.\", \"Also, some works targeting NAS (which I reckon should as well be cited due to being quite related) have targeted similar forgetting issues, e.g. Xu and Zhu, NIPS 2018 \\\"Reinforced continual learning\\\". It is nice to enrich the literature with new terms, when there is a need to. In my opinion, in this particular case, neural brainwashing is catastrophic forgetting.\", \"Forgetting is not necessarily an \\\"individual problem\\\", sticking to the language used in the third paragraph of the first page. The same applies to \\\"single-model forgetting\\\".\", \"page 1 \\\"Our work is the first of which we are aware to identify neural brainwashing and to propose a solution.\\\": According to the authors' argument, this is the case. Again, mine is different.\", \"Novelty w.r.t. works tackling catastrophic forgetting, most notably EWC, is minimal. Also, comparing to other state-of-the-art algorithms targeting catastrophic forgetting can further enrich the experiments.\", \"3.1.1. On a technical level, there is no inherent difference between EWC and the proposed algorithm.\", \"Writing can improve, both in terms of the flow and the language. There are also a few typos, e.g. 
in the first line of the caption of Figure 2.\", \"Apart from the aforementioned issue (comparing to other state-of-the-art catastrophic forgetting algorithms), the experiments are rigorously prepared.\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"a good work on one shot model training\", \"review\": \"This paper discusses the phenomenon of \\u201cneural brainwashing\\u201d, which refers to the performance of one model being affected by training another model that shares its parameters. To solve the issue, the authors derived a new loss from maximizing the posterior of the parameters. With the new loss, neural brainwashing is largely diminished.\\nThe derived new loss looks meaningful to me and I think this is a valuable work for handling the weight co-adaptation between two neural models, which will no doubt be of great interest to the neural architecture search community.\", \"here_are_some_comments_on_the_aspects_that_this_paper_can_be_improved\": \"1)\\tA very important related work [1] is missed in this paper. [1] discussed the properties of the \\u201cone-shot model\\u201d, which means that several different architectures are unified into the same model by sharing model weights. Furthermore, [1] discussed \\u201cneural brainwashing\\u201d (although not with the same name) and how to handle it in a very simple way (by randomly dropping paths). This definitely should be a baseline to compare with. In addition, a very recent work [2] also leverages model sharing to conduct neural architecture search.\\n\\n2)\\tAlthough I understand that improving the accuracy of NAS is not the main goal of this paper, the baseline number to be improved over is too weak. For example, 4.87 on CIFAR10 in ENAS. Per my own hands-on experience, it does not take much hyperparameter tuning of ENAS to obtain a < 4% error rate. Please provide more convincing baseline numbers and supporting evidence of the better performance of WPL in NAS.\\n\\n\\n[1] Bender, Gabriel, et al. \\\"Understanding and simplifying one-shot architecture search.\\\" International Conference on Machine Learning. 2018.\\n[2] Luo, Renqian, et al. \\\"Neural architecture optimization.\\\" NIPS (2018).\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"brainwashing is different from catastrophic forgetting\", \"comment\": \"We thank the reader for commenting on the paper so quickly. We believe that the problem statement for brainwashing is very different from that of forgetting. Throughout the paper (e.g., paragraph 2 of the Introduction, Section 3.1.1), we explain this difference, i.e., catastrophic forgetting happens when training a single model for multiple tasks, while brainwashing occurs when training multiple models on a single task.\\n\\nIndeed, in \\u00ab overcoming catastrophic forgetting \\u00bb, the authors maximize the posterior probability p(\\\\theta | D1, D2) while in \\u00ab overcoming neural brainwashing \\u00bb, we maximize p(\\\\theta_1, \\\\theta_2, \\\\theta_s | D), with \\\\theta_s referring to the shared parameters between two different models. Mathematically speaking, the two problems are fundamentally different. 
Catastrophic forgetting does not deal with parameter sharing across different models and only considers a single model with parameters \\\\theta. In our paper, we focus on tackling the problem where multiple models are sharing part of their architectures. We formulate our final loss through a completely different mathematical derivation, which coincidentally ends in a similar formalism.\"}", "{\"comment\": \"The difference between what is referred to as \\\"Neural Brainwashing\\\" and \\\"Catastrophic Forgetting\\\" is not clear from the explanation. Even in *Catastrophic Forgetting*, the performance of the network on earlier tasks degrades. Also not clear is how the proposed model is different from the regularization methods (akin to \\\"weight plasticity\\\" proposed in yours) suggested in the context of Catastrophic Forgetting.\", \"title\": \"Brainwashing vs Catastrophic Forgetting\"}" ] }
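The regularizer debated in this thread is, at its core, a quadratic penalty on the shared parameters weighted by a diagonal Fisher estimate. A minimal PyTorch sketch of that core idea follows; it deliberately omits the cross-model interaction term (v^T \Omega v) that the authors say distinguishes WPL from EWC, and names such as `shared_names` and `lam` are illustrative assumptions rather than the paper's notation.

```python
import torch

def diagonal_fisher(model, data_loader, loss_fn):
    """Diagonal Fisher estimate: average of squared per-batch gradients."""
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    for x, y in data_loader:
        model.zero_grad()
        loss_fn(model(x), y).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2
    return {n: f / len(data_loader) for n, f in fisher.items()}

def plasticity_penalty(model, ref_params, fisher, shared_names, lam=1.0):
    """Anchor *shared* parameters to their values under the previously
    trained model, weighted by their estimated Fisher importance."""
    penalty = 0.0
    for n, p in model.named_parameters():
        if n in shared_names:
            penalty = penalty + (fisher[n] * (p - ref_params[n]) ** 2).sum()
    return lam / 2.0 * penalty
```

In a weight-sharing search loop, `ref_params` would hold a snapshot of the previously trained model's parameters, and the penalty would simply be added to the current model's task loss before each optimizer step.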
BJlVhsA5KX
Sequenced-Replacement Sampling for Deep Learning
[ "Chiu Man Ho", "Dae Hoon Park", "Wei Yang", "Yi Chang" ]
We propose sequenced-replacement sampling (SRS) for training deep neural networks. The basic idea is to assign a fixed sequence index to each sample in the dataset. Once a mini-batch is randomly drawn in each training iteration, we refill the original dataset by successively adding samples according to their sequence index. Thus we carry out replacement sampling but in a batched and sequenced way. In a sense, SRS could be viewed as a way of performing "mini-batch augmentation". It is particularly useful for a task where we have a relatively small number of images per class, such as CIFAR-100. Together with a longer period with a large initial learning rate, it significantly improves the classification accuracy in CIFAR-100 over the current state-of-the-art results. Our experiments indicate that training deeper networks with SRS is less prone to over-fitting. In the best case, we achieve an error rate as low as 10.10%.
[ "deep neural networks", "stochastic gradient descent", "sequenced-replacement sampling" ]
https://openreview.net/pdf?id=BJlVhsA5KX
https://openreview.net/forum?id=BJlVhsA5KX
ICLR.cc/2019/Conference
2019
{ "note_id": [ "rylYjfNlg4", "H1egDgd2hm", "S1lYj1Hcn7", "r1lhSo5w3m" ], "note_type": [ "meta_review", "official_review", "official_review", "official_review" ], "note_created": [ 1544729249333, 1541337176245, 1541193632582, 1541020484477 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper701/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper701/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper701/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper701/AnonReviewer2" ] ], "structured_content_str": [ "{\"metareview\": \"This paper proposes a new batching strategy for training deep nets. The idea is to have the properties of sampling with replacement while reducing the chance of not touching an example in a given epoch. Experimental results show that this can improve performance on one of the tasks considered. However the reviewers consistently agree that the experimental validation of this work is much too limited. Furthermore the motivation for the approach should be more clearly established.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Meta-review\"}", "{\"title\": \"More work is needed\", \"review\": \"The paper suggests a new way of sampling mini-batches for training deep neural nets. The idea is to first index the samples then select the batches during training in a sequential way. The proposed method is tested on the CIFAR dataset and some improvement on the classification accuracy is reported.\\n\\nI find the idea interesting but feel that much more is needed in order to have a better understanding of how the proposed method works, when it works and when it doesn't work. Some theoretical insight, or at least a more systematic experimental study, is needed to justify the proposed method.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"The approach requires more validation\", \"review\": \"This paper constructs a very simple, new scheme for sampling mini-batches. It aims to (i) achieve the noise properties of sampling with replacement while (ii) reduce the probability of not touching a sample in a given epoch. The result is a biased sampling scheme called \\u201csequenced-replacement sampling (SRS)\\u201d. Experimental results show that this scheme performs significantly better than a standard baseline on CIFAR-100 with minor improvements on CIFAR-10.\\n\\nThis is a highly empirical paper that presents a simple and sound method for mini-batch sampling with impressive results on CIFAR-100. It however needs more thorough analysis or experiments that validate the ideas as also experiments on harder, large-scale datasets.\", \"detailed_comments\": \"1. The authors are motivated by the exploration properties of sampling with replacement which I find quite vague. For instance, https://arxiv.org/abs/1710.11029 , https://arxiv.org/abs/1705.07562 etc. show that sampling mini-batches with replacement has a large variance than sampling without replacement and consequently SGD has better regularization properties. Also, for mini-batches sampled with replacement, the probability of not sampling a given sample across an epoch is very small.\\n\\n2. I believe the sampling scheme is unnecessarily complicated. Why not draw samples with replacement with a probability p and draw samples without replacement with a probability 1-p? 
Do the experimental results remain consistent with this more natural sampling scheme, which also aligns with the motivations of the authors?\\n\\n3. To validate the claim that SRS works well when there are fewer examples per class, can you do ablation experiments on CIFAR-10 or ImageNet or a restricted subset of it?\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Needs More Work\", \"review\": [\"In this paper, the authors introduce a sampling strategy that aims to combine the benefits of with- and without-replacement SGD. With-replacement strategies add more randomness to the process, which the authors claim helps convergence, while the without-replacement strategies ensure equal usage of all datapoints. The authors present numerical results showing better convergence and improved final accuracy. While I found the idea appealing, I felt that the paper needs more work before it can be published. I detail some of my primary concerns below:\", \"The entire motivation of the paper is predicated on the hypothesis that more randomness is better for training. This is not generally true. Past work has shown that specific kinds of random noise aid convergence through exploration/saddle point avoidance/escaping spurious minima while others either make no change, or hurt. Noise from sampling tends to be a structured noise that aids exploration/convergence over batch gradient descent, but it is not immediately clear to me why the choice between with- and without-replacement should imply exploration.\", \"Maybe it's obvious but I'm not grasping why, mathematically, the number of accessible configurations for SRS is the same as original replacement sampling (4th paragraph on page 3).\", \"Given that the central motivation of the work was to enable with-replacement strategies while still ensuring equal usage, I recommend that the authors include a histogram of datapoint usage for three strategies (with, without, hybrid). This should help convince the reader that SRS indeed improves upon the usage statistics of with replacement.\", \"If one were to create a hybrid sampling strategy, one that is natural is doing 50-50 sampling with and without replacement. In other words, for a batch size of 64, say, 32 are sampled with replacement and 32 without. By changing the ratio, you can also control what end of the sampling spectrum you want to be on. Did you try such a strategy?\", \"For the numerical experiments, as I see it, there are 3 differences between the SRS setup and the baseline: location of batch normalization, learning rate, and batch size. The authors show (at the bottom of Page 6) that the performance boost does not come from learning rate or mini-batch size, but what about the placement of the BN layer? Seems like that still remains as a confounding factor?\", \"\\\"SRS leads to much more fluctuations, and hence significantly more covariate shift\\\". How do the authors define covariate shift? Can the authors substantiate this claim theoretically/empirically?\", \"The authors claim that the method works better when the dataset size is low compared to the number of classes. Again, can the authors substantiate this claim theoretically/empirically? Maybe you can try running a sub-sampled version of CIFAR-10/100 with the baselines?\", \"The writing in the paper needs improving. 
A few sample phrases that need editing: \\\"smaller mini-batch means a larger approximation\\\", \\\"more accessible configurations of mini-batches\\\", \\\"hence more exploration-induction\\\", \\\"less optimal local minimum\\\"\", \"Minor comment: why is the queue filled with repeated samples? In Figure 1, why not have the system initialized with 1 2 3 in the pool and 4 5 in the queue? Seems like by repeating, there is an unnecessary bias towards those datapoints.\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
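A minimal sketch of one plausible reading of the pool-and-queue mechanism in the SRS abstract above: each iteration draws a mini-batch uniformly from a fixed-size pool, then refills the pool with the next indices in a fixed sequence order (with wraparound), so duplicates can accumulate. The initialization and the batched refill order are assumptions, not the paper's exact procedure.

```python
import random
from itertools import cycle

def srs_batches(num_samples, batch_size, num_iters, seed=0):
    """Yield mini-batches of sample indices under a sequenced-replacement
    scheme: draw uniformly without replacement from the pool, then refill
    the pool from a fixed cyclic sequence so its size stays constant."""
    rng = random.Random(seed)
    pool = list(range(num_samples))      # pool starts as the whole dataset
    refill = cycle(range(num_samples))   # fixed sequence of indices
    for _ in range(num_iters):
        batch = [pool.pop(rng.randrange(len(pool))) for _ in range(batch_size)]
        pool.extend(next(refill) for _ in range(batch_size))
        yield batch

# Example: 10 samples, batches of 3.
for b in srs_batches(num_samples=10, batch_size=3, num_iters=4):
    print(b)
```

The 50-50 hybrid the reviewer suggests could be sketched on top of this by filling half of each batch from such a generator and half by plain without-replacement sampling.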
rkl42iA5t7
NETWORK COMPRESSION USING CORRELATION ANALYSIS OF LAYER RESPONSES
[ "Xavier Suau", "Luca Zappella", "Nicholas Apostoloff" ]
Principal Filter Analysis (PFA) is an easy to implement, yet effective method for neural network compression. PFA exploits the intrinsic correlation between filter responses within network layers to recommend a smaller network footprint. We propose two compression algorithms: the first allows a user to specify the proportion of the original spectral energy that should be preserved in each layer after compression, while the second is a heuristic that leads to a parameter-free approach that automatically selects the compression used at each layer. Both algorithms are evaluated against several architectures and datasets, and we show considerable compression rates without compromising accuracy, e.g., for VGG-16 on CIFAR-10, CIFAR-100 and ImageNet, PFA achieves a compression rate of 8x, 3x, and 1.4x with an accuracy gain of 0.4%, 1.4% points, and 2.4% respectively. In our tests we also demonstrate that networks compressed with PFA achieve an accuracy that is very close to the empirical upper bound for a given compression ratio. Finally, we show how PFA is an effective tool for simultaneous compression and domain adaptation.
[ "Artificial Intelligence", "Deep learning", "Machine learning", "Compression" ]
https://openreview.net/pdf?id=rkl42iA5t7
https://openreview.net/forum?id=rkl42iA5t7
ICLR.cc/2019/Conference
2019
{ "note_id": [ "H1gNVJS1HV", "B1etpz_ex4", "H1xvnmK414", "H1xCdNmQJE", "SylHM_0xJV", "BkgKD7doaX", "rJxU4AU76X", "HJxNw2I7Tm", "Skeeao8ma7", "HJlRitO53Q", "BJlCzy7q37", "ryl3i9nH37" ], "note_type": [ "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1549909804280, 1544745665241, 1543963567220, 1543873653672, 1543723021028, 1542320993129, 1541791278450, 1541790811532, 1541790648514, 1541208486491, 1541185302295, 1540897443822 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper700/Authors" ], [ "ICLR.cc/2019/Conference/Paper700/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper700/Authors" ], [ "ICLR.cc/2019/Conference/Paper700/Authors" ], [ "ICLR.cc/2019/Conference/Paper700/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper700/Authors" ], [ "ICLR.cc/2019/Conference/Paper700/Authors" ], [ "ICLR.cc/2019/Conference/Paper700/Authors" ], [ "ICLR.cc/2019/Conference/Paper700/Authors" ], [ "ICLR.cc/2019/Conference/Paper700/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper700/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper700/AnonReviewer3" ] ], "structured_content_str": [ "{\"title\": \"Closing comments\", \"comment\": \"Thanks to the reviewers and the AC for their feedback. This process of review has strengthened and clarified our paper significantly. Closing comments about the final review topics can be found below:\\n\\n1. \\\"Results on large-scale tasks such as Imagenet\\\" was addressed in the rebuttal, as acknowledge by the AC.\\n\\n2. \\\"Compression after the fact may not be as good as training with a modified loss function that does compression jointly\\u201d. Without discussing here the merits and the drawbacks of changing the loss, in our comparisons, we showed that we obtain better results than other state of the art techniques, two of which do modify the loss function. There may be in the future techniques which effectively use modification of the loss, however, to the best of our knowledge, this statement is currently not supported by evidence.\\n\\n3. \\\"Insufficient comparisons on ResNet architectures which make comparisons against previous works harder\\u201d. The simplicity and improvements of our proposed technique are evident in the experiments we have presented. Experiments show that our technique achieves better results on a variety of architectures (a simple CNN, VGG-16 and ResNet) and datasets (CIFAR10, CIFAR100 and ImageNet). ResNet on ImageNet (we used VGG-16 instead) is the only missing combination. As we expand our presented results, we look forward to covering all experiments which reviewers feel are critical. As it is always possible to describe a subjectively useful or missing experiment, we hope the community will continue to revise evaluations criteria to emphasize progress rather than coverage.\"}", "{\"metareview\": \"The authors propose a technique for compressing neural networks by examining the correlations between filter responses, by removing filters which are highly correlated. 
This differentiates the authors\\u2019 work from many other works which compress the weights independent of the task/domain.\", \"strengths\": \"Clearly written paper\\nPFA-KL does not require additional hyperparameter tuning (apart from those implicit in choosing \\\\psi)\\nExperiments demonstrate that the number of filters determined by the algorithm scale with complexity of the task\", \"weaknesses\": \"Results on large-scale tasks such as Imagenet (subsequently added by the authors during the rebuttal period)\\nCompression after the fact may not be as good as training with a modified loss function that does compression jointly\\nInsufficient comparisons on ResNet architectures which make comparisons against previous works harder\\n\\nOverall, the reviewers were in agreement that this work (particularly, the revised version) was close to the acceptance threshold. In the ACs view, the authors addressed many of the concerns raised by the reviewers in the revisions. However, after much deliberation, the AC decided that the weaknesses 2, and 3 above were significant, and that these should be addressed in a subsequent submission.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Interesting approach to compression based on analyzing filter activations.\"}", "{\"title\": \"ResNet-34 vs ResNet-56\", \"comment\": \"Dear reviewer,\\n\\nAfter exploring the experimental settings of the different state of the art techniques used in our comparison, we see ResNet-34 is not typically used with CIFAR, so unfortunately adding ResNet-34 experiments will not provide additional comparisons.\\n\\nAlmost all papers present results on CIFAR solely with VGG-16. Only us and Li et al. additionally report results with ResNet (and we both use ResNet-56). \\n\\nPapers that use ResNet-34 do so mainly for ImageNet and often include VGG-16 (which we have for comparison). \\n\\nIn order to augment Table 1 (as you suggest), we will include results for ResNet-34 on ImageNet.\\n\\nThank you once more.\"}", "{\"title\": \"Further clarifications\", \"comment\": \"Thank you for your further feedback.\\n\\n> Can you provide the full reference to Jordao et al., 2018?\", \"this_is_the_full_title_and_link\": \"\\u201cPruning Deep Neural Networks using Partial Least Squares\\u201d (https://arxiv.org/pdf/1810.07610v1.pdf). We will add this reference to our paper too.\\n\\n> Peng et al., 2018 report results in terms of FLOP compression. For completeness of the comparison, it would be interesting to also have these numbers for the detailed results you provided above\\n\\nThe information about FLOPs is already available in Table 1. In summary: Peng's algorithm (FGA) achieves a better FLOPs compression than our proposal (PFA), while PFA achieves a better footprint reduction and accuracy improvement than FGA. \\n\\nSpecifically, the FLOPs of the compressed model with respect to the original architecture are 43% for FGA (i.e. 57% reduction) and 52.7% for PFA (i.e., 47.3% reduction). On the other hand, the footprint of the compressed model with respect to the original architecture is 74.4% for FGA (i.e. 25.6% reduction) and 69.3% for PFA (i.e., 30.7% reduction). Finally the Top-1 and Top-5 accuracy improvement with respect to the original model are 0.82% and 0.94% respectively for FGA, and 2.39% and 1.41% respectively for PFA. \\n\\n> While I agree, several papers report results using a ResNet-34. 
Your choice of a ResNet-56, for which there are almost no other results, seems strange.\\n\\nThank you for pointing out this aspect. We have provided results on ResNet-56 because we followed the experimental settings of Li et al. for CIFAR-10. In retrospect, we agree that ResNet-34 would have been a more popular choice. We do not expect the conclusions about PFA to change with ResNet-34. We believe we have provided enough quantitative and qualitative information for a reader to form her or his own judgment about how PFA compares with the state of the art. We are, however, happy to do those experiments and report the new results in the paper if this helps to remove any doubts.\\n\\nThank you again for your review, we appreciate the time you invested in it.\"}", "{\"title\": \"A few more comments\", \"comment\": [\"Thank you for the clarification and for the new results. I nonetheless have a few more comments/questions:\", \"Can you provide the full reference to Jordao et al., 2018? It does not seem to be cited in the paper.\", \"Peng et al., 2018 report results in terms of FLOP compression. For completeness of the comparison, it would be interesting to also have these numbers for the detailed results you provided above.\", \"Regarding Table 1, you mentioned that there is no agreed benchmark. While I agree, several papers report results using a ResNet-34. Your choice of a ResNet-56, for which there are almost no other results, seems strange.\"]}", "{\"title\": \"New version of the manuscript available\", \"comment\": [\"We have uploaded a new version of our paper to address reviewers' comments. Here are the highlights of the changes:\", \"We have added the references mentioned by the reviewers.\", \"We have clarified a few sentences to avoid misunderstandings.\", \"We have included a section regarding the complexity analysis of PFA, as well as the actual time required by PFA to run on each layer of VGG-16 using the ImageNet dataset.\", \"We have included compression results on the ImageNet dataset.\", \"We have corrected the numbers in Table 1 regarding the FGA algorithm.\", \"Thanks to the changes recommended by the reviewers, the claims in the paper are strengthened. Independently of the architecture or dataset, PFA consistently provides better compression or accuracy than the state of the art.\", \"We would like to thank the reviewers once more for their valuable feedback. We hope they will find the changes satisfactory; otherwise, we await further feedback.\"]}", "{\"title\": \"Answers to comments on weaknesses\", \"comment\": \"Thank you for your comments. We appreciate your review.\\n\\n> ... spatial max pooling... I do not understand the intuition behind this.\\n\\nPooling is a relaxation to ease the next step in the process. Jordao et al. 2018 compares different forms of pooling for compression: global max pooling (as in PFA), average pooling, and a spatially-preserving max pooling. They observe that global max pooling performs the best.\\n\\nThe intuition is that if two filters are correlated they might be redundant for the end task, even if they learn different features. For example, in order to decide if an image contains a face there is no need to detect nose, mouth, eyes, etc... one (or more) of these features might be sufficient. That said, we agree that exploring alternatives to max pooling is a potential direction for future research.\\n\\n> ... other methods have also proposed to take the activation into account for pruning, ... but they aim to minimize the reconstruction error.... 
In fact, this is also what PFA-En does;\\n\\nPFA-En uses the spectral energy of the filters' responses only to decide how many filters should be preserved. Our filter selection does not account for the reconstruction error.\\n\\n> While it is good that the KL-divergence-based method does not rely on any hyper-parameter, the function \\\\psi used in Eq. 3 seems quite ad hoc. As such, there has also been some manual tuning of the method.\\n\\n\\\\psi could be tuned to a given task, though we do not. The proposed \\\\psi is the function that empirically worked the best in an initial evaluation, and has not been tuned in any of our experiments.\\n\\n> In Table 1, there seems to be a confusion regarding how the results of FGA are reported. ...\\n\\nThank you for spotting this mistake.\\n\\n> Peng et al., 2018 report much better compression results, with %FLOP compression going up to 88.58%. Why are these results not reported here?\\n\\nTable 1 is meant to provide a numerical comparison for similar compression rates. Fig. 2b provides all compression rates for FGA (Peng et al.). However, we made the same mistake in reporting the numbers for FGA. Here is the correct comparison:\\n Footprint; Accuracy change\\nFGA 11.31%; -1.93%\\nPFA 9.64%; -2.37%\\n\\nFGA 13.20%; -0.57%\\nPFA 14.37%; -0.27%\\n\\nFGA 18.54%; -0.02%\\nPFA 19.27%; +0.50%\\n\\nFGA 39.67%; +0.39%\\nPFA 43.11%; +1.70%\\n\\n> Many of the entries in Table 1 are empty ... This makes an actual comparison more difficult.\\n\\nWe agree. Sadly there is no agreed benchmark. We hope that the amount of comparison provided is enough for the reader to form an opinion. Such an opinion should also be influenced by other factors (beyond numbers): ease of implementation, practicality, ease of parameter tuning, etc...\\n\\n> Many compression methods report results on ImageNet. This would make this paper more convincing.\\n\\nWe have just completed the experiments on ImageNet. Here is a summary:\\n Footprint; Top1 change; Top5 change\\nPFA-KL from scratch 69.30%\\t -1.89% -0.97%\\nPFA-KL with filter selection 69.30%\\t +2.39% +1.41%\\n\\n> While I appreciate the domain adaptation experiments, it would be nice to see a comparison with Masana et al., 2017.\\n\\nOur results on domain adaptation are meant to support our claim that with PFA different complexities in the tasks lead to different compressions.\\n\\n> It is not entirely clear to me why tensor factorization methods are considered to be so different from the proposed approach.\\n\\nIn the paper, by \\u201ctensor factorization\\u201d we refer to those algorithms that split a weight tensor into a sequence of smaller tensors. By \\u201cstructured pruning\\u201d we refer to those algorithms (like PFA) that remove full filters from the current layer, without attempting to replace or approximate them. This terminology follows other discussions, such as Peng et al. 2018, Liu et al. 2017, and Li et al. 2017.\\n\\n> The authors argue that performing compression after having trained the model is beneficial. This is in contrast with what was shown by Alvarez&Salzmann, NIPS 2017.\\n\\nAlvarez&Salzmann (A&S) claim that it is beneficial to modify the original loss to induce some properties in the full model in order to ease compression. This is not in contrast with our findings. The difference between A&S and PFA is that PFA does not need to modify the loss. 
From here on, the workflow is the same: we both compress after training and fine-tune the compressed model.\\n\\n> I do appreciate the idea of aiming for a hyper-parameter-free compression method. However, I feel that there are too many points to be corrected or clarified and too many missing experiments for this paper to be accepted to ICLR.\\n\\nThank you for your time and consideration; we hope you will find our answers and new experiments satisfactory. We believe our work is stronger after addressing your concerns. Please let us know if you have further questions and consider updating your rating.\"}", "{\"title\": \"Preview of the results on ImageNet\", \"comment\": \"Thank you for your positive comments and for your feedback and time. Please find a preview of the results on ImageNet below.\\n\\n> They should probably have run ImageNet experiments because many earlier papers on this topic use it as a benchmark and because the ImageNet size often reveals different behaviors. \\n\\nWe agree with the reviewer. We have just completed the experiments on ImageNet. Here is a summary of the results.\\n\\n Footprint; Top1 change; Top5 change\\nPFA-KL from scratch 69.30%\\t -1.89% -0.97%\\nPFA-KL with filter selection 69.30%\\t +2.39% +1.41%\\n\\nWe will report these results. In addition to the experiments on ImageNet, are there other enhancements that would elevate the paper?\"}", "{\"title\": \"Evidence of our claim, and preview of the results on ImageNet (including complexity analysis and actual times)\", \"comment\": \"We appreciate your review and would like to thank you for your feedback and time. Please find our answers below.\\n\\n> Therefore, in theory (nothing shown), different tasks would provide different outputs while similar tasks would compress in the same manner.\\n\\nThe experiments presented in Section 4.5 and Figure 3 provide evidence that with PFA different complexities in the tasks lead to different architectures; similar tasks lead to similar architectures. We will clarify the connection between the claim and Section 4.5. In those experiments, we show how the architecture produced by PFA (starting from the same trained model) differs depending on the complexity of the task. For example, VGG-16 trained on CIFAR-100 and then compressed using 10 labels (R1 line in Figure 3.b) has 150 filters in the last layer (and overall it has 52% of the original filters), whereas a simpler task with 2 labels (S4 line in Figure 3.b) has only 45 filters in the last layer (and overall it has 36% of the original filters). The compression obtained in the two cases is clearly different and reflects the complexity of the tasks with respect to what the trained model has already learnt. On the other hand, the two tasks with 10 labels (R1 and R2 lines) lead to similar compression.\\n\\n> I still miss results in larger systems including ImageNet. How does this approach actually scale with the complexity of the network and the task?\\n\\nWe have just completed the experiments on ImageNet. Here is a summary. In terms of complexity, we have a dedicated section in Appendix C. However, we will consider moving the main conclusions to the main paper. In summary, the complexity of the PFA algorithm per layer is O(mn^2 + n^3), where n is the number of filters, and m is the number of samples. The task does not affect the complexity because the labels are not used in PFA. 
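A rough illustration of where the O(mn^2 + n^3) terms above come from, assuming the max-pooled responses of one layer are collected as an m-by-n matrix (a sketch, not the authors' code):

```python
import numpy as np

def pfa_layer_spectrum(responses):
    """responses: (m, n) array of max-pooled activations for one layer
    (m samples, n filters). Forming the covariance costs O(m n^2) and the
    symmetric eigendecomposition costs O(n^3); labels are never used."""
    cov = np.cov(responses, rowvar=False)   # (n, n), O(m n^2)
    return np.linalg.eigvalsh(cov)          # ascending eigenvalues, O(n^3)
```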
For the experiment on ImageNet (m=1.2M, ILSVRC2012), PFA took the longest on the last two layers (n = 4096), which were computed in roughly 120 seconds. Here is the full table of times when computing PFA sequentially (non-parallel CPU implementation):\\n\\nblock 0 (64 filters)\\nconv0: 9.18s\\nconv1: 9.22s\\nconv2: 9.22s\\n\\nfully connected block (4096 filters)\\nfc1: 142.17s\\nfc2: 112.74s\\n\\nAs mentioned in Appendix C, PFA has to run once at the end of the training step and, as shown above, the time consumed by PFA is negligible compared to the whole training time. In exchange for this marginal extra time, PFA provides the long-term benefit of a smaller footprint and faster inference, which in the lifetime of a deployed network will quickly surpass the time initially required by PFA. In addition, when working in a setting where the network is periodically re-trained with new incoming data, having a smaller network will add to the saved time.\\n\\nIn terms of performance, these are the results of PFA-KL on ImageNet:\\n Footprint; Top1 change; Top5 change\\nPFA-KL from scratch 69.30%\\t -1.89% -0.97%\\nPFA-KL with filter selection 69.30%\\t +2.39% +1.41%\\n\\n> There have been recent approaches incorporating low-rank approximations that would be interesting to couple with this approach. I am surprised these are not even cited ('Compression aware training' at NIPS 2017 or 'Coordinating filters' at ICCV 2017, both with a similar approach based on weights, though). Pairing with those approaches seems a strong way to improve your results.\\n\\nThank you for the references. We will add them to our state-of-the-art review. Both techniques are interesting and could further improve the compression rate of PFA. The approach of modifying the original loss in order to induce a specific property in the full model is smart but goes against the philosophy that we have adopted for PFA. We envision PFA to be easy to use: a no-parameter strategy (PFA-KL), no need to modify the original loss function (which would require additional hyper-parameter tuning), and the ability to start from pre-trained models or to use known training hyper-parameters. Nevertheless, we are eager to consider how these other techniques could be paired with PFA in future work and see what level of improvement can be achieved.\\n\\nWe hope we have answered all your concerns. Please let us know if you think there are more opportunities for improvement.\"}", "{\"title\": \"Interesting approach, second time reviewing\", \"review\": \"This paper proposes a compression method based on spectral analysis. The basic idea is to analyse the correlation between responses of different layers and select those that are more relevant, discarding the others. That, in principle (as mentioned in the paper), differs from other compression methods based on compressing the weights independently of the data being used. Therefore, in theory (nothing shown), different tasks would provide different outputs while similar tasks would compress in the same manner.\\n\\nThen, the paper proposes a greedy algorithm to select those filters to be kept rather than transforming the layer (as has usually been done in the past [Jaderberg et al.]). This is interesting (from a practical point of view) as it would lead to direct benefits at inference time. \\n\\nThis is the second time I review this paper. I appreciate the improvements from the first submission adding some interesting results. 
\\n\\nI still miss results in larger systems including ImageNet. How does this approach actually scale with the complexity of the network and the task?\\n\\nThere have been recent approaches incorporating low-rank approximations that would be interesting to couple with this approach. I am surprised these are not even cited ('Compression aware training' at NIPS 2017 or 'Coordinating filters' at ICCV 2017, both with a similar approach based on weights, though). Pairing with those approaches seems a strong way to improve your results.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"A decent pruning strategy.\", \"review\": \"The paper proposes to prune convolutional networks by analyzing the observed correlation between the filters of the same layer as expressed by the eigenvalue spectrum of their covariance matrix. The authors propose two strategies to decide on a compression level, one based on an eigenvalue threshold, the other based on a heuristic that uses the KL divergence between the observed eigenvalue distribution and the uniform one. This is a bit bizarre but does not require searching a parameter. Once one has decided the number of filters to keep, one can either retrain the network from scratch, or iteratively remove the most correlated filters, which, unsurprisingly, works better.\\n\\nThe authors perform credible experiments on CIFAR-10 and CIFAR-100 that show the results one would expect. They should probably have run ImageNet experiments because many earlier papers on this topic use it as a benchmark and because the ImageNet size often reveals different behaviors.\\n\\nIn conclusion, this is a very decent paper, but not a very exciting one.\\n\\n-------------\\n\\nAfter reading the authors' response and their additional experiments, I still see this work as a very decent paper, but not a very exciting one. This is why I rank this paper somewhat above the acceptance threshold. I could be proven wrong if this approach becomes the method of choice to prune networks, but I would need to see a lot more comparisons to be convinced.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Interesting hyper-parameter free compression, but missing experiments and clarification/corrections needed\", \"review\": \"This paper introduces an approach to compressing a trained neural network by looking at the correlation of the filter responses in each layer. Two strategies are proposed: one based on trying to preserve the energy of the original activations and one based on looking at the KL divergence between the normalized eigenvalues of the activation covariance matrix and the uniform distribution.\", \"strengths\": [\"The KL-divergence-based method is novel and has the advantage of not requiring the definition of any hyper-parameter.\", \"The results show the good behavior of the approach.\"], \"weaknesses\": \"\", \"method\": [\"One thing that bothers me is the spatial max pooling of the activations of convolutional layers. This means that if two filters have high responses on different regions of the input image, they will be treated as correlated. 
I do not understand the intuition behind this.\", \"In Section 2, the authors mention that other methods have also proposed to take the activation into account for pruning, but that they aim to minimize the reconstruction error of these activations. In fact, this is also what PFA-En does; for a given dimension, PCA gives the representation that minimizes the reconstruction error. Therefore, the connection between this method and previous works is stronger than claimed by the authors.\", \"While it is good that the KL-divergence-based method does not rely on any hyper-parameter, the function \\\\psi used in Eq. 3 seems quite ad hoc. As such, there has also been some manual tuning of the method.\"], \"experiments\": [\"In Table 1, there seems to be a confusion regarding how the results of FGA are reported. First, in (Peng et al., 2018), the %FLOPS is reported the other way around, i.e., the higher the better, whereas here the lower the better. Similarly, in (Peng et al., 2018), a negative \\\\Delta in accuracy means an improved performance (as stated in the caption of their Table 2, where the numbers reported here were taken). As such, the numbers reported here, and directly taken from this work, are misinterpreted.\", \"Furthermore, Peng et al., 2018 report much better compression results, with %FLOP compression going up to 88.58%. Why are these results not reported here? (To avoid any misunderstanding, I would like to mention that I am NOT an author of (Peng et al., 2018)).\", \"Many of the entries in Table 1 are empty due to the baselines not reporting results on these datasets or with the same network. This makes an actual comparison more difficult.\", \"Many compression methods report results on ImageNet. This would make this paper more convincing.\", \"While I appreciate the domain adaptation experiments, it would be nice to see a comparison with Masana et al., 2017, which also considers the problem of domain adaptation with network compression and, as mentioned in Section 2, also makes use of the activations to achieve compression.\"], \"related_work\": [\"It is not entirely clear to me why tensor factorization methods are considered to be so different from the proposed approach. In essence, they also perform structured network pruning.\", \"The authors argue that performing compression after having trained the model is beneficial. This is in contrast with what was shown by Alvarez & Salzmann, NIPS 2017, where incorporating a low-rank prior during training led to higher compression rates.\", \"The authors list (Dai et al., 2018) as one of the methods that aim to minimize the reconstruction error of the activations. Dai et al., 2018 rely on the mutual information between the activations in different layers to perform compression. It is not entirely clear to me how this relates to reconstruction error.\"], \"summary\": \"I do appreciate the idea of aiming for a hyper-parameter-free compression method. However, I feel that there are too many points to be corrected or clarified and too many missing experiments for this paper to be accepted to ICLR.\", \"after_response\": \"I appreciate the authors' response, which clarified several of my concerns. I would rate this paper as borderline. My main concern now is that the current comparison with existing methods still seems too incomplete, especially with ResNet architectures, to really draw conclusions. 
I would therefore encourage the authors to revise their paper and re-submit it to an upcoming venue.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
B1e7hs05Km
Efficient Exploration through Bayesian Deep Q-Networks
[ "Kamyar Azizzadenesheli", "Animashree Anandkumar" ]
We propose Bayesian Deep Q-Networks (BDQN), a principled and a practical Deep Reinforcement Learning (DRL) algorithm for Markov decision processes (MDP). It combines Thompson sampling with deep-Q networks (DQN). Thompson sampling ensures more efficient exploration-exploitation tradeoff in high dimensions. It is typically carried out through posterior sampling over the model parameters, which makes it computationally expensive. To overcome this limitation, we directly incorporate uncertainty over the value (Q) function. Further, we only introduce randomness in the last layer (i.e. the output layer) of the DQN and use independent Gaussian priors on the weights. This allows us to efficiently carry out Thompson sampling through Gaussian sampling and Bayesian Linear Regression (BLR), which has fast closed-form updates. The rest of the layers of the Q network are trained through back propagation, as in a standard DQN. We apply our method to a wide range of Atari games in Arcade Learning Environments and compare BDQN to a powerful baseline: the double deep Q-network (DDQN). Since BDQN carries out more efficient exploration, it is able to reach higher rewards substantially faster: in less than 5M±1M samples for almost half of the games to reach DDQN scores while a typical run of DDQN is 50-200M. We also establish theoretical guarantees for the special case when the feature representation is fixed and not learnt. We show that the Bayesian regret is bounded by O􏰒(d \sqrt(N)) after N time steps for a d-dimensional feature map, and this bound is shown to be tight up-to logarithmic factors. To the best of our knowledge, this is the first Bayesian theoretical guarantee for Markov Decision Processes (MDP) beyond the tabula rasa setting.
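The closed-form machinery the abstract refers to is standard Bayesian linear regression on the last-layer features. Below is a minimal sketch of the per-action posterior update and Thompson draw, assuming a Gaussian prior N(0, sigma_p^2 I), Gaussian noise with scale sigma_e, and illustrative variable names; this is not the authors' code.

```python
import numpy as np

def blr_posterior(Phi, y, sigma_e=1.0, sigma_p=1.0):
    """Closed-form Gaussian posterior over last-layer weights for one action.
    Phi: (n, d) features of visited states; y: (n,) regression targets."""
    d = Phi.shape[1]
    precision = Phi.T @ Phi / sigma_e**2 + np.eye(d) / sigma_p**2
    cov = np.linalg.inv(precision)
    mean = cov @ Phi.T @ y / sigma_e**2
    return mean, cov

def thompson_action(phi_x, posteriors, rng):
    """One posterior draw per action, then act greedily on the sampled Q values."""
    q_samples = []
    for mean, cov in posteriors:
        w = rng.multivariate_normal(mean, cov)  # Thompson sample of the weights
        q_samples.append(w @ phi_x)
    return int(np.argmax(q_samples))

rng = np.random.default_rng(0)
Phi, y = rng.normal(size=(200, 16)), rng.normal(size=200)
posteriors = [blr_posterior(Phi, y) for _ in range(4)]  # 4 actions, toy data
print(thompson_action(rng.normal(size=16), posteriors, rng))
```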
[ "Deep RL", "Exploration Exploitation", "DQN", "Bayesian Regret", "Thompson Sampling" ]
https://openreview.net/pdf?id=B1e7hs05Km
https://openreview.net/forum?id=B1e7hs05Km
ICLR.cc/2019/Conference
2019
{ "note_id": [ "B1g98Je8xE", "HyxANvhHeE", "HJe03g6WxE", "BJe0xdMggE", "r1xz_N_kgN", "rkx5QOb_kE", "SklccwDh0X", "B1eFHHDnAQ", "HJgYZgX30X", "BJeXXaM30Q", "H1lXah_90X", "HJezVrgmCX", "H1e2hKSRpm", "HkxtAs4Ka7", "SkgdhF4KT7", "SkehU_4tp7", "B1xfqv4Kp7", "HJl3HrEYpX", "HJgKVVEtTX", "Hkgl5de63Q", "H1gi8TCPhX", "rygc6qQ82m", "HkehNOmf3X", "ByecO2R6j7", "rye7xYTnim", "Bye-m-3UiX" ], "note_type": [ "official_comment", "comment", "official_comment", "meta_review", "official_comment", "comment", "official_comment", "official_comment", "comment", "comment", "official_comment", "comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review", "official_comment", "official_review", "comment", "official_comment" ], "note_created": [ 1545105234396, 1545090869971, 1544831158149, 1544722422099, 1544680554455, 1544194082109, 1543432082167, 1543431489407, 1543413761461, 1543413019279, 1543306426528, 1542812969676, 1542506932151, 1542175697292, 1542175152357, 1542174803565, 1542174602134, 1542174020298, 1542173745170, 1541372040455, 1541037395447, 1540926146477, 1540663347827, 1540381809568, 1540311274605, 1539911960627 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper699/Authors" ], [ "~Akshay_Krishnamurthy1" ], [ "ICLR.cc/2019/Conference/Paper699/Authors" ], [ "ICLR.cc/2019/Conference/Paper699/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper699/Authors" ], [ "~Akshay_Krishnamurthy1" ], [ "ICLR.cc/2019/Conference/Paper699/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper699/Authors" ], [ "~Ian_Osband1" ], [ "~Ian_Osband1" ], [ "ICLR.cc/2019/Conference/Paper699/Authors" ], [ "~Ian_Osband1" ], [ "ICLR.cc/2019/Conference/Paper699/Authors" ], [ "ICLR.cc/2019/Conference/Paper699/Authors" ], [ "ICLR.cc/2019/Conference/Paper699/Authors" ], [ "ICLR.cc/2019/Conference/Paper699/Authors" ], [ "ICLR.cc/2019/Conference/Paper699/Authors" ], [ "ICLR.cc/2019/Conference/Paper699/Authors" ], [ "ICLR.cc/2019/Conference/Paper699/Authors" ], [ "ICLR.cc/2019/Conference/Paper699/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper699/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper699/AnonReviewer4" ], [ "ICLR.cc/2019/Conference/Paper699/Authors" ], [ "ICLR.cc/2019/Conference/Paper699/AnonReviewer1" ], [ "~Ian_Osband1" ], [ "ICLR.cc/2019/Conference/Paper699/Authors" ] ], "structured_content_str": [ "{\"title\": \"the assumption\", \"comment\": \"Dear Akshay\\n\\nI totally agree with you that the d^{{H+1}/2} dependent regret upper bound does not evidence the sampling efficiency, and I also agree it is not reasonable to just assume the mentioned assumption is satisfied. \\n\\nI am currently working on the cases that this assumption is rigorously satisfied (e.g. following optimism under what condition this is satisfied). So far, the current stage of the proof, O(d\\\\sqrt(T)) with the independence assumption, is trivial and moreover the upper bound O(d^{{H+1}/2} \\\\sqrt(T)) is not tight, therefore the current state of the theoretical contribution is not complete yet. I would like to thank you again for your thoughtful and helpful comment on this paper.\\n\\nCheers\"}", "{\"comment\": \"Hi authors,\\n\\nThanks for looking into this. The d^{(H+1)/2}\\\\sqrt{T} regret bound is less surprising to me, but this is not a very good guarantee. In particular, I think an exp(d) regret guarantee does not provide evidence for sample efficiency. 
\n\nAbout the \"prior\" (I guess you mean assumption?). I do not think it is reasonable to make this assumption, as it depends on the policy \\pi_t, which of course is algorithm-dependent. Are there any special cases where this assumption is true, in some algorithm-independent sense?\n\nAkshay\", \"title\": \"Thanks, but this is much weaker\"}", "{\"title\": \"update\", \"comment\": \"Dear Akshay\n\nThanks again for bringing up this important point.\n\nUpdate: For the general case, with a modification in the current proof, I get a regret upper bound of d^{(H+1)/2}\\sqrt{T}.\n\nWith an additional prior of \\sum_{i <= t} ||\\phi(x_i, \\pi_t(x_i))||^2_{\\xi_t^{-1}} = O(1), I get d\\sqrt{T}.\n\nI'll update the text accordingly. \n\nCheers\"}", "{\"metareview\": \"There was a significant amount of discussion on this paper, both from the reviewers and from unsolicited feedback. This is a good sign as it demonstrates interest in the work. Improving exploration in Deep Q-learning through Thompson sampling using uncertainty from the model seems sensible and the empirical results on Atari seem quite impressive. However, the reviewers and others argued that there were technical flaws in the work, particularly in the proofs. Also, reviewers noted that clarity of the paper was a significant issue, even more so than in a previous submission.\n\nOne reviewer noted that the authors had significantly improved the paper throughout the discussion phase. However, ultimately all reviewers agreed that the paper was not quite ready for acceptance. It seems that the paper could still use some significant editing and careful exposition and justification of the technical content.\n\nNote, one of the reviews was disregarded due to incorrectness and a fourth reviewer was brought in.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"A neat idea with impressive results but has technical flaws and issues with clarity\"}", "{\"title\": \"bias\", \"comment\": \"Dear Akshay\n\nI really appreciate this comment. I could not find any explanation of how I missed this obvious point. As you can imagine, under this assumption, the problem becomes trivial. Let me relax this assumption (which results in biased estimators for all time steps except time step H) and see how the bound changes. Thanks again.\n\nCheers\"}", "{\"comment\": \"Hi authors,\n\nI was studying Appendix B, which contains the main theoretical content, and I have a question about Assumption 1. Above equation (5), there is a comment that says \\\"the noise process is correlated, dependent on the agent policy, and is not mean zero unless under policy \\\\pi^\\\\star.\\\" Then, Equation 7 is just a re-writing of Equation 5 (there is a typo here since it should be \\\\omega^\\\\star). But Assumption 1 now states that \\\\bar{\\\\nu}_t is mean zero, which implies \\\\nu_t is mean zero as well (by linearity of expectation).\n\nThese two statements are in contradiction, and Assumption 1 seems incorrect to me. One cannot model the random return for a suboptimal policy as an unbiased estimate of Q^\\\\star, which is what Equation 7 + Assumption 1 are saying. If one executes a suboptimal policy, the random return will systematically underestimate Q^\\\\star. This systematic underestimation/bias is the core difficulty in analyzing algorithms for linear Q-learning. 
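The bias being pointed at here can be written out in one line. With G_t denoting the random return collected from (x, a) under a behavior policy pi (this notation is a sketch, not the paper's Eq. 5/7):

```latex
\mathbb{E}\left[G_t \mid x_t = x,\ a_t = a,\ \pi\right] = Q^{\pi}(x,a) \le Q^{\star}(x,a),
\qquad \text{so writing } G_t = Q^{\star}(x,a) + \nu_t \text{ forces }
\mathbb{E}[\nu_t] = Q^{\pi}(x,a) - Q^{\star}(x,a) \le 0,
```

with equality only when pi acts optimally from (x, a) onward, so the noise cannot be mean zero under a suboptimal policy.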
So one cannot assume it away.\n\nCould you please clarify?\nAkshay\", \"title\": \"About Assumption 1\"}", "{\"title\": \"Agreed that this is not a special case of Dropout TS\", \"comment\": \"I also disagreed with the reviewer's assessment here. I don't agree with the reviewer's choice of relevant literature or demanded baselines. This is why a fourth reviewer was brought in, and this review will be weighted accordingly.\"}", "{\"title\": \"Posterior Sampling RL\", \"comment\": \"Dear Ian,\n\nWe would like to thank you for kindly leaving a comment on our AnonReviewer1's thread.\n\nRegarding the analysis of model-free PSRL: BDQN, up to some modification, reduces to the PSRL algorithm if the prior and posterior are Gaussian. As we mentioned in the paper, since maintaining the posterior distribution can be computationally intractable, in our empirical study we approximate the posterior with a Gaussian distribution. We apologize if this is not clear from the paper. We will emphasize this statement more.\n\nAs we mentioned before, in model-based PSRL, we generally specify the problem with a prior over MDP models, as well as a likelihood over state transitions and reward processes. As you also agreed, we are given these quantities before interacting with the environment. Consequently, in model-free PSRL, we specify the problem with a prior over Q and a likelihood over the return. Similarly, we are given these quantities before interacting with the environment to pursue PSRL.\n\nRegarding the discount factor \\gamma: the per-episode regret is bounded above by O(1/(1-\\gamma)), but not the overall regret. Following your statement, the regret is upper bounded as O(T/(1-\\gamma)), which is linear in T. We derived a sublinear regret of \\tilde{O}(\\sqrt{T} ||(1,\\gamma,\\gamma^2,...,\\gamma^{H-1})||_2).\n\n\nWe appreciate the thoughtful comments on our paper, and we also thank you for taking the time to leave a comment on our AnonReviewer1 review regarding the drop-out discussion. \n\nSincerely \nAuthors\"}", "{\"comment\": \"This issue of \\\"exact\\\" posterior inference is crucial.\nYes, in previous papers analysing PSRL they assume exact posterior inference.\nHowever, those papers do not claim that PSRL is a model-free algorithm when doing this.\n\nAt this point, the analysis of this \\\"model-free\\\" PSRL is quite divorced from your algorithm Bayesian DQN (which is the model-free RLSVI, but only applying randomization to the final layer of a DQN).\nI think it would be better to separate these contributions into two separate papers...\n\nAt the moment, there is an implication that the two algorithms (model-free PSRL and BDQN) are conflated... but this is confusing because actually they are not the same thing.\nIt's also not clear if there are any examples where you can do this \\\"model-free PSRL\\\" exactly, even with infinite compute, without an underlying model of the MDP.\nThe issue is that, in order to have the *exact* prior/likelihood updates, you need to take into account the max_a dynamics.\n\nI also think you should omit the \\\"gamma\\\" stuff entirely.\nRegret bounds O(\\sqrt{T}) make absolutely no sense in a discounted setting... 
for any discount < 1 the regret is bounded by O(1/(1-\\gamma)).\", \"title\": \"Making progress, but still serious problems\"}", "{\"comment\": \"Following the discussion with the authors above, I would say that I don't think the characterization of their algorithm as \\\"a special case of Dropout\\\" is fair.\nYes, both papers attempt to find an approximate form of posterior for Q-values... but I think this is one method (actually more similar to \\\"RLSVI\\\") and Dropout sampling on the final layer is another method.\n(Dearden et al., 1998) is another relevant comparison to add, but they do not correctly propagate uncertainty over multiple steps of TD error.\n\nI have personally had some concerns with the dropout-as-posterior, particularly for RL settings.\nSections 2.1 and 2.2 of this paper https://arxiv.org/abs/1806.03335 outline some of these.\nAlso, it looks like this paper provides more empirical evidence of DropoutTS's poor performance.\n\nThat said, I agree with many of your other points, and I add severe concerns over the quality of the analysis.\", \"title\": \"I agree the paper should not be accepted, but this algorithm is not just a special case of \\\"Dropout TS\\\".\"}", "{\"title\": \"This is an interesting line of research, and it would be impactful to get this answer right!\", \"comment\": \"Dear Ian,\n\nWe would also like to thank you for the time you dedicated to kindly reading the revised version of our paper. As you mentioned, based on your and our four reviewers\\u2019 thoughtful comments, we improved the clarity of the presentation. We believe that the merit of OpenReview helps authors to deliver polished and influential research contributions. In this regard, we would be grateful if you could leave a comment for our AnonReviewer1 regarding drop-out and its correspondence to Thompson sampling. We already referred our AnonReviewer1 to the discussion in Appendix A of your BootstrapDQN paper. Moreover, we made an additional empirical study on four Atari games to show the deficiency of dropout in providing a reasonable exploration and exploitation tradeoff. We would appreciate it if you could take the time and leave a comment in the corresponding thread.\n\nRegarding the confidence set C_t: the confidence set C_t is defined in Section 4.\n\nRegarding the discount factor: both Theorem 1 and Theorem 2 hold for any discount factor 0 <= \\gamma <= 1. We get a tight bound if we replace \\sqrt{H} with the smaller quantity ||(1,\\gamma,\\gamma^2,...,\\gamma^{H-1})||_2. We addressed this in a remark in the latest version.\n\nRegarding the choices of prior/likelihood: We apologize that the choices of prior and likelihood were not clear from the main text. We would like to restate that we do not specify the choices of prior and likelihood. They can be anything as long as they satisfy the set of assumptions on page 6, e.g., sub-Gaussianity.\n\nRegarding the inconsistency: Conditioned on any data history H_t, the posterior over w is identical to the posterior of the optimal parameter. Similar theoretical justification is also deployed in Russo et al. 2014, \\u201cLearning to Optimize Via Posterior Sampling\\u201d page 10, as well as many of your papers, e.g., \\u201c(More) Efficient Reinforcement Learning via Posterior Sampling\\u201d lemma 1.\n\nRegarding your statement on \\u201cthis is not a model-free algorithm if we know the likelihood\\u201d: Theoretically, given a w, the knowledge of the likelihood does not determine a model. 
These algorithms also neither require constructing a model nor storing any MDP model parameters (e.g., the transition kernel).\n\nIn model-based PSRL, we generally specify the problem with a prior over MDP models, as well as a likelihood over state transitions and reward processes. These quantities are given and known in the model-based PSRL framework. Consequently, in model-free PSRL, we specify the problem with a prior over Q and a likelihood over the return, where similarly these quantities are given and known.\n\nWe agree with you that when the prior and the likelihood functions are arbitrary, computing the posterior, as well as sampling from it, can be computationally hard. As you know, this is a fundamental issue with Bayesian methods. It is also an unsolved issue for model-based methods, e.g., in continuous MDPs. While we are excited about this line of research, we left the study of relaxing this computational complexity for future work.\n\nWe would like to thank you again for taking the time to leave thoughtful comments, and we appreciate your positive assessment of this line of research. We would also be grateful if you could leave a comment on our AnonReviewer1 review regarding the drop-out discussion. \n\nSincerely,\nAuthors\"}", "{\"comment\": \"Thank you to the authors for trying to improve the paper and analysis.\nSome parts of the paper have improved, but there are still many parts that are difficult to follow.\n(e.g. the confidence set C_t is not introduced before the Appendix, and a gamma discount appears in an undiscounted analysis)\n\nRather than get bogged down in small details, I want to highlight at least one fundamental error in the analysis of PSRL (Theorem 1).\nStating this clearly should be enough to convince a third party that this analysis needs more work.\n\n\n## Main theorem claim + prior/posterior disconnect\n\nThe authors claim \\\"the first model-free theoretical guarantee for continuous state-action space MDP, beyond the tabular setting\\\".\nThis means that the PSRL algorithm of Theorem 1 should maintain its posterior over $w$ without an underlying model.\n\nThe description of the PSRL algorithm (and associated regret bound) is not tied to any specific choice of prior/likelihood format.\nThis statement is itself unclear: do the authors mean this result to hold for all choices of prior/likelihood, or specifically using a Gaussian form for PSRL updates?\nEither way, the application of the \\\"posterior sampling lemma\\\" on page 23 (that conditioned on any data H_t, the sampled posterior is identically distributed to the optimal value) is inconsistent.\n\nThere are two main options here:\n\n a - If their \\\"PSRL\\\" is using a model-free Gaussian form of the optimal value (per Gaussian linear bandits), then this is not the correct posterior for the Bayesian decision rule on the optimal policy for all underlying MDPs.\n To see this, note that the *optimal* policy includes a max operator over actions; this breaks Gaussian conjugacy even if the rewards are Gaussian... this is the main difficulty in prior analyses of RLSVI (e.g. https://arxiv.org/abs/1402.0635).\n \n b - If PSRL is using the correct form of the posterior, then it must be using the information about the likelihood of the \\\"noise process\\\" $\\nu$ and thus this is not a model-free algorithm.\n This setting is most similar to prior work on PSRL with generalization (e.g. 
https://arxiv.org/abs/1406.1853, https://arxiv.org/abs/1709.04047)\n\nTo remedy this, the authors need to be much more clear about what prior/posterior sampling procedure their PSRL algorithm uses *and* the prior/likelihood of tasks against which they are assessing their algorithm.\nIf the two are the same, then they need to be clear on *how* PSRL is able to be a model-free algorithm and still match the exact posterior of the underlying MDP given any possible data H_t.\n\nThis is an interesting line of research, and it would be impactful to get this answer right!\nUnfortunately, I do not think this proof is correct and so it should not be accepted.\", \"title\": \"Fundamental issues with proof are still not solved\"}", "{\"title\": \"Dropout, as another randomized exploration method\", \"comment\": \"Dear reviewer\nWe would like to bring to your attention the final results of the empirical study you kindly suggested. As we mentioned in our previous comment, we implemented the dropout version of DDQN and compared its performance against BDQN, DDQN, DDQN+, as well as the random policy (the policy which chooses actions uniformly at random). We included the results of this empirical study in our paper. \n\nWe would like to mention that we did not observe much gain beyond the performance of the random policy (please see the discussion in the related work as well as in Section A.6). In the following we provide a summary of the empirical results on four randomly chosen games:\n\nGame | BDQN | DDQN | DDQN+ | DropoutDDQN | RandomPolicy\nCrazyClimber | 124k | 84k | 102k | 19k | 11k\nAtlantis | 3.24M | 40k | 65k | 7.7k | 13k\nEnduro | 1.12k | 0.38k | 0.32k | 0.27k | 0\nPong | 21 | 18.8 | 21 | -18 | -20.7\n\nIn conclusion, in line with the arguments in prior works on the deficiency of dropout methods in providing reasonable randomization in RL problems, we also empirically observed that dropout results in performance worse than that of a plain epsilon-greedy DDQN, and roughly similar to that of the random policy, on at least four Atari games.\"}", "{\"title\": \"Updated draft-factored Appendix\", \"comment\": \"Dear Ian,\nI would like to inform you that I uploaded the revised version of the draft as promised. Based on your and the reviewers' great comments, I significantly improved the Appendix and, with high probability, I think it is now much clearer and better factored. I would be grateful to have your insightful feedback again.\"}", "{\"title\": \"The paper presents some strong experimental results.\", \"comment\": \"We appreciate the thoughtful and detailed comments by the reviewer. In the following, we address the comments raised by the reviewer, which helped us to revise the draft and make our paper more accessible.\n\nRegarding the dropout: Drop-out, as another randomized exploration method, is proposed by Gal & Ghahramani (2016), but Osband et al. (2016) argue that the estimated uncertainty is deficient and that it is hard to derive a suitable exploration and exploitation trade-off from it. Please look at Appendix A in Osband et al. 2016, https://arxiv.org/pdf/1602.04621.pdf.\nOsband et al. 2016 state that \\u201cThe authors of Gal & Ghahramani (2016) propose a heteroskedastic variant which can help, but does not address the fundamental issue that for large networks trained to convergence all dropout samples may converge to every single datapoint... 
even the outliers.\u201d This issue with dropout methods, namely that they result in an ensemble of many models which are all almost the same, is also observed in the adversarial attacks and defense community, e.g. Dhillon et al. 2018: \u201cthe dropout training procedure encourages all possible dropout masks to result in similar mappings.\u201d Furthermore, after the reviewer\u2019s comment, we also implemented DDQN-dropout and ran it on 4 randomly chosen Atari games (among those we ran for less than 50M time steps; please consider that these experiments are expensive). We observed that the randomization in DDQN-dropout is deficient and results in a performance worse than DDQN on these 4 Atari games (the experiments are halfway through and still running; the statement is based on DDQN performance after seeing half of the data). We will add this further study in the final version of the paper.\n\nRegarding the model-based approaches: Model-based approaches are provably sample efficient, but they are mainly not scalable to high-dimensional settings.\n\nRegarding the mentioned papers: We appreciate your suggestions and added both of the mentioned papers to our paper.\n\nRegarding Table 2: As mentioned in the draft, the reported scores in Table 2 are the scores directly reported in their original papers. As discussed in the paper, we are not aware of the detailed implementation and environment choices in those papers, since their implementation code is not publicly available. To see why the comparison through Table 2 can be problematic, please, for example, look at the reported scores of DDQN in Bootstrapped DQN (Deep Exploration via Bootstrapped DQN) and compare them with the reported scores in the original DDQN paper. You can see that there is a huge gap. For example, some of them are as follows:\nAlien (2.9k, 4k), Amidar (0.7k, 2.1k), Assault (5k, 7k), Atlantis (65k, 770k), where the first set of scores are the DDQN scores in the original DDQN paper, and the second set are the scores of DDQN reported in the Bootstrapped DQN paper. As you can see, directly reporting scores is not the best way of comparison and reasoning. Regarding the evaluation phase, we agree with the reviewer that scores in the evaluation phase are important when asymptotic performance is concerned, but they are not sufficiently informative when regret and sample complexity are the measures of interest.\n\n\nWe hope that the reviewer will kindly consider our replies, especially about the dropout methods, and take them into account when assessing the final scores.\"}", "{\"title\": \"Simple and elegant algorithm + Strong empirical results in standard benchmarks.\", \"comment\": \"We would like to thank the reviewer for the helpful comments. We appreciate the comments and believe that they significantly improve the clarity of our paper.\n\n\nFirst, we would like to apologize for the typos. We addressed them based on the extensive review by AnonRev2.\n\nRegarding the proof: Based on Ian Osband\u2019s and AnonRev2\u2019s comments, we polished the proof and believe it is now in good shape. We would like to bring the reviewer\u2019s attention to the revised version of the Appendix, where we further improved the clarity of the expressions and derivations.\n\nRegarding the noise model: As the reviewer mentioned, the approach in the mentioned paper [1] is similar to BDQN (BDQN was publicly available as a workshop paper before [1] appeared). 
As the reviewer also mentioned, it is an interesting approach to estimate the noise level, which could be helpful in practice. We would like to bring the reviewer\u2019s attention to our Lemma 1 and Eq. 3, where we show that the noise level is just a constant scale of the confidence interval, which vanishes as O(1/\\sqrt{t}). Therefore, the noise level does not have a critical, driving effect when the confidence interval is small. To be more accurate, it is also worth noting that one can deploy related Bernstein-based approaches as in \u201cMinimax Regret Bounds for Reinforcement Learning\u201d for the noise estimation, but this approach results in a more complicated algorithm as well as an additional fixed but intolerably big term (cubic in the dimension) in the regret.\n\n\nRegarding the design choice \u201cd\u201d: We apologize for the lack of clarity in stating that \u201cd\u201d is a design choice in the theorem and cannot be set to a small value unless the assumption holds. We restated this in the revised draft and made it clear.\n\n\nRegarding the connection between linear TS and Theorem 1: If the prior and posterior are both Gaussian, similar to the linear bandits case studied in Russo et al. 2014, \u201cLearning to optimize via posterior sampling\u201d, linear TS is equivalent to the PSRL algorithm mentioned in Theorem 1.\n\n\nWe appreciate the reviewer\u2019s comments and believe they help to improve the current state of the paper. We would be grateful if the reviewer could look at the newly uploaded draft.\"}", "{\"title\": \"Significant improvement in the draft.\", \"comment\": \"We would like to thank the reviewer for taking the time to leave a clear, precise, and thoughtful review. We appreciate the reviewer\u2019s comments and believe that they significantly improved the clarity of our paper. In the following, we describe how we addressed them.\n\n\n1) We agree with the reviewer that the use of \u201csample\u201d for both experience tuples and Thompson sampling draws is confusing. We had not noticed this until the reviewer mentioned it. We appreciate this comment and addressed it in the new draft.\n\n2) We restated it in the abstract.\n3) We changed the statement.\n4) The reviewer is totally right. We fixed the explanation.\n5) Fixed.\n6) We apologize for the confusion. At a high level, we motivate the fact that both BDQN and DDQN approach a similar linear problem (same target value), but one via Bayesian linear regression and the other via linear regression.\n\n7) Full stop added. We appreciate the comment.\n8) We elaborated more on this statement and expressed that it is an additional computation cost of BDQN over DQN for Atari games. \n\n9) That is a great point. We added a reference. \n10) As the reviewer knows, most theoretical advances and analyses in the literature are mainly dedicated to OFU-based approaches. So far, most of the guarantees for Thompson sampling methods are not as tight as OFU for many problems. In this part of our paper, we try to motivate Thompson sampling as a method which works better than OFU in practice even though the guarantees aren\u2019t as tight. In addition, we want to motivate that TS-based methods are also computationally more favorable than OFU. 
Furthermore, later in the theoretical study, we also prove the regret bound for OFU, but as the reviewer can see, the OFU-based methods in the linear models, as usual, consist of solving NP-hard problems due to their inner-loop optimization steps.\n\n11) Thanks for pointing it out; we added it to the draft.\n12) We rephrased it as you suggested. \n13) That is a great point. Fixed.\n14) The definition of a_TS is restated in the new draft. W is the matrix of the most recently sampled weight vectors.\n15) We addressed that after the submission. \n16) We added a more detailed explanation. \n17) We restated these.\n18) Addressed.\n19) Fixed.\n20) Addressed.\n21) Addressed.\n22) The footnotes unexpectedly disappeared. We appreciate your precise comment and addressed it.\n23) Fixed.\n24) Addressed.\n25) Great point; addressed.\n26) Yes, we will address this. \n27) That is a great point; it is a typo, and we addressed it.\n27) Changed.\n28) Fixed. \n29) Cited.\n30) We provided the detailed algorithm. \n31) Fixed.\n32) Restated.\n33) Restated.\n34) It means that the T and d dependency in the upper bound matches the T and d dependency in the lower bound. Please let us know if it is required to rephrase it.\n\n35) Fixed.\n36) Addressed.\n37) It means that one of the games among those 5 games in Levine et al. 2017 is common with our set of 15 games.\n\n38) Removed.\n39) Addressed.\n40) Addressed.\n41) Fixed.\n42) The pretrained DQN is just used to ease the hyperparameter tuning. BDQN itself starts from scratch.\n43) Fixed.\n44) Fixed.\n45) Fixed.\n46) Fixed.\n47) Restated.\n48) We had the definition of the prior and likelihood in our previous submission, where one of our reviewers mentioned that it is obvious and there is no need to express the prior and likelihood for Bayesian linear regression.\n\n49) Fixed.\n50) Fixed.\n51) Fixed.\n\nWe would like to thank the reviewer again for taking the time to leave the thoughtful and precise review. We applied the rest of the comments directly onto the revised draft. These comments significantly helped us to improve our paper.\"}", "{\"title\": \"Strong empirical performance as compared with appropriate baselines - More clear description for the main Algorithm.\", \"comment\": \"We would like to thank the reviewer for taking the time to leave a thoughtful review. In the following, we address the comments raised by the reviewer, which helped us to revise the draft and make it more accessible.\n\nRegarding the posterior updates: We apologize for the lack of clarity. The Cov is computed by applying Bayesian linear regression on a subset of randomly sampled tuples in the experience replay buffer. We expressed it in more detail in Eq. 3 of the new draft. Regarding the updates of the target models: Similar to the DQN paper, we update the target feature network every T^{target} steps and set its parameters to the feature network parameters. We also update the linear part of the target model, i.e., w^{target}, every T^{Bayes target} steps and set the weights to the mean of the posterior distribution.\n\nRegarding the Thompson sampling in Algorithm 1: As the reviewer also pointed out, Thompson sampling in multi-armed bandits can be more efficient if we sample from the posterior distribution at each time step, i.e., T^{sample}=1. But as has been proven in our Theorem 1, as well as Osband et al. 13 and Osband et al. 14, the true Thompson sampling can have T^{sample}>1. 
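To make the resampling cadence concrete, here is a minimal sketch of the mechanism under discussion: one posterior draw per action is held fixed for T_sample steps before redrawing. The toy environment and all names are assumptions for illustration; this is not the paper's Algorithm 1.

```python
import numpy as np

class ToyEnv:
    """Stand-in environment, purely illustrative."""
    def __init__(self, d, rng):
        self.d, self.rng = d, rng
    def reset(self):
        return self.rng.normal(size=self.d)
    def step(self, action):
        return self.rng.normal(size=self.d), 0.0, self.rng.random() < 0.05

def run_thompson_loop(env, posteriors, T_sample, n_steps, rng):
    """Hold one posterior draw per action fixed for T_sample steps, then redraw."""
    x = env.reset()
    W = None
    for t in range(n_steps):
        if t % T_sample == 0:  # periodic redraw of the Thompson sample
            W = [rng.multivariate_normal(mean, cov) for mean, cov in posteriors]
        q_values = [w @ x for w in W]  # features = raw state here, for brevity
        x, _, done = env.step(int(np.argmax(q_values)))
        if done:
            x = env.reset()

rng = np.random.default_rng(0)
d, n_actions = 8, 4
posteriors = [(np.zeros(d), np.eye(d)) for _ in range(n_actions)]
run_thompson_loop(ToyEnv(d, rng), posteriors, T_sample=100, n_steps=1000, rng=rng)
```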
Moreover, as expressed in Appendix A.5, as long as T^{sample} is on the same order as the horizon length, the choice of T^{sample} does not critically affect the performance. As the reviewer also mentioned, sampling from the posterior at the beginning of each episode could marginally enhance the performance. While sampling at the beginning of episodes does not dramatically affect BDQN's performance, it provides additional information to the algorithm about the game settings, which might compromise a fair comparison against other algorithms.\n\nWe apologize for the lack of a concrete definition of PSRL, and we agree with the reviewer that it should have been clearly defined. We added a clear definition of PSRL as well as a new algorithm block (now Alg. 2). The theoretical contribution in this paper suggests that if an RL agent follows posterior sampling over the weights w for the exploration and exploitation trade-off, the agent\u2019s regret is upper bounded by \\tilde{O}(d\\sqrt{T}). Similar to the approach in Russo et al. 14 for linear bandits, if the prior and the likelihood are conjugates of each other and Gaussian, then BDQN is equivalent to PSRL. \n\u201cA side point: It is an interesting observation that in the proof of our theorems, we construct self-normalized processes which result in a Gaussian approximation of confidence. The Gaussian approximation of confidence has also been deployed for linear bandits in \u201cLinear Thompson Sampling Revisited\u201d. Therefore, the choice of Gaussian is well motivated.\u201d\n\nWe appreciate the comment by the reviewer on the O notation. We fixed it in the new draft.\n\nRegarding Table 2: we added vertical lines to both tables, Table 2 and Table 3.\n\nRegarding the safety discussion in A.8: We apologize if the statement was not clear in the discussion on safety. We added a detailed explanation in addition to a new figure for the proof-of-concept to clarify our statement. Generally, in the study of safety in RL, an RL agent avoids taking actions with a high probability of low return. If the chance of receiving a low return under a specific action is high, then that action is not a favorable action. In Appendix A.8 we show how the approaches studied in this paper also approximate the distribution over the return, which can further be used for safe exploration.\n\nTypos: Thanks for pointing them out. Also, thanks to Reviewer 2, we addressed the typos in the new draft.\n\nRegarding distributional RL: The approach in Bellemare et al. 2017 approximates the distribution of the return (\\sum_t \\gamma^t r_t | x,a) rather than the distribution (or uncertainty) over the expected return Q(x,a) = E[\\sum_t \\gamma^t r_t | x,a]. It is worth noting that the mean of the return distribution is the Q function. Conceptually, approximating the distribution of the return is a redundant effort if our goal is to approximate the Q function. The approach in Bellemare et al. 2017 proposes to first deploy a deep neural network and approximate the return distribution, then apply a simple bin-based discretization technique to compute the mean of the approximated distribution, which is again the approximated Q function. Interestingly, Bellemare et al. 2017 empirically show that this approach results in a better approximation of the Q function. 
This approach is a variant of the Q-learning algorithm.\"}", "{\"title\": \"General reply to reviewers\", \"comment\": \"Dear reviewers\\nWe would like to thank the reviewers for taking the time to leave thoughtful reviews. Given these feedbacks, we have significantly improved the draft and hope the reviewers will take this into account when assessing the final scores. We appreciate the reviewers for the time and effort they dedicated to our paper. Please find individual replies to each of the reviews in the respective threads. Based on the reviewers' reviews and the comment by Ian Osband we revised the draft and uploaded the new version.\"}", "{\"title\": \"-\", \"review\": \"This paper proposes a method for more efficient exploration in RL by maintaining uncertainty estimates over the learned Q-value function. It is comparable to Double DQN (DDQN) but uses its learned uncertainty estimates with Thompson sampling for exploration, rather than \\\\epsilon-greedy. Empirically the method is significantly more sample-efficient for Atari agents than DDQN and other baselines.\\n\\n=====================================\", \"pros\": \"Introduction and preliminaries section give useful background context and motivation. I found it easy to follow despite not having much hands-on background in RL.\\n\\nProposes a novel (to my knowledge) exploration method for RL which intuitively seems like it should work better than \\\\epsilon-greedy exploration. The method looks simple to implement on top of existing Q-learning based methods and has minimal computational and memory costs.\\n\\nStrong empirical performance as compared with appropriate baselines -- especially to DDQN(+) where the comparison is direct with the methods only differing in exploration strategy.\\n\\nGood discussion of practical implementation issues (architecture, hyperparameters, etc.) in Appendix A.\\n\\n=====================================\\n\\nCons/questions/suggestions/nitpicks:\", \"algorithm_1_line_11\": \"\\u201cUpdate W^{target} and Cov\\u201d -- how? I see only a brief mention of how W^{target} is updated in the last paragraph of Sec. 3, but it\\u2019s not obvious to me how the algorithm is actually implemented from this, and I don\\u2019t see any mention of how Cov is updated.\", \"algorithm_1\": \"I\\u2019d like to know more about how sample-efficiency varies with T^{sample} given that T^{sample}>1 is doing something other than true Thompson sampling. Does the regret bound hold with T^{sample}>1? Also, based on the discussion in Appendix A, approximating the episode length seems to be the goal in choosing a setting of T^{sample} -- so why not just always resample at the beginning of each episode instead of using a constant T^{sample}?\", \"theorem_1\": \"there\\u2019s a common abuse of big-O notation here that should be fixed for a formal statement -- O(f(n)) by definition is a set corresponding to an upper-bound, so this should probably be written as g(n) \\\\in O(f(n)) rather than g(n)<=O(f(n)). (Or alternatively, just rewritten without big-O notation.)\", \"table_2\": \"should be reformatted to make it clear that the rightmost 3 columns are not additional baseline methods (e.g. adding a vertical line would be good enough).\\n\\nAppendix A.8, \\u201cA discussion on safety\\u201d: this section should either be much more fleshed out or removed. I didn\\u2019t understand the statement at the end at all -- \\u201cone can... come up with a criterion for safe RL just by looking at high and low probability events\\u201d -- huh? 
What is even meant by \\u201csafe RL\\u201d in this context? Nothing is referenced.\\n\\nOverall, much of the writing seems quite rushed with many typos and grammatical errors throughout. This should be cleaned up for a final version. To give a particularly common example, there are many inline references that do not fit in the sentence and distract from the flow -- these should be changed to \\\\citep.\\n\\nHow does this compare with \\u201cA Distributional Perspective on Reinforcement Learning\\u201d (Bellemare et al., ICML 2017) both in terms of the approach and performance? The proposed method seems to at least superficially share motivation with this work (and uses the same Atari benchmark, as far as I can tell) but it is not discussed or compared.\\n\\n=====================================\\n\\nOverall, though many parts of the paper could use significant cleanup and clarification, the paper proposes a novel yet relatively simple and intuitive approach with strong empirical performance gains over comparable baselines.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}", "{\"title\": \"Major clarity issues\", \"review\": \"Update after feedback: I would like to thank the authors for huge work done on improving the paper. I appreciate the tight time constrains given during the discussion phase and big steps towards more clear paper, but at the current stage I keep my opinion that the paper is not ready for publication. Also variability of concerns raised by other reviewers does not motivate acceptance.\\n\\nI would like to encourage the authors to make careful revision and I would be happy to see this work published. It looks very promising.\", \"just_an_example_of_still_unclear_parts_of_the_paper\": \"the text between eq. (3) and (4). This describes the proposed method, together with theoretical discussions this is the main part of the paper. As a reader I would appreciate this part being written detailed, step by step.\\n=========================================================\\n\\nThe paper proposes the Bayesian version of DQN (by replacing the last layer with Bayesian linear regression) for efficient exploration. \\n\\nThe paper looks very promising because of a relatively simple methodology (in the positive sense) and impressive results, but I find the paper having big issues with clarity. There are so many mistakes, typos, unclear statements and a questionable structure in the text that it is difficult to understand many parts. In the current version the paper is not ready for publication. \\n\\nIn details (more in the order of appearance rather than of importance):\\n1. It seems that the authors use \\u201csample\\u201d for tuples from the experience replay buffer and draws W from its posterior distribution (at least for these two purposes), which is extremely confusing\\n2. pp.1-2 \\u201cWe show that the Bayesian regret is bounded by O(d \\\\sqrt{N}), after N time steps for a d-dimensional feature map, and this bound is shown to be tight up-to logarithmic factors.\\u201d \\u2013 maybe too many details for an abstract and introduction and it is unclear for a reader anyway at that point\\n3. p.1 \\u201cA central challenge in reinforcement learning (RL) is to design efficient exploration-exploitation tradeoff\\u201d \\u2013 sounds too strong. 
Isn\\u2019t the central challenge to train an agent to get a maximum reward? It\\u2019s better to change to at least \\u201cOne of central challenges\\u201d\\n4. p.1 \\u201c\\u03b5-greedy which uniformly explores over all the non-greedy strategies with 1 \\u2212 \\u03b5 probability\\u201d \\u2013 it is possible, but isn\\u2019t it more conventional for an epsilon-greedy policy to take a random action with the probability epsilon and acts greedy with the probability 1 \\u2013 epsilon? Moreover, later in Section 2 the authors state the opposite \\u201cwhere with \\u03b5 probability it chooses a random action and with 1 \\u2212 \\u03b5 probability it chooses the greedy action based on the estimated Q function.\\u201d\\n5. p.1 \\u201cAn action is chosen from the posterior distribution of the belief\\u201d \\u2013 a posterior distribution is the belief\\n6. p.2 \\u201cand follow the same target objective\\u201d \\u2013 if BDQN is truly Bayesian it should find a posterior distribution over weights, whereas in DDQN there is no such concept as a posterior distribution over weights, therefore, this statement does not sound right\\n7. p.2 \\u201cThis can be considered as a surrogate for sample complexity and regret. Indeed, no single measure of performance provides a complete picture of an algorithm, and we present detailed experiments in Section 4\\u201d \\u2013 maybe too many details for introduction (plus missing full stop at the end)\\n8. p.2 \\u201cThis is the cost of inverting a 512 \\u00d7 512 matrix every 100,000 time steps, which is negligible.\\u201d \\u2013 doesn\\u2019t this depend on some parameter choices? Now the claim looks like it is true unconditionally. Also too many details for introduction\\n9. p.2 \\u201cOn the other hand, more sophisticated Bayesian RL techniques are significantly more expensive and have not lead to large gains over DQN and DDQN.\\u201d \\u2013 it would be better to justify the claim with some reference\\n10. Previous work presented in Introduction is a bit confusing. If the authors want to focus only on Thompson Sampling approaches, then it is unclear, why they mentioned OFU methods. If they mention OFU methods, then it is unclear why other exploration methods are not covered (in Introduction). It is better to either move OFU methods to Related Work completely, or to give a taste of other methods (for example, from Related Work) in Introduction as well\\n11. p.3 \\u201cConsider an MDP M as a tuple <X , A, P, P0, R, \\u03b3>, with state space X , action space A, the transition kernel P, accompanied with reward function of R, and discount factor 0 \\u2264 \\u03b3 < 1.\\u201d \\u2013 P_0 is not defined\\n12. p.4 \\u201cA common assumption in DNN is that the feature representation is suitable for linear classification or regression (same assumption in DDQN), therefore, building a linear model on the features is a suitable choice.\\u201d \\u2013 the statement is more confusing than explaining. Maybe it is better to state that the last fully connected layer, representing linear relationship, in DQN is replaced with BLR in the proposed model\\n13. p.5 In eq. (3) it is better to carry definition of $\\\\bar{w}_a$ outside the Gaussian distribution, as it is done for $\\\\Xi_a$\\n14. p.5 The text between eq. (3) and (4) seems to be important for the model description and yet it is very unclear: how $a_{TS}$ is used? \\u201cwe draw $w_a$ follow $a_{TS}$\\u201d \\u2013 do the authors mean \\u201cfollowing\\u201d (though it is still unclear with \\u201cfollowing\\u201d)? 
What does notation $[W^T \\\\phi^{\\\\theta} (x_{\\\\tau})]_{a_{\\\\tau}}$ denote? Which time steps do the authors mean?\\n15. p.5 The paragraph under eq. (4) is also very confusing. \\u201cto the mean of the posterior A.6.\\u201d \\u2013 reference to the appendix without proper verbal reference. Cov in Algorithm 1 is undefined, is it equal to $\\\\Xi$? Notation in step 8 in Algorithm 1 is too complicated.\\n16. Algorithm 1 gives a vague idea about the proposed algorithm, but the text should be revised, the current version is very unclear and confusing\\n17. pp.5-6 The text of the authors' attempts to reproduce the results of others' work (from \\\"We also aimed to implement...\\\" to \\\"during the course of learning and exploration\\\") should be formalised\\n18. p. 6 \\\"We report the number of samples\\\" - which samples? W? from the buffer replay?\\n19. p. 6 missing reference for DDQN+\\n20. p. 6 definition of SC+ and references for baselines should be moved from the table caption to the main text of the paper\\n21. p. 6 Table 3 is never discussed, appears in a random place of the text, there should be note in its reference that it is in the appendix\\n22. p.6 Where is the text for footnotes 3-6?\\n23. p.6 Table 2 may be transposed to fit the borders\\n24. p.6 (and later) It is unclear why exploration in BDQN is called targeted\\n25. p.7 Caption of Figure 3 is not very good\\n26. p.7 Too small font size of axis labels and titles in plots in Figure 3 (there is still a room for 1.5 pages, moreover the paper is allowed to go beyond 10 pages due to big figures)\\n27. p.7 Figure 3. Why Assault has different from the others y-axis? Why in y-axis (for the others) is \\\"per episode\\\" and x-axis is \\\"number of steps\\\" (wise versa for Assault)?\\n27. Section 5 should go before Experiments\\n28. p. 7 \\u201cWhere \\u03a8 is upper triangular matrix all ones 6.\\u201d \\u2013 reference 6 should be surrounded by brackets and/or preceded by \\\"eq.\\\" and it is unclear what \\u201call ones\\u201d means especially given than the matrix in eq. (6) does not contain only ones\\n29. p. 7 \\u201cSimilar to the linear bandit problems,\\u201d \\u2013 missing citation\\n30. p. 7 PSRL appears in the theorem, but is introduced only later in Related work\\n31. p. 7 \\u201cProof: in Theorem. B\\u201d \\u2013 proof is given in Appendix B?\\n32. p. 8 Theorem discussion, \\u201cgrows not faster than linear in the dimension, and \\\\sqrt(HT)\\u201d \\u2013 unclear. Is it linear in the product of dimension (of what?) and \\\\sqrt(HT)?\\n33. p.8 \\u201cOn lower bound; since for H = 1\\u2026\\u201d \\u2013 what on lower bound?\\n34. p.8 \\u201cour bound is order optimal in d and T\\u201d \\u2013 what do the authors mean by this?\\n35. p.8 \\\"while also the state of the art performance bounds are preserved\\\" - what does it mean?\\n36. p.8 \\\"To combat these shortcomings, \\\" - which ones?\\n37. p.8 \\\"one is common with our set of 15 games which BDQN outperformS it...\\\" - what is it?\\n38. p.9 \\\"Due to the computational limitations...\\\" - it is better to remove this sentence\\n39. p.9 missing connection in \\\"where the feature representation is fixed, BDQN is given the feature representation\\\", or some parts of this sentence should be removed?\\n40. p.9 PAC is not introduced\\n41. pp.13-14 There is no need to divide Appendices A.2 and A.3. In fact, it is more confusing than helpful with the last paragraph in A.2 repeating, sometimes verbatim, the beginning of the first paragraph in A.3\\n42. 
In the experiments, do the authors pre-train their BDQN with DQN? In this case, it is unfair to say that BDQN learns faster than DDQN if the latter is not pre-trained with DQN as well. Or is pre-training with DQN is used only for hyperparameter tuning?\\n43. p.14 \\u201cFig. 4 shows that the DDQN with higher learning rates learns as good as BDQN at the very beginning but it can not maintain the rate of improvement and degrade even worse than the original DDQN.\\u201d \\u2013 it seems that the authors tried two learning rates for DDQN, for the one it is clear that it is set to 0.0025, another one is unclear. The word \\u201coriginal\\u201d is also unclear in this context. From the legend of Figure 4 it seems that the second choice for the learning rate is 0.00025, but it should be stated in the text more explicitly. The legend label \\u201cDDQN-10xlr\\u201d is not the best choice either. It is better to specify explicitly the value of the learning rate for both DDQN\\n44. p.15 \\u201cAs it is mentioned in Alg. 1, to update the posterior distribution, BDQN draws B samples from the replay buffer and needs to compute the feature vector of them.\\u201d \\u2013 B samples never mentioned in Algorithm 1\\n45. p.15 \\u201cduring the duration of 100k decision making steps, for the learning procedure,\\u201d \\u2013 i) \\u201cduring \\u2026 duration\\u201d, ii) what did the authors meant by \\u201cdecision making steps\\u201d and \\u201cthe learning procedure\\u201d?, and iii) too many commas\\n46. p.15 \\u201cwhere $\\\\tilde{T}^{sample}$, the period that of $\\\\tilde{W}$ is sampled our of posterior\\u201d \\u2013 this text does not make sense. Is \\u201cour\\u201d supposed to be \\u201cout\\u201d? \\u201c\\u2026 the number of steps, after which a new $\\\\tilde{W}$ is sampled from the posterior\\u201d?\\n47. p.15 \\u201c$\\\\tilde{W}$ is being used just for making Thompson sampling actions\\u201d \\u2013 could the authors be more specific about the actions here?\\n48. p.16 \\u201cIn BDQN, as mentioned in Eq. 3, the prior and likelihood are conjugate of each others.\\u201d \\u2013 it is difficult to imagine that an equation would mention anything and eq. (3) gives just the final formula for the posterior, rather than the prior and likelihood\\n49. p.16 The formula after \\u201cwe have a closed form posterior distribution of the discounted return, \\u201d is unclear\\n50. p.17 \\u201cwe use \\u03c9 instead of \\u03c9 to avoid any possible confusion\\u201d \\u2013 are there any differences between two omegas?\\n51. p.17 what is $\\\\hat{b}_t$?\\n\\nThere are a lot of minor mistakes and typos also, I will add them as a comment since there is a limit of characters for the review.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}", "{\"title\": \"Appealing idea; poor delivery.\", \"review\": \"Summary: The paper proposes an approximate Thompson Sampling method for value function learning when using deep function approximation.\", \"research_context\": \"Thompson Sampling is an algorithm for sequential decision making under uncertainty that could provide efficient exploration (or lead to near optimal cumulative regret) under some assumptions. The most critical one is the ability to sample from the posterior distribution over problem models given the already collected data. 
In most cases, this is not feasible, so we need to rely on approximate posteriors, or, informally, on distributions that somehow assign reasonable mass to plausible models. The paper tries to address this.", "main_idea": "In particular, the idea here is to simultaneously train a deep Q network while choosing actions based on samples for the linear parameters of the last layer. On one hand, this seems sensible: a distribution over the last layer weights provides an approximate posterior over Q functions, as needed, and a linear model could work after learning an appropriate representation. On the other, this seems doable: there are closed-form updates for Bayesian linear regression when using a Gaussian prior and likelihood, as proposed in the paper.", "pros": ["Simple and elegant algorithm.", "Strong empirical results in standard benchmarks."], "cons": ["The paper is very poorly written; the number of typos is countless, and in general the paper is quite hard to read and to follow.", "I share the concerns expressed in the first public comment regarding the correctness of the theoretical statements (Theorem 1) or, at least, the proposed proofs. Notation is very hard to parse, and the meaning of some claims is not clear ('the PSRL on w', 'we use w instead of w', 'the estimated \\hat{b_t}', '\\pi_t(x, a) = a = ...'). I'd appreciate a clear proof strategy outline. In addition, it'd be quite useful if the authors could highlight the specific technical contributions of the proposed analysis, and how they rely on and relate to previous analyses (Abbasi-Yadkori et al., De la Pe\u00f1a et al., Osband et al., etc).", "I think Table 1, Figure 1, and Figure 2 are not particularly useful and could be removed."], "questions": ["Last year, there was a paper published in ICLR [1] that proposed basically the same algorithm for contextual bandits. They reported that it was essential to also learn the noise levels for different actions, while in this work \\sigma_\\epsilon is assumed known, fixed, and common across actions (see paragraph to the left of Figure 2). I'm curious why not learn it for each action using an Inverse-Gamma prior as proposed in [1], or if this was actually something you tried, and what the performance consequences were. In principle, my hunch is it should have a strong impact on the amount of exploration imposed by the algorithm (see Equation 3) over time.", "A minor comment: the dimension 'd' in Theorem 1 is a *design choice* in the proposed algorithm. Of course, Theorem 1 relies on some assumptions that may be harder to satisfy for decreasing values of 'd', but I think some further comment can be useful as some readers may think the theorem is indeed suggesting we should set 'd' as small as possible...", "More generally, what are the expected practical consequences of the mismatch between the proposed algorithm (representation is learned alongside the linear TS) and the setup in Theorem 1 (representation is fixed or known, and prior and likelihood are not misspecified)?"], "conclusion": "While definitely a promising direction, the paper requires significant further work, writing improvement, and polishing. At this point, I'm unable to certify that the theoretical contribution is correct.\n\n\nI'm willing to change my score if some of the comments above are properly addressed. 
Thanks.\n\n\n\n\n\n[1] - Deep Bayesian Bandits Showdown: An Empirical Comparison of Bayesian Deep Networks for Thompson Sampling.", "rating": "4: Ok but not good enough - rejection", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}", "{"title": "Clarifications on the regret analysis.", "comment": "Dear Ian\nThank you for your interest in this work, and I appreciate your comments. They were helpful in improving the presentation in the appendix. \n\nRegarding the structure of the analysis, I followed the flow in Abbasi-Yadkori et al 2010 to make the theoretical contribution more transparent. Upon your feedback, I changed the presentation in the appendix and refactored pieces such that the appendix is more accessible. The Lemmas and their proofs are factored out from the main body of the theorem proof. More explanation of the components and more detailed comments on the derivation are also provided. \n\na) PSRL-RLSVI: I believe the main confusion is due to the fact that I was not clear enough in stating that the theoretical analysis is for PSRL where the agent knows the exact prior and likelihood, and I apologize for the confusion. If the prior and likelihood are Gaussian, then BDQN (for the fixed representation up to some modification) is equivalent to PSRL; otherwise, BDQN approximates the posterior distribution with a Gaussian distribution, and the bound does not hold. I also added the PSRL algorithm block into the main text. In short, at the beginning of each episode, the algorithm draws a sample w_t from the w posterior distribution and follows w_t\u2019s policy. The prior and posterior need not be Gaussian, but they must be known. \n\nb) Mapping from MDP to w: We consider a class of MDPs for which the optimal Qs are linear. By definition, given an underlying MDP, there exists a w^* (up to some conditions). Clearly, as you also mentioned, the mapping from MDP to w, in general, cannot be defined as a bijective mapping. Therefore, I would avoid saying that w specifies the model and instead say the reverse, as is also mentioned in the paper. In order to prevent any further misunderstanding, I explained it in more detail and also changed a few lines to clarify it as much as possible. \nMoreover, in order to prove the theorem, there is no need to bring the model into the proof picture. I explained it through the model to ease the understanding of the derivation. I admit that I could explain the derivation in a better and clearer way. You can see that the same derivation can be done by adding and subtracting through Eq. 6, the linear model. In order to prevent any confusion and directly carry the message, I wrote the derivation without bringing the MDP model into the proof picture in the new draft. Thanks for this comment; the current derivation, without the MDP model, is now more transparent.\n\nc) High probability bound: When we use a frequentist argument for a bound, we usually get a high-probability bound in either the Bayesian or the frequentist setting, e.g. \u201cMcAllester 1998\nSome PAC-Bayesian Theorems\u201d\nAs you know, when one substitutes \\delta with 1/T (or sometimes 1/T^2) we get log(T) instead of log(1/\\delta) in the bound, as well as an additional positive constant term \\delta T Loss_max (here (1/T) T Loss_max = Loss_max) in the final bound. For example, your paper Osband et al 2013 and Russo et al 2014 follow the same argument. But the bound is not \u201cany time\u201d anymore. In order to simplify the theorem, I set \\delta = 1/T to match the claim in Osband et al 2013. 
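\n\nSpelled out (only a sketch of the standard argument; here B(\delta) denotes the high-probability bound and Loss_max denotes an assumed cap on the total per-run loss, neither of which is defined this way in the paper itself): if Regret(T) \leq B(\delta) with probability at least 1-\delta, and Regret(T) \leq T Loss_max always, then E[Regret(T)] \leq B(\delta) + \delta T Loss_max. Setting \delta = 1/T gives E[Regret(T)] \leq B(1/T) + Loss_max, which is exactly how the log(1/\delta) factor becomes log(T).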
\n\nd) [General discount factor set] I apologize for the confusion. In the main text I talk about a discount factor of 1 (undiscounted), but in the appendix, I define the discount factor \\gamma to be in the closed interval [0, 1]. Please note that the upper bound on the discount factor is 1 and 1 is in the set, i.e., it contains \\gamma=1. So it is indeed more general. For simplicity, I first derived the regret bound for \\gamma=1 and then showed it is extendable to any \\gamma. I elaborated more in the new draft on how to extend the current analyses to any 0\\leq\\gamma\\leq 1. \n\n\nThank you for pointing out the typo in Lemma 2; I fixed that. \n\n\nThe new draft is re-organized and is much more accessible. I\u2019ll upload it to openreview when the website is open again. It would be great if you could look at it and point out the parts of the new draft which require more clarification. Your feedback would again be helpful; your comments are helping to improve the accessibility of the proof.\n\nSincerely yours\nAuthors"}", "{"title": "Lacks novelty, experiments incomplete, results misinterpreted. Clear reject.", "review": "The paper proposes performing Thompson Sampling (TS) using a Bayesian Linear Regressor (BLR) as the action-value function, the inputs of which are parameterized as a deterministic neural net. The authors provide a regret bound for the BLR part of their method and provide a comparison against Double Deep Q-Learning (DDQL) on a series of computer games.", "strengths": ["The paper presents some strong experimental results."], "weaknesses": ["The novelty of the method falls a little short for a full-scale conference paper. After all, it is only a special case of [3] where the random weights are restricted to the top-most layer and the posterior is naturally calculated in closed form. Note that [3] also reports a proof-of-concept experiment in a Thompson Sampling setting.", "Related to the point above, the paper should have definitely provided a comparison against [3]. It is hard to conclude much from the fact that the proposed method outperforms DDQN, which is by design not meant for sample efficiency and effective exploration. A DDQN with Dropout applied on multiple layers and Thompson Sampling followed as the policy would indeed be both a trivial design and a competitive baseline. Now the authors can argue what they provide on top of this design and how impactful it is.", "If the main concern is sample efficiency, another highly relevant vein of research is model-based reinforcement learning. The paper should have provided a clear differentiation from the state of the art in this field as well.", "Key citations to very closely related prior work are missing, for instance [1,2].", "I have a hard time buying the disclaimers provided for Table 2. What is wrong with reporting results on the evaluation phase? Is that not what actually counts?", "The appendix includes some material, such as critical experimental results, that are prerequisite for a reviewer to make a decision about its fate. In my view, this is not what the Appendices are meant for. As the reviewers do not have to read the Appendices at all, all material required for a decision has to be in the body text. Therefore I deem all such essential material as invalid and make my decision without investigating them."], "minor": "* The paper has excessively many typos and misspellings. 
This both gives negative signals about its level of maturity and calls for a detailed proofread.\n\n[1] R. Dearden et al., Bayesian Q-learning, AAAI, 1998\n\n[2] N. Tziortziotis et al., Linear Bayesian Reinforcement Learning, IJCAI, 2013\n\n[3] Y. Gal, Z. Ghahramani, Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning, ICML, 2016", "rating": "2: Strong rejection", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}", "{"comment": "## Overview\nThis paper appears to be a revised and updated submission from ICLR 2018: https://openreview.net/forum?id=Bk6qQGWRb&noteId=rJMiU16HM.\nAt a high level, this paper offers a particular adaptation of RLSVI to deep neural network models where a linear approximation is used on only the final layer.\n\nPerhaps the key addition to this paper comes from the novel theoretical results and, in particular, the regret bound of Theorem 1.\nThis result would be *highly* significant if correct, as some of the first polynomial regret bounds for value-based RL with function approximation.\nWe should note these would not be the first Bayesian regret bounds beyond tabula rasa, even for the PSRL algorithm (e.g. https://arxiv.org/abs/1406.1853, https://arxiv.org/abs/1403.3741, https://arxiv.org/abs/1709.04047)... but this doesn't mean it would not be a *very impressive* result.\n\nHowever, upon close examination I am highly skeptical of Theorem 1 (at least in its current presentation).\nAt the very least, the statement/proof as presented in this paper is not stated clearly enough for verification.\nIn fact, I actually believe there are several important missing pieces to this analysis that mean this approach is not correct...\n\n\n## Issues with \"PSRL\" analysis\n\nCritiquing specific parts of the analysis is difficult, since the analysis is not broken into separable pieces.\nI would encourage the authors to refactor the analysis into a more step-by-step argument, so that issues can be isolated more carefully... even if this results in a long appendix.", "specific_comments": "(a) - The theorem statement conflates PSRL (that samples from the exact posterior distribution) with RLSVI (that uses a Gaussian approximation to the TD error) with \"Bayesian DQN\" (which appears to be RLSVI when used with linear approximation). It's quite hard to know exactly what algorithm is meant by \"The PSRL on w\", since this is not defined.\nIf this algorithm refers to RLSVI we should note that samples from RLSVI are *not* drawn from the posterior, but instead use a Gaussian approximation to the Bellman residual. Specifically this argument is used at the bottom of p19 to set a term = 0 in expectation, but will not generally be true. Previous analyses of RLSVI take special care to address this mismatch but only in a tabular setting (e.g. https://arxiv.org/abs/1703.07608).\n\n(b) - In the analysis the coefficient vector \"w\" parameterizes not only optimal Q, but also the specific MDP model (reward R, transition T). However, parameterizing by w means the underlying model is not well-defined.\n\n(c) - BayesRegret takes expectation over all variables; therefore, there should be no delta in Theorem 1.\n\n(d) - The Theorem statement makes no mention of discounting, but the analysis is given in terms of gamma, which feels wrong.\n\n\nLooking through some of the appendix there are several other issues (e.g. 
Lemma 2 misses a square in the first term of the proof), but digging into these particular issues too deeply probably isn't very helpful at this stage.\\nIt seems like this was maybe a little rushed generally (\\\"we use w instead of w to avoid any possible confusion\\\" p.17).\\nI would suggest a clean rewrite of the Theorem statement + proof so that anyone can easily verify each step.\\n\\n## Summary\\n\\nMy belief is that there are some significant missing pieces to this analysis and it will not be possible to remedy them without significant additional tools/insight.\", \"title\": \"Significant concerns with the analytical regret bound\"}", "{\"title\": \"A post-submission typo in the second paragraph of the introduction\", \"comment\": \"Dear reviewers and readers\\n\\nWe, unfortunately, noticed we made a typo post-submission in the second paragraph of the introduction section. In particular, this typo has appeared in the starting sentence, \\n\\\"An alternative to optimism-under-uncertainty is Thompson Sampling (TS), a general sampling and randomizathttps://www.overleaf.com/1332425641pdxdghynmdhyion approach (in both frequentist and Bayesian settings) (Thompson, 1933).\\\" \\nwhich should be replaced with \\n\\\"An alternative to optimism-under-uncertainty is Thompson Sampling (TS), a general sampling and randomization approach (in both frequentist and Bayesian settings) (Thompson, 1933).\\\"\\n\\nWe have already addressed this issue in our draft and apologize for any inconvenience this causes. \\n\\nSincerely yours\\nAuthors\"}" ] }
Syx72jC9tm
Invariant and Equivariant Graph Networks
[ "Haggai Maron", "Heli Ben-Hamu", "Nadav Shamir", "Yaron Lipman" ]
Invariant and equivariant networks have been successfully used for learning images, sets, point clouds, and graphs. A basic challenge in developing such networks is finding the maximal collection of invariant and equivariant \emph{linear} layers. Although this question is answered for the first three examples (for popular transformations, at least), a full characterization of invariant and equivariant linear layers for graphs is not known. In this paper we provide a characterization of all permutation invariant and equivariant linear layers for (hyper-)graph data, and show that their dimension, in case of edge-value graph data, is $2$ and $15$, respectively. More generally, for graph data defined on $k$-tuples of nodes, the dimensions are the $k$-th and $2k$-th Bell numbers. Orthogonal bases for the layers are computed, including generalization to multi-graph data. The constant number of basis elements and their characteristics allow successfully applying the networks to different size graphs. From the theoretical point of view, our results generalize and unify recent advances in equivariant deep learning. In particular, we show that our model is capable of approximating any message passing neural network. Applying these new linear layers in a simple deep neural network framework is shown to achieve comparable results to state-of-the-art and to have better expressivity than previous invariant and equivariant bases.
[ "graph learning", "equivariance", "deep learning" ]
https://openreview.net/pdf?id=Syx72jC9tm
https://openreview.net/forum?id=Syx72jC9tm
ICLR.cc/2019/Conference
2019
{ "note_id": [ "BJl9NfdQgE", "H1lppkObRm", "HylTKNLx0m", "ByeAJ28Opm", "r1ehj9U_p7", "HJegE7RUTQ", "HkxIqcKZa7", "ByxBaTGy67", "SkgCXcb16X", "S1lQhtby6m", "SyeASuDp27", "rJgbJ0v52m" ], "note_type": [ "meta_review", "official_comment", "comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_review" ], "note_created": [ 1544942130176, 1542713284609, 1542640773293, 1542118373611, 1542118052361, 1542017832112, 1541671565669, 1541512636825, 1541507622231, 1541507499465, 1541400645943, 1541205465141 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper698/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper698/Authors" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper698/Authors" ], [ "ICLR.cc/2019/Conference/Paper698/Authors" ], [ "ICLR.cc/2019/Conference/Paper698/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper698/Authors" ], [ "ICLR.cc/2019/Conference/Paper698/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper698/Authors" ], [ "ICLR.cc/2019/Conference/Paper698/Authors" ], [ "ICLR.cc/2019/Conference/Paper698/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper698/AnonReviewer1" ] ], "structured_content_str": [ "{\"metareview\": \"The paper provides a comprehensive study and generalisations of previous results on linear permutation invariant and equivariant operators / layers for the case of hypergraph data on multiple node sets. Reviewers indicate that the paper makes a particularly interesting and important contribution, with applications to graphs and hyper-graphs, as demonstrated in experiments.\\n\\nA concern was raised that the paper could be overstating its scope. A point is that the model might not actually give a complete characterization, since the analysis considers permutation action only. The authors have rephrased the claim. Following comments of the reviewer, the authors have also revised the paper to include a discussion of how the model is capable of approximating message passing networks. \\n\\nTwo referees give the paper a strong support. One referee considers the paper ok, but not good enough. The authors have made convincing efforts to improve issues and address the concerns.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Description of linear permutation invariant and equivariant layers\"}", "{\"title\": \"Response\", \"comment\": \"Thank you for bringing to our attention these two recent works. We uploaded a revision. These two works construct graph features that seem to be very useful for graph classification but are not directly related to our approach. We have added them to our table and updated the text accordingly. Indeed these methods outperform our method on most, but not all datasets.\\nNote that we ran exactly the same 3-layer network on all datasets and still outperform other deep-learning based methods on the social network datasets.\"}", "{\"comment\": \"While the proposed solution was compared to a few algorithms, some recent state-of-the-art algorithms were omitted in the experiments sections, having a misleading impression on the performance of the author's algorithm. 
At least the following papers should be included and the differences with the authors' approach discussed.\\n\\n[1] Ivanov et al., Anonymous Walk Embeddings, ICML 2018\\n[2] Verma et al., Hunt For The Unique, Stable, Sparse And Fast Feature Learning On Graphs, NIPS 2017\", \"title\": \"Missing related work in experiments.\"}", "{\"title\": \"[Gilmer et al, 2017]\", \"comment\": \"Thank you for taking the time to write a response and bringing up Gilmer's message passing formulation.\\n\\n1) The Gilmer et al. 2017 bar: We have just uploaded a revised manuscript in which we prove that our model can approximate any message passing neural network of the general form introduced in [Gilmer et al. 2017]. \\n\\n2) \\u201cInteraction between sets\\u201d networks are not suitable for learning graphs: This claim is not entirely clear to us. The most popular way to represent a graph is by an affinity matrix that describes the interaction between every pair of nodes. From our point of view, our work establishes a natural connection between \\u201cinteraction between sets\\u201d networks and graph learning. Note that both approaches utilize the adjacency structure as well as node features, so the geometric structure of the graph is indeed visible and usable by our method.\\n\\n3) Graph learning = message passing: Although message passing is a prominent graph learning method it is not the only approach to learning graph data. We introduce a method, based on a generalization of \\u201cinteraction between sets\\u201d, that theoretically contains the message passing framework. In any case, we believe seeking new/different methods to learn graphs is a worthy research goal. \\n\\n4) Full characterization of linear layers: We have updated our contribution statement (in the introduction and abstract) to claim that we give a classification of *permutation* invariant/equivariant layers.\"}", "{\"title\": \"Revision uploaded\", \"comment\": \"We thank all the reviewers for their time and effort. We have uploaded a revised manuscript in which we have incorporated the suggestions from the reviews and comments.\\n\\nWe want to highlight one specific addition to the manuscript (Appendix 3) which is a proof that our model can approximate any message passing neural network that falls in the framework of Gilmer et al. [2017] (i.e., the bar set by Reviewer2 after a fruitful discussion). \\n\\nNote that these additions made us overflow by several lines which can be squeezed back if the reviewers require that.\"}", "{\"title\": \"Response to rebuttal\", \"comment\": \"Thanks for your response.\\n\\nI guess there is a distinction here that is blurred in the paper, but also, to some extent, in the literature. Learning from graphs and learning from subsets of {1,...,n} are not the same thing, even if both problems can be framed in terms of a hypergraph and both problems involve equivariance to the action of S_n. \\n\\nThe graph of a molecule or a social network is much more than just a subset of V\\\\times V: it has specific geometric structure and that's exactly what a G-NN is supposed to be able to latch onto. I was critical of the early Laplacian-based graph G-NN papers exactly because they only dealt with the spectrum of the Laplacian, which is an easy way out: of course it is invariant to permutations, but it doesn't tell you much about the geometry.\\n\\nFor this reason, being able to reproduce [Kipf & Welling, 2017] doesn't impress me so much. 
Despite your claim, I don't regard that as a message passing algorithm; in fact, it looks like the word \"message\" doesn't even appear in the paper. [Gilmer et al, 2017] or any of the papers following it would be the bar if we are really talking about message passing.\\n\\nTo refine my stance a little, I do understand that equivariance of tensors to the permutation action does come up in G-NNs. For example in [Kondor et al] at each vertex they take a higher order tensor and use contractions to reduce it to a number of lower order ones. Your results could be used to count how many different ways this can be done. But this is not exactly what you describe in the paper: the group is not S_n, but only a smaller symmetric group, etc.\\n\\nIn summary, my main problem with the paper is that it overstates its scope. This paper is not really about graph neural networks; it is more about the \"interactions between sets\" nets. And it doesn't give a full characterization of equivariant layers, only a \"more or less full characterization\" because the authors only consider the permutation action. The main result is Theorem 1, which is neat, but I wonder if it has the requisite technical depth or element of surprise to warrant a separate paper.\"}", "{\"title\": \"Addressing Reviewer 3 concerns\", \"comment\": \"We thank the reviewer for the positive comments. Below we address the main concerns.\\n--------------------------------------------------------------------------------------------------------------------------------\", \"q\": \"\\u201cSome of the benchmark datasets for the proposed task as well as some well-known methods (see Battaglia et al\\u201918 and references therein) are missing\\u201d.\", \"a\": \"We did our best to survey and compare to the most related works on the dataset collection introduced in [Yanardag & Vishwanathan 2015]. These datasets contain graphs from multiple origins, where some of them contain graphs of highly varying sizes (within the same dataset). In any case we will make the code available as soon as possible.\\n--------------------------------------------------------------------------------------------------------------------------------\"}", "{\"title\": \"Beautiful work -- problems with experiments\", \"review\": \"The paper presents a maximally expressive parameter-sharing scheme for hypergraphs, and in general when modeling the high-order interactions between elements of a set. This setting is further generalized to multiple sets. The paper shows that the free parameters in invariant and equivariant layers correspond to the different partitionings of the index-set of input and output tensors. Experimental results suggest that the proposed layer can outperform existing methods in supervised learning with graphs.\\n\\nThe paper presents a comprehensive generalization of a recently proposed model for interaction across sets, to the setting where some of these sets are identical. This is particularly useful and important due to its applications to graphs and hyper-graphs, as demonstrated in experiments.\\n\\nOverall, I enjoyed reading the paper. My only concern is the experiments:\\n\\n1) Some of the benchmark datasets for the proposed task as well as some well-known methods (see Battaglia et al\\u201918 and references therein) are missing.\\n\\n2) Applying the model of Hartford et al\\u201918 to problems where interacting sets are identical is similar to applying a convolution layer to a feature vector that is not equivariant to translation. (In both cases the equivariance group of the data is a strict subgroup of the equivariance group of the layer.) 
Do you agree that for this reason, all the experiments on the synthetic dataset are flawed?\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Addressing Reviewer 1 concerns\", \"comment\": \"We thank the reviewer for the detailed review. Below we address the main concerns.\\n\\n--------------------------------------------------------------------------------------------------------------------------------\", \"q\": \"Comparison to popular graph convolution methods (GCN [Kipf et al. 2016] / GraphSAGE [Hamilton et al. 2017] / Monti et al [2017] / etc.).\", \"a\": \"As discussed in our response to Reviewer 2, we will add a theoretical result that shows that our model is at least as powerful in terms of universality as [Kipf & Welling ICLR 2017].\"}", "{\"title\": \"Message passing using our method...\", \"comment\": \"We thank the reviewer for the detailed review. The reviewer's main concern was the practicality of the method, and its inability to model message passing. We respectfully disagree. Below we show that our model can simulate standard message passing architectures in a simple way, as well as answer other concerns.\\n\\n--------------------------------------------------------------------------------------------------------------------------------\", \"q\": \"\\u201dThe paper does not discuss what happens when the input tensor is symmetric.\\u201d\", \"a\": \"This question was addressed in Appendix 1. We add a relevant quote:\\n\\u201cWe note that in case the input matrix is symmetric, our basis reduces to 11 elements in the first layer. If we further assume the matrix has zero diagonal we get a 6 element basis in the first layer. In both cases our model is more expressive than the 4 element basis of Hartford et al. (2018) and as the output of the first layer (or other inner states) need not be symmetric nor have zero diagonal the deeper layers can potentially make good use of the full 15 element basis.\\u201d\\n--------------------------------------------------------------------------------------------------------------------------------\", \"proposition\": \"Our model can represent Kipf & Welling\\u2019s message passing [Kipf & Welling ICLR 2017] to arbitrary precision.\", \"proof\": \"Consider input vertex data X\\\\in R^{n x d} (n is the number of vertices in the graph, and d is the feature depth) and adjacency/affinity matrix A\\\\in R^{n x n} of the graph. In our setting we represent this data using a tensor Y\\\\in R^{n x n x d+1} where the first channel is the adjacency matrix A and the last d channels are diagonal matrices that hold X. We would like to approximate the function Y \\\\mapsto A*X. For simplicity we consider d=1 but the following generalizes readily to all d>1. A*X can be represented by first using our equivariant linear layer to replicate X values on the rows; denote this new matrix by Z \\\\in R^{n x n x 2}, where the first channel of Z is A and the second is the replication of X. Now multiplying entrywise the two feature channels of Z followed by summing the rows (another equivariant 2->1 operator) will provide A*X. Since pointwise product between features is not a part of our model we can approximate it to arbitrary precision using an MLP on the feature dimension that can be written as a series of linear equivariant operators and ReLUs. 
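For concreteness, here is a minimal numpy sketch of this construction (illustrative only: the variable names are ours, and it uses the exact pointwise product rather than its MLP approximation):\\n\\n```python\\nimport numpy as np\\n\\nn = 5\\nA = np.random.rand(n, n)   # adjacency/affinity matrix\\nX = np.random.rand(n, 1)   # vertex features, d=1\\n# equivariant op: replicate X across the rows, i.e. Z2[i, j] = X[j]\\nZ1, Z2 = A, np.tile(X.T, (n, 1))\\n# entrywise product of the two channels, then row-sum (an equivariant 2->1 op)\\nout = (Z1 * Z2).sum(axis=1, keepdims=True)\\nassert np.allclose(out, A @ X)\\n```\\n\\n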
(Note that MLP on the feature dimension is the way PointNet and DeepSets work.) QED\\n\\nWe will add this claim and proof to the paper. \\n\\nOne immediate corollary of this proposition is that in terms of universality our model is at least as powerful as Kipf & Welling message passing. \\n\\n--------------------------------------------------------------------------------------------------------------------------------\"}", "{\"title\": \"Nice combinatorics, but this is not what graph neural networks actually do\", \"review\": \"Given a graph G of n vertices, the activations at each level of a graph neural network (G-NN) for G can be arranged in an n^k tensor T for some k. A fundamental criterion is that this tensor must be equivariant to permutations of the vertices of G in the sense of each index of T being permuted simultaneously. \\n\\nThis paper enumerates the set of all linear maps that satisfy this criterion, i.e., all linear maps which (the authors claim) can serve as the analog of convolution in equivariant G-NNs. The authors find that for invariant neural networks such maps span a space of dimension just b(k), whereas for equivariant neural networks they span a space of dimension b(2k).\\n\\nThe proof of this result is simple, but elegant. It hinges on the fact that the set of tensor elements of the same equality type is both closed and transitive under the permutation action. Therefore, the dimensionality of the subspace in question is just the number of different equality types, i.e., partitions of either {1,...,k} or {1,...,2k}, depending on whether we are talking about invariance or equivariance.\\n\\nMy problem with the paper is that the authors' model of G-NNs doesn't actually map to what is used in practice or what is interesting and useful. Let me list my reservations in increasing order of significance.\\n\\n1. The authors claim that they give a ``full characterization'' of equivariant layers. This is not true. Equivariance means that there is *some* action of the symmetric group S_n on each layer, and wrt these actions the network is equivariant. Collecting all the activations of a given layer together into a single object L, this means that L is transformed according to some representation of S_n. Such a representation can always be reduced to a direct sum of the irreducible representations of S_n. The authors only consider the case when the representation is the k'th power of the permutation representation (technically called the defining representation of S_n). This corresponds to a specific choice of irreducibles and is not the most general case. In fact, this is not an unnatural choice, and all G-NNs that I know follow this route. Nonetheless, technically, saying that they consider all possible equivariant networks is not correct.\\n\\n2. The paper does not discuss what happens when the input tensor is symmetric. On the surface this might seem like a strength, since it just means that they can consider the more general case of undirected graphs (although they should really say so). In reality, when considering higher order activations it is very misleading because it leads to a massive overcounting of the dimensionality of the space of convolutions. In the case of k=2, for example, the dimensionality for undirected graphs is probably closer to 5 than 15 (I didn't count).\\n\\n3. 
Finally, and critically, in actual G-NNs, the aggregation operation in each layer is *not* linear, in the sense that it involves a product of the activations of the previous layer with the adjacency matrix (messages might be linear but they are only propagated along the edges of the graph). In most cases this is motivated by making some reference to the geometric meaning of convolution, the Weisfeiler-Lehman algorithm or message passing in graphical models. In any case, it is critical that the graph topology be reintroduced into the network at each layer. The algebraic way to see it is that each layer must mix the information from the vertices, edges, hyperedges, etc. The model in this paper could only aggregate edge information at the vertices. Vertex information could not be broadcast to neighboring vertices again. The elementary step of ``collecting vertex information from the neighbors but only the neighbors'' cannot be realized in this model.\\n\\nTherefore, I feel that the model used in this paper is rather uninteresting and irrelevant for practical purposes. If the authors disagree, I would encourage them to explicitly write down how they think the model can replicate one of the standard message passing networks. It is apparent from the 15 operations listed on page 11 that they have nothing to do with the graph topology at all.\", \"minor_gripes\": [\"I wouldn't call (3) and (4) fixed point equations; that term is usually used in dynamical systems. Here there is an entire subspace fixed by *all* permutations.\", \"Below (1), they probably mean that ``up to permutation vec(L)=vec(L^T)''.\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Very interesting paper\", \"review\": \"This paper explores maximally expressive linear layers for jointly exchangeable data and in doing so presents a surprisingly expressive model. I have given it a strong accept because the paper takes a very well-studied area (convolutions on graphs) and manages to find a far more expressive model (in terms of numbers of parameters) than what was previously known by carefully exploring the implications of the equivariance assumptions implied by graph data. The result is particularly interesting because the same question was asked about exchangeable matrices (instead of *jointly* exchangeable matrices) by Hartford et al. [2018] which led to a model with 4 bases instead of the 15 bases in this model, so the additional assumption of joint exchangeability (i.e. that any permutations applied to rows of a matrix must also be applied to columns - or equivalently, the indices of the rows and columns of a matrix refer to the same items / nodes) gives far more flexibility but without losing anything with respect to the Hartford et al result (because it can be recovered using a bipartite graph construction - described below). So we have a case where an additional assumption is both useful (in that it allows for the definition of a more flexible model) and benign (because it doesn't prevent the layer from being used on the data explored in Hartford et al.).\\n\\nI only have a couple of concerns:\\n\\n1 - I would have liked to see more discussion about why the two results differ to give readers intuition about where the extra flexibility comes from. 
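(As a quick, purely illustrative check of the counts, here is a snippet of ours that just verifies the Bell numbers b(2)=2 and b(4)=15 behind the 2 invariant and 15 equivariant basis elements, via the Bell triangle:)\\n\\n```python\\ndef bell(n):\\n    # Bell number B(n), i.e. the number of partitions of {1,...,n}\\n    row = [1]\\n    for _ in range(n - 1):\\n        nxt = [row[-1]]\\n        for x in row:\\n            nxt.append(nxt[-1] + x)\\n        row = nxt\\n    return row[-1]\\n\\nprint(bell(2), bell(4))  # prints: 2 15\\n```\\n\\n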
The additional parameters of this paper come from having parameters associated with the diagonal (intuitively: self edges get treated differently to other edges) and having parameters for the transpose of the matrix (intuitively: incoming edges are different to outgoing edges). Neither of these assumptions applies in the exchangeable setting (where the matrix may not be square so the diagonal and transpose can't be used). Because these differences aren't explained, the synthetic tasks in the experimental section make this approach look artificially good in comparison to Hartford et al. The tasks are explicitly designed to exploit these additional parameters - so framing the synthetic experiments as, \"here are some simple functions for which we would need the additional parameters that we define\" makes sense; but arguing that Hartford et al. \"fail approximating rather simple functions\" (page 7) is misleading because the functions are precisely the functions on which you would expect Hartford et al. to fail (because it's designed for a different setting). \\n2 - Those more familiar with the graph convolution literature will be more familiar with GCN [Kipf et al. 2016] / GraphSAGE [Hamilton et al. 2017] / Monti et al [2017] / etc. Most of these approaches are more restricted versions of this work / Hartford et al. so we wouldn't expect them to perform any differently from the Hartford et al. baseline on the synthetic dataset, but including them will strengthen the authors' argument in favour of the work. I would have also liked to see a comparison to these methods in the classification results.\\n3 - Appendix A - the 6 parameters for the symmetric case with zero diagonal reduce to the same 4 parameters from Hartford et al. if we constrain the diagonal to be zero in the output as well as the input. This is the case when you map an exchangeable matrix into a jointly exchangeable matrix by representing it as a bipartite graph [0, X; X^T, 0]. So the two results coincide for the exchangeable case. Might be worth pointing this out.\", \"rating\": \"9: Top 15% of accepted papers, strong accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
r1l73iRqKm
Wizard of Wikipedia: Knowledge-Powered Conversational Agents
[ "Emily Dinan", "Stephen Roller", "Kurt Shuster", "Angela Fan", "Michael Auli", "Jason Weston" ]
In open-domain dialogue, intelligent agents should exhibit the use of knowledge; however, there are few convincing demonstrations of this to date. The most popular sequence-to-sequence models typically “generate and hope” generic utterances that can be memorized in the weights of the model when mapping from input utterance(s) to output, rather than employing recalled knowledge as context. Use of knowledge has so far proved difficult, in part because of the lack of a supervised learning benchmark task which exhibits knowledgeable open dialogue with clear grounding. To that end we collect and release a large dataset with conversations directly grounded with knowledge retrieved from Wikipedia. We then design architectures capable of retrieving knowledge, reading and conditioning on it, and finally generating natural responses. Our best performing dialogue models are able to conduct knowledgeable discussions on open-domain topics as evaluated by automatic metrics and human evaluations, while our new benchmark allows for measuring further improvements in this important research direction.
[ "dialogue", "knowledge", "language", "conversation" ]
https://openreview.net/pdf?id=r1l73iRqKm
https://openreview.net/forum?id=r1l73iRqKm
ICLR.cc/2019/Conference
2019
{ "note_id": [ "rkx7FNXqC7", "BJggPNQ907", "S1gofVmqCQ", "HklZ0zm5RX", "S1l1hzXcCm", "SJliuf75Am", "HkgWNlm5R7", "ByxXmZmR3m", "SkgGSxr93m", "BJl_Z5Nc37", "S1xbgTO_2Q" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_review", "official_review", "official_review" ], "note_created": [ 1543283834713, 1543283800106, 1543283731481, 1543283401370, 1543283367307, 1543283315498, 1543282729431, 1541447962623, 1541193786104, 1541192191755, 1541078249302 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper697/Authors" ], [ "ICLR.cc/2019/Conference/Paper697/Authors" ], [ "ICLR.cc/2019/Conference/Paper697/Authors" ], [ "ICLR.cc/2019/Conference/Paper697/Authors" ], [ "ICLR.cc/2019/Conference/Paper697/Authors" ], [ "ICLR.cc/2019/Conference/Paper697/Authors" ], [ "ICLR.cc/2019/Conference/Paper697/Authors" ], [ "ICLR.cc/2019/Conference/Paper697/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper697/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper697/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper697/AnonReviewer1" ] ], "structured_content_str": [ "{\"title\": \"Response Part 3\", \"comment\": \"- In Table 5, human evaluators only measure the likeness of the dialog which seems very naive. Why don\\u2019t you measure whether the apprentice gets new knowledge of which s/he didn\\u2019t know before, whether the knowledge provided from the model was informative, whether the dialog was fun and engaging or more? The current human evaluation seems very weak though.\\n o Human evaluation is notoriously hard, and in fact many questions asked to humans either have very little inter-annotator agreement, e.g. when asking about specificity or background knowledge, (see https://arxiv.org/pdf/1708.07149.pdf, appendix A (https://arxiv.org/pdf/1708.07149.pdf)), or else are so correlated with each other they do not give new information and are hence not really connected to the intent of the original question, e.g. appropriateness v.s. topicality (see same cite). That paper recommends only asking humans for one score.\\n o Indeed, many papers with human evaluations only report one type of metric (usually, quality/appropriateness), for example the following highly cited ones: Vinyals & Le on the OpenSubtitles corpus (https://arxiv.org/pdf/1506.05869.pdf), Li et al. (both https://arxiv.org/abs/1606.01541 and https://arxiv.org/pdf/1510.03055.pdf) and Liu et al. (https://arxiv.org/pdf/1603.08023.pdf) . Yet other highly cited papers do not perform any human evaluations at all, e.g. Lowe et al. on the Ubuntu corpus https://arxiv.org/pdf/1506.08909.pdf and Serban et al. on MovieTriples (http://www.aaai.org/ocs/index.php/AAAI/AAAI16/paper/download/11957/12160). \\n o Note of the two recent works in knowledge grounding cited, Parthasarathi & Pineau (https://arxiv.org/abs/1809.05524) do not report human evaluation at all (only BLEU metrics), while Ghazvininejad et al. (https://arxiv.org/abs/1702.01932) have two metrics: informativeness and appropriateness. Overall, a fine-gained way of assessing conversations is still an unsolved problem. What we do do in this work, in addition to automatic and human evaluations, is an error analysis in Appendix C.\\n\\n- Training/trying two apprentice models: \\n o We have not tried to train an Apprentice model, but that is interesting future work that could be pursued. 
Those models would likely contain much less knowledge.\"}", "{\"title\": \"Response Part 2\", \"comment\": \"- \\u201cData collection part itself seems to be the biggest contribution to this work.\\u201d:\\n o To our knowledge there is little or no evidence of models working this well in open-domain chitchat before, and we believe that is the main contribution of this work. Our results stem from contributing models that can effectively leverage knowledge (Transformer Memory Networks) that are trained on a grounded supervised dataset \\u2014 i.e. the new Wizard of Wikipedia task. So you are right that the new grounded dataset is important, but note also that the baseline models from the literature we tried also failed to do this.\\n\\n- \\u201cFor example, what other interesting applications can you develop on this dataset?\\u201d: \\n o We believe the application as described is the application of this dataset. However, if in future work other researchers find other uses for it, then that is great as well. But we are focused on the task of open-domain knowledgeable conversations.\\n\\n- \\u201cSome questions on data collection\\u201d: \\n o We provided bonuses to high quality annotators manually. We discarded poor quality conversations through a mixture of manual inspection and automatic tests where external knowledge was not used in at least 2 Wizard turns (which was determined via at least 3 words of non-stop word overlap between checked sentence and Wizard message), and subsequently did not allow those workers to have further conversations. There were also a few annotators whom others would individually report for various reasons. Finally, we also implemented an offensive language detection system to auto-reject/block workers who used such language.\\n\\n- A question on the model: compared to previous works such as (Zhang et al., ACL18), the proposed model's only changes seem to be the replacement with a Transformer encoder and a loss term for knowledge selection. Have you tried another way of dealing with the knowledge part? For example, a ranking loss might be better than the attention.\\n o Yes, we believe the Transformer encoder is a better choice than a bag of words used in Zhang et al., 2018; we do compare explicitly to a bag of words encoder in Tables 2 and 3. We did not experiment with other alternative attention or ranking mechanisms for the knowledge choices, but there are many avenues to explore in future work.\\n\\n- Questions on the Experiment section. Any experiment to show the effect of different \\\\lambda values in the loss of the generative model? \\n o We treat lambda as a hyperparameter and tune on the validation set using grid search. An informal trial-and-error search showed values closer to 1 worked better. During the final tuning, we tried values {0.8, 0.9, 0.95}.\\n\\n- When you evaluate the generative model, have you also tried other automatic metrics such as BLEU instead of only PPL and Unigram-F1? For this task, the possible response grounded by the topic+knowledge might be too diverse to measure though. \\n o We experimented with BLEU-4 as an automatic metric, but concluded it was a poor choice which did not reward diverse responses, particularly since responses usually do not have strong n-gram alignments like in machine translation. 
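For reference, the unigram F1 we report is plain word-overlap F1; here is a minimal sketch of the computation, assuming the standard token-level precision/recall definition (the function name is ours):\\n\\n```python\\nfrom collections import Counter\\n\\ndef unigram_f1(pred, gold):\\n    # word-overlap F1 between a predicted and a gold utterance\\n    p, g = pred.lower().split(), gold.lower().split()\\n    overlap = sum((Counter(p) & Counter(g)).values())\\n    if overlap == 0:\\n        return 0.0\\n    prec, rec = overlap / len(p), overlap / len(g)\\n    return 2 * prec * rec / (prec + rec)\\n```\\n\\n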
We found BLEU was unable to distinguish strong models from weak models in a way that corresponded to our own interactions with the models, and as such, omitted these results from the paper.\\n\\n- \\u201cCould you possibly add some constraints to the annotators to do some clear tasks over the dialog so you can systematically evaluate the dialog w.r.t the constraint?\\u201d: \\n o Indeed, our task _is_ a chitchat task, in line with other chitchat tasks. Your goal-oriented proposal (which is somewhat open-ended, so would have to be nailed down) is beyond the scope of this work; however we think it is an interesting & exciting extension we hope others could try to pursue in the future in order to link chitchat and goal-oriented dialogue together.\"}", "{\"title\": \"Response\", \"comment\": \"Thank you for your review and detailed feedback; we appreciate the constructive comments. We apologize if our answer is long, but you had a lot of questions! We have tried to answer them all and make necessary changes to the paper.\\n\\n- Real application: \\n o This task is not meant to be solely a diagnostic dataset, but a basis for a knowledgeable conversational agent that can talk about any knowledge that is in Wikipedia. We are interested in this task, because an agent that is both knowledgeable and can converse with humans in an engaging way is one of the goals of AI. A successful system could engage with real people (not just paid crowdworkers). In our work, the goal is to chat freely about the topic, i.e. a chitchat task. No, we do not aim to make an educational tool as you mention, although others could use our work as a pre-training for such a task perhaps. However, we respectfully disagree that our models are \\u201ca simple knowledge retrieval model given the dialog context\\u201d. Please see e.g. Figures 2 and 4, which show that in the best cases, where our modeling works very well (particularly the E-book, toga party and Arnold Schwarzenegger examples), the agent can be very conversationally engaging \\u2014 it both uses knowledge and also clearly replies to and follows the conversation of the human partner, producing engaging conversations as measured in human evaluations. \\n\\n- Test-bed for state-of-the-art dialogue models: \\n o Separately, our task is also a challenging setup to develop models that can actually talk in a knowledgeable way to humans. They must have a memory (and be able to retrieve knowledge from it), to be able to select that knowledge and converse convincingly with respect to the dialogue context. This combines a lot of the current research threads into a single challenging task where grounded knowledge can clearly be leveraged, due to the way the data was collected.\\n\\n- \\u201cNo explicit goal of a dialog makes the chat divergent and open-ended\\u201d: \\n o Yes! This is one of the challenges of real dialogue, and of our dataset as well. Because our setup involves in-the-loop knowledge retrieval at every dialogue turn, during both data collection (and for models working on our task), the human Wizard is able to ground their conversation with knowledge from Wikipedia. That is, if they start talking about cheese, those topics will appear from the retrieval over Wikipedia, but if they switch to Michael Jackson, that will appear too. They are not locked into the original topic, just as in a natural conversation. 
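To make this in-the-loop retrieval concrete, here is a toy sketch (the TF-IDF retriever and passages below are our illustrative stand-ins, not necessarily the exact system used):\\n\\n```python\\nimport numpy as np\\nfrom sklearn.feature_extraction.text import TfidfVectorizer\\n\\n# toy passage pool standing in for Wikipedia\\npassages = ['Cheese is a dairy product made from milk.',\\n            'Michael Jackson was an American singer and songwriter.']\\nvec = TfidfVectorizer().fit(passages)\\nP = vec.transform(passages)\\n\\ndef retrieve(last_turns, k=1):\\n    # score passages against the most recent dialogue turns\\n    q = vec.transform([' '.join(last_turns)])\\n    scores = (P @ q.T).toarray().ravel()\\n    return [passages[i] for i in np.argsort(-scores)[:k]]\\n\\nprint(retrieve(['I love cheese!', 'Me too, especially cheddar.']))\\n```\\n\\n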
This is what can make our models more readily useful for real chat in an application.\\n\\n- Additional details about the Apprentice: \\n o The Apprentice is a completely unconstrained human, playing the role of a curious learner, eager to chat. Their stated goal is to go into depth about a chosen topic that interests them or their partner, while keeping the conversation engaging and fun. We observed Apprentices saying statements, asking questions and answering questions, as shown in the examples in Appendix A.2. Assuming that a question contains a question mark or begins with 'how', 'why', 'who', 'where', 'what' or 'when', in the dataset Apprentices ask questions in 13.9% of training set utterances, and answer questions (i.e., the Wizard has asked a question) 39.5% of the time, while saying new statements (neither asking nor answering a question) 49.3% of the time. (Note those percentages don't add up to 100 because a question may be answered with another question.) That is, overall, the Apprentice maintains a balanced set of dialogue acts. We made this clearer in the main text, and added details to the appendix.\\n\\n- \\u201cDo you have any post analysis on the types of responses from the apprentices so highlighting utilities of the dataset in a real application?\\u201d: \\n o Please see the answer above which is now added to the paper. We also note that we do provide an analysis of our models in Appendix C, which involves models talking to human Apprentices.\\n o The dataset statistics are in Table 1, examples from human-human conversations are in Fig 3 and example dialogues of different models are in Figures 2, 4 & 5. There is unfortunately little room left in the main paper for more, hence they are in the appendix. We felt it was important to highlight the successes of the models, as to our knowledge there is scant evidence of models working this well in open-domain chitchat before. Hence, the majority of our analysis has been on the modeling side (see Appendix C).\"}", "{\"title\": \"Response Part 3\", \"comment\": \"- Section 5.3: as stated above, the human evaluation is a little bit underwhelming, both in terms of setup and results. I'd expect a more fine-grained way of assessing conversations by humans, and also an explanation of why the retrieval transformer without knowledge was assessed as being on par with the retrieval transformer memnet:\\n o Many papers with human evaluations only report one type of metric (quality/appropriateness), for example the following highly cited ones: Vinyals & Le on the OpenSubtitles corpus (https://arxiv.org/pdf/1506.05869.pdf), Li et al. (both https://arxiv.org/abs/1606.01541 and https://arxiv.org/pdf/1510.03055.pdf) and Liu et al. (https://arxiv.org/pdf/1603.08023.pdf). Other highly cited papers do not perform any human evaluations at all, e.g. Lowe et al. on the Ubuntu corpus https://arxiv.org/pdf/1506.08909.pdf and Serban et al. on MovieTriples (http://www.aaai.org/ocs/index.php/AAAI/AAAI16/paper/download/11957/12160). Note that of the two recent works on knowledge grounding cited, Parthasarathi & Pineau (https://arxiv.org/abs/1809.05524) do not report human evaluation at all (only BLEU metrics), while Ghazvininejad et al. (https://arxiv.org/abs/1702.01932) have two metrics: informativeness and appropriateness. Overall, a fine-grained way of assessing conversations is still an unsolved problem. What we do do in this work, in addition to automatic and human evaluations, is an error analysis in Appendix C. \\n o For the retrieval models (memory vs. 
no memory), note that our goal is to obtain both an engaging and knowledgeable conversational agent. It is possible for a model to display a large amount of knowledge but to not be engaging at all, e.g. copy-paste of sentences from Wikipedia. In our work we thus settled on two metrics for online human evaluation: engagingness and knowledge (Wiki F1) metrics, where we assume both should be maximized. In addition, we report a set of offline automatic evaluation metrics. We have now added text explaining this.\\n\\n- Section 5.3: I assume higher=better for the human scores? This should be made explicit:\\n o Correct. We have updated the paper.\\n\\n- \\u201cHave others used the F1 overlap score? If so, cite.\\u201d \\n o Not that we know of. We made this clearer in the paper.\\n\\n- Section 5.3: I don't understand the argument that the human evaluation shows that humans prefer more natural responses. How does it show that? \\n o This is merely an observation that retrieval models tend to produce more \\u201cnatural\\u201d responses\\u2014 in the sense that they are retrieving from a set of human utterances\\u2014 and for that reason humans can prefer these models to generative ones, which often produce short, generic, and repetitive responses (see https://arxiv.org/abs/1611.06216 and https://arxiv.org/pdf/1801.07243.pdf). Despite the fact that retrieval models sometimes produce erroneous or out-of-context responses, humans may rate these retrieval models more highly as they are simply easier to read. Nevertheless, we have removed this comment as it is hard to tease apart with complete certainty the clear reasons why one model is preferred over another here. We do however provide a detailed error analysis in Appendix C. \\n\\n- Section 5.3: The Wiki F1 score is kind of interesting because it shows to what degree the model uses knowledge. But the side-by-side comparison with the human scores shows that humans don't necessarily prefer chatbot models that use a lot of knowledge. I'd expect this to be discussed, and suggestions for future work to be made accordingly:\\n o Our goal is to obtain both an engaging and knowledgeable conversational agent. It is possible for a model to display a large amount of knowledge but to not be engaging at all, e.g. copy-paste of sentences from Wikipedia. In our work we thus settled on two metrics for online human evaluation: engagingness and knowledge (Wiki F1) metrics, where we assume both should be maximized. In addition, we report a set of offline automatic evaluation metrics. Like in the entire domain of dialogue (not just knowledge-grounded models), evaluation is unfortunately still an active area of research. We have added comments explaining this, as you suggest.\\n\\n- Section 6: The paper ends a bit abruptly. It'd be nice to suggest future areas of improvement:\\n o Thank you, we have enhanced our conclusion with future work.\"}", "{\"title\": \"Response Part 2\", \"comment\": \"- Section 4.2: did you run experiments for BPE encoding? Would be good to see as this is a bit of a non-standard choice:\\n o We found modeling word piece tokens eased generation difficulty as it reduces the vocabulary size and quantity of rare words. It has been widely used in other sequence-to-sequence modeling tasks, such as machine translation and summarization. In datasets based on Wikipedia, there are often large quantities of rare words. 
Previous work has found BPE tokenization improves the model's ability to copy rare words, particularly entities (https://arxiv.org/abs/1711.05217). In early iterations of our models, we found that models without BPE tokenization could not produce UNK tokens. We contemplated implementing a copy-pointer mechanism (See et al., 2017), but found that BPE adequately addressed this problem for our purposes. We would be excited to see the effect of a copy mechanism in future work. Our new task and dataset clearly leave many avenues of research still open.\n\n- Section 4.2: it would be good to explain the Cer et al. 2018 method directly in the paper:\n o Cer et al. 2018 propose using sum(vectors)/sqrt(sentence length), instead of the mean, sum(vectors)/sentence length, in order to balance short and long sentences. We added this clarification to the paper.\n\n- Section 4.2: is there a reference for knowledge dropout? Also, it would be good to show ablation results for this:\n o To our knowledge, knowledge dropout is unique to our paper, but it is similar to other techniques like token dropout (see e.g. https://arxiv.org/pdf/1709.03856.pdf). Table 4 contains a comparison with and without knowledge dropout; we emphasized this ablation in the text, and added a reference.\n\n- Section 5.1: why did you choose to pre-train on the Reddit data? There should be some more in-depth description of the Reddit dataset to motivate this choice:\n o Mazare et al. (https://arxiv.org/pdf/1809.01984.pdf) found success with pretraining on a large-scale dataset of conversations with personas extracted from Reddit dumps. They show that pre-training on this Reddit data and fine-tuning on the PersonaChat dataset improved model accuracy by over 18%. We use the same procedure described in that paper.\n\n- Section 5.1: what is the setup you use for multi-task learning on SQuAD? Is it just a hard parameter sharing model, or?:\n o We multi-task with SQuAD by formulating the SQuAD task as a ranking one: namely, the model is tasked with finding the sentence(s) in the context paragraph that contain the correct answer. In this way we multi-task in the usual sense - by alternating training examples between the Wizard task and this re-formulated SQuAD task. This made minimal impact, but we hope future work will explore the relationship between knowledge grounded dialogue and QA tasks, which we have now added to the future work in the conclusion.\"}", "{\"title\": \"Response\", \"comment\": \"Thank you for your review and detailed feedback. We apologize if our answer is long, but you had a lot of questions! We have tried to answer them all and make necessary changes to the paper. Thank you for your constructive comments.\n\n- Missing reference for goal-oriented dialogue datasets: Wen et al. 2017, A Network-based End-to-End Trainable Task-oriented Dialogue System, https://arxiv.org/abs/1604.04562:\n o Thank you, we have added this citation.\n\n- How does the proposed dataset differ from the Reddit and Wikipedia datasets discussed in the last paragraph of the related work section? This should be explained:\n o Compared to the related datasets we describe in Section 2, our dataset provides specific and high-quality grounding, as we ask the Wizard to author dialogue based on the given knowledge (so we know what to ground to later when training models). Further, we ask the Wizard to select which sentence the knowledge is from, giving even more fine-grained information. 
In those existing tasks the data used to ground was not accessible during dialogue collection, and thus may or may not be related. We believe this is why our task can lead to more successful models. Moreover, our task provides easier analysis, e.g. we can measure knowledge selection metrics. We have clarified this in the text.\n\n- What is the maximum number of turns:\n o There is no maximum number of turns; the two human speakers can continue to speak (but the crowdworker pay is fixed, so typically they only do this when they are really enjoying it). The maximum conversation length in the dataset is 23 utterances. This has been clarified in the paper.\n\n- Page 3, paragraph \"Knowledge Retrieval\": how were the top 7 articles and first 10 sentences choices made? This seems arbitrary. Also, why wasn't the whole text used:\n o In order to keep crowdworkers from being overwhelmed, we chose to expose the worker to no more than 15 different Wikipedia articles at once (7 based on the Wizard's previous message, 7 based on the Apprentice's previous message, and 1 for the chosen topic); any more was too much work to annotate. The first ten sentences translated to roughly the first and second paragraphs of the Wikipedia topic article, which we felt would be ample information for the conversations. We did initially test varying numbers of sentences and articles, and we ultimately settled on these choices as they struck the best balance between keeping a conversation moving and ensuring the Wizard had enough information to use in a response. Note this does not necessarily prevent a model from using more; this was simply what was shown to crowdworkers at training time.\n\n- Page 3, paragraph \"Knowledge Selection and Response Generation\": how do you deal with co-reference problems if you only ever select one sentence at a time? The same goes for the \"Knowledge Attention\" model described in Section 4:\n o We do not handle coreference in any special way, but it is part of the task. Annotators can read the sentences surrounding the one they click on, so they may make use of them. Further, the crowdworkers and the model were both provided the titles of the Wikipedia articles as well as their content for each of the potential knowledge sentences. We observed that our generator model did learn to substitute this title in place of pronouns in certain cases. Making use of further context is not addressed in our particular models, but could be in future work.\n\n- Page 3, paragraph \"Knowledge Selection and Response Generation\": how often do annotators choose \"no sentence selected\"? It would be interesting to see more such statistics about the dataset:\n o The Wizards choose \u201cno sentence selected\u201d around 6.2% of the time. We have provided additional details about the human annotation interface, topic selection, and information about the data in the Appendix.\"}", "{\"title\": \"Response\", \"comment\": \"Thank you for your review and detailed feedback. We address your 3 points below:\n\n- \u201cit is pretty uncommon for a human to ground his/her every sentence on external knowledge\u201d: \n o In fact in our task, at any time in the dialogue the Wizard can choose \u201cno sentence used\u201d instead, as stated in the paper, but we have made this clearer (adding it in two places). 
We do agree that the Wizard task is not completely natural for a human, as we are trying to make the human conversationalist help train our bot maximally by grounding their sentences so that we can learn how to ground \u2014 for this reason we ask the human to read Wikipedia sentences and use them if possible rather than their own personal knowledge (which the model cannot retrieve). It is this setup that we believe makes our dataset so useful for knowledge grounded dialogue model training compared to existing datasets.\n\n- REINFORCE for learning the whole system: \n o Indeed, our methods do use at least two modules: one for knowledge retrieval (searches over all of Wikipedia), and one that combines knowledge selection and generation/retrieval. Training the parts of the system together, e.g. by REINFORCE, is a great idea for future work. We have added this to the future work section of the conclusion.\n\n- Noisy or adversarial apprentice: \n o Again, making the systems more robust is also a good direction for future work. Our task will be made publicly available for researchers to try such improved follow-up techniques.\"}", "{\"metareview\": \"The paper proposes a new dataset for studying knowledge grounded conversations, which would be very useful in advancing this field. In addition to the details of the dataset and its collection, the paper also includes a framework for advancing the research in this area, which includes evaluation methods and baselines with a relatively new approach.\nThe proposed approach for dialogue generation, however, is a simple extension of previous work by (Zhang et al.) to use transformers, and hence is not very interesting. The proposed approach is also not compared to many previous studies in the experimental results.\nOne of the reviewers highlighted the weakness of the human evaluation performed in the paper. Moving forward, it would be useful if further approaches were considered and included in the task evaluation. \n\nA poster presentation of the work would enable participants to ask detailed questions about the proposed dataset and evaluation, and hence may be more appropriate.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Interesting dataset and evaluation framework\"}", "{\"title\": \"Good work\", \"review\": \"This work proposes a brand new dataset to fill a vacancy in the current conversational AI community; specifically, the introduced dataset aims at providing a platform to perform large-scale knowledge-grounded chit-chat. Overall, the dataset is well-motivated and well-designed; its existence will potentially benefit the community and inspire more effective methods to leverage external knowledge in dialog systems. Besides, the paper also utilizes many trending models like Transformers, Memory Networks, etc. to ensure state-of-the-art performance. The clear structure and paragraphs also make the paper easy to read and follow.\n\nHere are some questions I want to raise about the paper:\n\n1. First of all, though the design of the conversation flow looks reasonable, it is pretty uncommon for a human to ground his/her every sentence on external knowledge. Therefore, it would probably be better to introduce some random ungrounded turns into the conversation to make it more humanlike.\n\n2. Secondly, the whole framework is based on many modules and every one of them is prone to error. 
I\\u2019m afraid that such cascaded errors will accumulate and lead to compromised performance in the end. Have you thought about using REINFORCE\\nalgorithm to alleviate this issue?\\n\\n3. Finally, it would be better to introduce some noisy or adversarial apprentice to raise unrelated turns and see how the system react. Have you thought about how to deal with such cases?\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"interesting task and dataset\", \"review\": \"This paper collects a new annotated dataset for knowledge grounded dialog task. The proposed models combine two recent neural networks, Memory Net and Transformer, for the purpose of the task. I highly appreciate the efforts to collect such a precious dialog dataset for the community. Also, the setup in data collection actually narrows down the scope of chitchat dialog into a specific topic by grounding it to a set of knowledge.\\n\\nHere are summaries of my concerns and questions about the paper. \\n\\n# applicability of the knowledgeable bot\\nWhat is the basic motivation of this work? Once you develop a chatbot that can produce a response grounded by knowledge, how could it be applied to real-world applications? Are you trying to teach a student who is looking for more knowledge about a topic? If so, you should be more careful about what knowledge the student (or apprentice in the paper) knows or don\\u2019t know about the topic and how their knowledge models dynamically change over the chat. Otherwise, the proposed model seems a simple knowledge retrieval model given the dialog context. Would you please provide motivations of the work?\\n\\n# No explicit goal of a dialog makes the chat divergent and open-ended\\nWithout a specific goal given to the annotators or a restriction in the instruction, a dialog in the current setting might diverge beyond the context. For example, if an apprentice says about her/his personal opinion about the topic (e.g., I hate the Gouda cheese) or past experience (e.g., I went to a music festival by Michael Jackson 23 years ago), then how do you control the chat between two annotators or how do you train a model not to pay much attention on out-of-topic utterances? \\n\\n# Lack of further analysis of the dataset\\nData collection part itself seems to be the biggest contribution to this work. Why don\\u2019t you bring one of real dialog example in Figure 3 to the main paper and say more about it? For example, what other interesting applications can you develop on this dataset? \\n\\nCompared to the Wizard, the role of apprentice seems unclear to me. I found from the examples in Figure 3 that most of the apprentices\\u2019 responses are a follow-up question about the knowledge, a personal agreement or feeling or their preference. Do you have any post analysis on the types of responses from the apprentices so highlighting utilities of the dataset in a real application? \\n\\n# Some questions on data collection\\nDo you have any incentive mechanism to make annotators more engage in the dialog?\\nDid you filter out some bad dialogs? Then, how did you measure the quality of a dialog? \\nHow do you penalize bad annotators that often make aggressive words or don\\u2019t follow the instruction you set up? 
\n\n# A question on the model\nCompared to previous works such as (Zhang et al., ACL18), the proposed model seems to differ only in the replacement with a Transformer encoder and a loss term for knowledge selection. Have you tried another way of dealing with the knowledge part? For example, a ranking loss might be better than the attention. \n\n# Questions on the Experiment section\nAny experiment to show the effect of different \\lambda values in the loss of the generative model? \n\nWhen you evaluate the generative model, have you also tried other automatic metrics such as BLEU instead of only PPL and Unigram-F1? For this task, the possible responses grounded by the topic+knowledge might be too diverse to measure, though. Could you possibly add some constraints to the annotators to do some clear tasks over the dialog so you can systematically evaluate the dialog w.r.t. the constraint? Otherwise, evaluation of this task seems to be mostly the same as for chitchat systems.\n\nIn Table 5, human evaluators only measure the likeness of the dialog, which seems very naive. Why don\u2019t you measure whether the apprentice gains new knowledge which s/he didn\u2019t know before, whether the knowledge provided by the model was informative, whether the dialog was fun and engaging, or more? The current human evaluation seems very weak though.\n\nThis might be an auxiliary question: have you tried to train the model for the apprentice and make two models chat with each other? How does the chat look then?\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Interesting new dataset\", \"review\": \"This paper introduces a new dataset and method for chatbots. In contrast to previous work, this paper specifically probes how well a dialogue system can use external unstructured knowledge.\n\nQuality: Overall, this is a very high-quality paper. The dataset is developed well, the experimental setup is well thought-through and the authors perform many ablation studies to test different model variants. The main criticism I have would be that the human evaluation is rather simple (rating 1-5); I would have expected more fine-grained categories, especially ones that relate to how much knowledge the system uses (I appreciate the \"Wiki F1\" metric, but that is an automatic metric). As it is, the human evaluation shows that most of their contributions are not appreciated by human annotators. Further, the paper ends a bit abruptly; I would have expected a more in-depth discussion of next steps.\n\nClarity: The description of the work is clear in most places. I particularly like the abstract and introduction, which set up the rest of the paper nicely. In some places, perhaps due to space restrictions, method descriptions are a bit too short.\n\nOriginality: The paper is fairly original, especially the aspect about specifically using external knowledge. The authors could have been more clear on how the work differs from other work on non-goal-directed dialogue though (last paragraph of related work section).\n\nSignificance: The dataset is really well-developed, hence I believe many working in the dialogue systems community will re-use the developed benchmark and build on this paper.\n\nMore detailed comments:\n- Missing reference for goal-oriented dialogue datasets: Wen et al. 
2017, A Network-based End-to-End Trainable Task-oriented Dialogue System, https://arxiv.org/abs/1604.04562\n- How does the proposed dataset differ from the Reddit and Wikipedia datasets discussed in the last paragraph of the related work section? This should be explained.\n- Page 3, paragraph \"Conversational Flow\": what is the maximum number of turns, if the minimum is 5?\n- Page 3, paragraph \"Knowledge Retrieval\": how were the choices of the top 7 articles and first 10 sentences made? This seems arbitrary. Also, why wasn't the whole text used?\n- Page 3, paragraph \"Knowledge Selection and Response Generation\": how do you deal with co-reference problems if you only ever select one sentence at a time? The same goes for the \"Knowledge Attention\" model described in Section 4.\n- Page 3, paragraph \"Knowledge Selection and Response Generation\": how often do annotators choose \"no sentence selected\"? It would be interesting to see more such statistics about the dataset.\n- Section 4.2: did you run experiments for BPE encoding? Would be good to see as this is a bit of a non-standard choice.\n- Section 4.2: it would be good to explain the Cer et al. 2018 method directly in the paper.\n- Section 4.2: is there a reference for knowledge dropout? Also, it would be good to show ablation results for this.\n- Section 5.1: why did you choose to pre-train on the Reddit data? There should be some more in-depth description of the Reddit dataset to motivate this choice.\n- Section 5.1: what is the setup you use for multi-task learning on SQuAD? Is it just a hard parameter sharing model, or?\n- Section 5.3: as stated above, the human evaluation is a little bit underwhelming, both in terms of setup and results. I'd expect a more fine-grained way of assessing conversations by humans, and also an explanation of why the retrieval transformer without knowledge was assessed as being on par with the retrieval transformer memnet.\n- Section 5.3: I assume higher=better for the human scores? This should be made explicit.\n- Section 5.3: Have others used the \"F1 overlap score\"? If so, cite.\n- Section 5.3: I don't understand the argument that the human evaluation shows that humans prefer more natural responses. How does it show that?\n- Section 5.3: The Wiki F1 score is kind of interesting because it shows to what degree the model uses knowledge. But the side-by-side comparison with the human scores shows that humans don't necessarily prefer chatbot models that use a lot of knowledge. I'd expect this to be discussed, and suggestions for future work to be made accordingly.\n- Section 6: The paper ends a bit abruptly. It'd be nice to suggest future areas of improvement.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
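Aside: the "Wiki F1" / unigram-F1 overlap metric that the reviews and responses above keep returning to is never spelled out in the thread. Below is a minimal sketch of how such a token-overlap F1 is typically computed; the lowercasing and whitespace tokenization are illustrative assumptions, not details confirmed by the authors.

```python
from collections import Counter

def unigram_f1(response: str, reference: str) -> float:
    """Unigram F1 overlap between a model response and a reference text,
    e.g. the Wikipedia knowledge the response is grounded on ("Wiki F1")."""
    pred = Counter(response.lower().split())
    gold = Counter(reference.lower().split())
    overlap = sum((pred & gold).values())  # multiset intersection size
    if overlap == 0:
        return 0.0
    precision = overlap / sum(pred.values())
    recall = overlap / sum(gold.values())
    return 2 * precision * recall / (precision + recall)

# A response that copies knowledge verbatim scores a high Wiki F1,
# which is why Wiki F1 alone cannot capture engagingness.
print(unigram_f1("gouda is a mild yellow dutch cheese",
                 "Gouda is a mild yellow cheese from the Netherlands"))
```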
SkeQniAqK7
Combining Learned Representations for Combinatorial Optimization
[ "Saavan Patel", "Sayeef Salahuddin" ]
We propose a new approach to combine Restricted Boltzmann Machines (RBMs) that can be used to solve combinatorial optimization problems. This allows synthesis of larger models from smaller RBMs that have been pretrained, thus effectively bypassing the problem of learning in large RBMs, and creating a system able to model a large, complex multi-modal space. We validate this approach by using learned representations to create ``invertible boolean logic'', where we can use Markov chain Monte Carlo (MCMC) approaches to find the solution to large scale boolean satisfiability problems and show viability towards other combinatorial optimization problems. Using this method, we are able to solve 64 bit addition based problems, as well as factorize 16 bit numbers. We find that these combined representations can provide a more accurate result for the same sample size as compared to a fully trained model.
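Aside: to make the abstract's "synthesis of larger models from smaller RBMs" concrete, one plausible composition rule is to concatenate the hidden units of the pretrained RBMs, block-structure the weights, and add the biases of visible units the two machines share (e.g. a gate output feeding another gate's input). The paper only illustrates its combination by example (its Figure 1), so the bookkeeping below is an assumption for illustration, not the authors' exact rule.

```python
import numpy as np

def merge_rbms(W_A, b_A, c_A, W_B, b_B, c_B, shared):
    """Compose two pretrained binary RBMs into one larger RBM.

    W_*: (n_visible, n_hidden) weights; b_*: visible biases; c_*: hidden biases.
    `shared` maps visible-unit indices of RBM B onto indices of RBM A.
    """
    nA = len(b_A)
    idx_B, extra = {}, 0
    for j in range(len(b_B)):          # place B's visible units
        if j in shared:
            idx_B[j] = shared[j]       # identify with an A unit
        else:
            idx_B[j] = nA + extra      # append as a new unit
            extra += 1
    hA, hB = W_A.shape[1], W_B.shape[1]
    W = np.zeros((nA + extra, hA + hB))
    b = np.zeros(nA + extra)
    W[:nA, :hA] = W_A
    b[:nA] += b_A
    for j, i in idx_B.items():
        W[i, hA:] = W_B[j]
        b[i] += b_B[j]                 # shared visible units sum their biases
    return W, b, np.concatenate([c_A, c_B])  # hidden units stay disjoint
```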
[ "Generative Models", "Restricted Boltzmann Machines", "Transfer Learning", "Compositional Learning" ]
https://openreview.net/pdf?id=SkeQniAqK7
https://openreview.net/forum?id=SkeQniAqK7
ICLR.cc/2019/Conference
2019
{ "note_id": [ "SJgWpEnyl4", "HJec4jK_R7", "HklrWoFdCX", "S1e9R9KOCQ", "SJee_5FOC7", "SklwBiVQaQ", "rker_F1q3X", "BylH3HnQnm" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544697016808, 1543179057997, 1543179004887, 1543178962411, 1543178856208, 1541782335336, 1541171564837, 1540765100582 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper696/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper696/Authors" ], [ "ICLR.cc/2019/Conference/Paper696/Authors" ], [ "ICLR.cc/2019/Conference/Paper696/Authors" ], [ "ICLR.cc/2019/Conference/Paper696/Authors" ], [ "ICLR.cc/2019/Conference/Paper696/AnonReviewer4" ], [ "ICLR.cc/2019/Conference/Paper696/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper696/AnonReviewer2" ] ], "structured_content_str": [ "{\"metareview\": \"Dear authors,\\n\\nThank you for submitting your work to ICLR. The original goal of using smaller models to train a bigger one is definitely interesting and has been the topic of a lot of works.\\n\\nHowever, the reviewers had two major complaints: the first one is about the clarity of the paper and the second one is about the significance of the tasks on which the algorith is tested. For the latter point, your rebuttal uses arguments which are little known in the ML community and so should be expanded in a future submission.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Lack of clarity and justification for the final task\"}", "{\"title\": \"Response to AnonReviewer2\", \"comment\": \"Thank you for your comments, we will be responding with specific comments to AnonReviewer3 here, and more general comments to the reviewer above.\", \"r2\": \"Overall, the paper seems to be a report consisting of a few interesting observations rather than introducing a solid and novel contribution with theoretical guarantees.\\n\\nIn regards to the lack of theoretical guarantees, we have shown that the equilibrium distribution is what we expect it to be, and mathematically have shown that the final distribution of interest has the mode we expect it to. It has been shown in many texts that Gibbs Sampling converges to this equilibrium distribution at a geometric rate in Markov Random Fields. Finding the exact convergence rate involves calculation of the eigenstructure of the markov chain transition matrix, which is in general computationally intractable for RBMs of moderate size [1]. Given this, we have added an extra theorem to show how the upper bounds on convergence rate changes as we merge RBMs, this can be seen in Section 3.1 on \\u201cConvergence Rate and MCMC\\u201d. We show that the rate of convergence of the RBM is geometric in the number of sampling steps, and that the combined RBM will have a convergence rate bounded by the sum of the convergence rates of the individual RBMs. \\n\\n If we want to have further theoretical guarantees, we have the ability to exactly set model parameters, as mentioned in section 3.2 to get the exact distribution of interest, and to combine those RBMs with directly calculated parameters. As mentioned in that section, this is not a data efficient, or computationally efficient method which is why we chose to not pursue it. \\n\\n[1] Pierre. Bremaud. Markov Chains: Gibbs Fields, Monte Carlo Simulation, and Queues, volume 1.Springer New York, 1999. 
ISBN 9781441931313.\"}", "{\"title\": \"Response to Reviewer\", \"comment\": \"Thank you for your comments; we will be responding with specific comments to AnonReviewer3 here, and more general comments to the reviewer above.\n\nR3: That learning simple functions and composing them to compute more complex functions would be more data efficient than directly learning the complex functions does not seem very surprising.\n\nWe agree that this method of composing simple functions to compute more complex ones is intuitive, and may not be very surprising, but we think that this helps data and model efficiency in a different manner than presented in previous papers. \n\nAs far as scaling up the tasks and problem sizes, we are showing a method of combination here, and are scaling up the problem sizes continuously. We believe this combination method could be used for other things, and have presented it here as a proof of concept rather than a definitive survey with all possible uses.\"}", "{\"title\": \"Reviewer Response\", \"comment\": \"Thank you for your comments; we will be responding with specific comments to AnonReviewer4 here, and more general comments to the reviewer above.\n\nR4: The way I see it, implementing reversible boolean logic circuits using RBMs is an artificial problem, and the key idea of the paper -- which I find interesting -- is that in some cases it appears to be possible to combine RBMs trained for sub-problems into larger RBMs without needing to fine-tune the model.\n\nWe also agree that there may be other applications of this type of merging of RBMs without further training, and we are working to look at those in greater detail. Invertible Boolean logic provides a good test bed for this idea, and as explained above, we do believe it has a very intimate relationship with Boolean satisfiability problems and other combinatorial optimization problems.\"}", "{\"title\": \"General Comments to Reviewers\", \"comment\": \"We thank the reviewers for their detailed and thoughtful reviews! Based on their feedback, we uploaded a revised version of the paper.\n\nWe will be referring to the reviewers AnonReviewer2, AnonReviewer3, and AnonReviewer4 as R2, R3, and R4 respectively, and attempting to answer some of the comments that all of the reviewers had here.\n\nR4: I have two issues with the chosen example: 1) the connection with combinatorial optimization is not clear to me, and 2) it\u2019s not very well explained.\nR3: \u201cThe term \"combinatorial optimization\" is used in a confusing way -- addition would not usually be called a combinatorial optimization problem.\u201d\nR2: \u201cThe term \"Combinatorial optimization\", which is used in the title and throughout the body of the paper, sounds a bit confusing to the reviewer. This term is typically used in other contexts.\u201d\n\nWe have also gone further in explaining our usage of the term \u201ccombinatorial optimization\u201d. We view our combination method with invertible Boolean logic as a method of solving the Boolean satisfiability problem, which is the classic example of an NP-Hard combinatorial optimization problem, and many combinatorial optimization problems can be reduced to it (as shown by Karp [2]). We have shown arithmetic and integer factorization as a further application of the invertible Boolean logic that inherently solves the Boolean satisfiability problem in its construction. 
This is further supported by showing the method of creating a full adder circuit through combinations of logic circuits. \n\nWe have revised the paper to further explain this concept, and refer the reviewers to Section 3, \u201cApproach\u201d. \n\nAll three reviewers also commented on our choice of invertible Boolean logic to validate this approach. We note that the integer factorization problem is in NP, and the Boolean satisfiability problem (which is solved within our invertible Boolean logic formulation) is also an NP-Complete problem. The reviewers also suggested using this approach on a more complex task such as TSP. We argue that integer factorization, invertible Boolean logic, and Boolean satisfiability can be harder in some ways than TSP, as integer factorization is an \u201call or nothing\u201d problem, where the solution is either completely correct or completely incorrect. Our choice of invertible Boolean logic was due to the simple and intuitive factorization of the problem into smaller subproblems. We also note (as mentioned above, and in the revision of the paper) that NP-Complete formulations of the TSP problem can be reduced to a Boolean satisfiability problem ([2, 3]). We have also suggested how we could directly implement a TSP problem in the RBM. \n\n[1] Pierre Bremaud. Markov Chains: Gibbs Fields, Monte Carlo Simulation, and Queues, volume 1. Springer New York, 1999. ISBN 9781441931313.\n\n[2] Karp, Richard M. \"Reducibility among combinatorial problems.\" Complexity of Computer Computations. Springer, Boston, MA, 1972. 85-103.\n\n[3] Stephen A. Cook. The complexity of theorem-proving procedures. In Proceedings of the Third Annual ACM Symposium on Theory of Computing, STOC \u201971, pp. 151\u2013158, New York, NY, USA, 1971. ACM. doi: 10.1145/800157.805047.\n\n[4] Jansen, Boris, and Kenji Nakayama. \"Neural networks following a binary approach applied to the integer prime-factorization problem.\" Neural Networks, 2005. IJCNN'05. Proceedings. 2005 IEEE International Joint Conference on. Vol. 4. IEEE, 2005.\"}", "{\"title\": \"Review\", \"review\": \"The paper proposes to combine several smaller, pretrained RBMs into a larger model as a way to solve combinatorial optimization problems. Results are presented on RBMs trained to implement binary addition, multiplication, and factorization, where the proposed approach is compared with the baseline of training a full model from scratch.\n\nI found the paper confusing at times. It is well-written from a syntactic and grammatical point of view, but some key concepts are stated without being explained, which gives the impression that the authors have a clear understanding of the material presented in the paper but communicate only part of the full picture to the reader.\n\nFor instance, there\u2019s a brief exposition of the connection between Boltzmann machines and combinatorial optimization problems: the latter is mapped onto the former by expressing constraints as a fixed set of Boltzmann machine weights and biases, and low-energy states (i.e. more optimal solutions) are found by sampling from the model, which involves no training. What\u2019s less clear to me is what kinds of combinatorial optimization problems can be mapped onto the RBM *training* problem. 
The paper states that the problem of training \\\"large modules\\\" is \\\"equivalent to solving the optimization problem\\\", but does not explain how.\\n\\nSimilarly, the paper mentions that the \\\"general approach to solving these combinatorial optimization problems is to recognize the atomic unit necessary to solve the problem\\\", but at that point the reader has no concrete example of what combinatorial optimization problem would be mapped onto training and inference in RBMS.\", \"a_concrete_example_is_provided_in_the_experiments_section\": \"the authors propose to implement invertible (reversible?) boolean logic circuits by combining smaller pre-trained RBMs which implement certain logical operations into larger circuits. I have two issues with the chosen example: 1) the connection with combinatorial optimization is not clear to me, and 2) it\\u2019s not very well explained. As far as I understand, these reversible boolean logic operations are expressed as sampling a subset of the RBM\\u2019s inputs conditioned on another subset of its inputs. An example is presented in Figure 3 but is not expanded upon in the main text. I\\u2019d like the authors to validate my understanding:\\n\\nAn RBM is trained to implement a complete binary adder circuit by having it model the joint distribution of the adder\\u2019s inputs and outputs [A, B, Cin, S, Cout] (A is the first input bit, B is the second input bit, Cin is the input carry bit, S is the output sum bit, and Cout is the output carry bit), where (I assume) the distribution over [A, B, Cin] is uniform, and where S and Cout follow deterministically from [A, B, Cin]. After training, the output of the circuit is computed from [A, B, Cin] by clamping [A, B, Cin] and sampling [S, Cout] given [A, B, Cin] using Gibbs sampling.\\n\\nThe alternative to this, which is examined in the paper, is to train individual XOR, AND, and OR gates in the same way and compose them into a complete binary adder circuit as prescribed by Section 3.\\n\\nI think the paper has the potential to be a lot more transparent to the reader in explaining these concepts, which would avoid them spending quite a bit of time inferring meaning from figures.\\n\\nI\\u2019m also confused by the presentation of the results. For instance, I don\\u2019t know what \\\"log\\\", \\\"FA1\\\", \\\"FA2\\\", etc. refer to in Figure 6. Also, Figure 6 is referenced in the text in the context of binary multiplication (\\\"[...] is able to outperform a multiplier created just by training, as can be seen in Figure 6\\\"), but presents results for addition and factorization only.\\n\\nThe way I see it, implementing reversible boolean logic circuits using RBMs is an artificial problem, and the key idea of the paper -- which I find interesting -- is that in some cases it appears to be possible to combine RBMs trained for sub-problems into larger RBMs without needing to fine-tune the model. I think there are interesting large-scale applications of this, such as building an autoregressive RBM for image generation by training a smaller RBM on a more restricted inpainting task. 
The connection to combinatorial optimization, however, is much less clear to me.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Need experiments on more challenging tasks\", \"review\": \"The paper proposes learning Restricted Boltzmann Machines for solving small computational tasks (e.g., 1-bit addition) and composing those RBMs to form a more complex computational module (e.g., 16-bit addition). The claim is that such an approach can be more data efficient than learning a single network to directly learn the more complex module. Results are shown for addition and factoring tasks.\n\n- The paper is somewhat easy to follow and the figures are helpful. But the overall organization and flow of ideas can be improved significantly.\n- The term \"combinatorial optimization\" is used in a confusing way -- addition would not usually be called a combinatorial optimization problem.\n- It would be good to understand what benefit the stochasticity of RBMs provides. How do deterministic neural networks perform on the addition and factoring tasks? The choice of RBMs is not motivated well, and without any comparisons to alternatives, it comes across as arbitrary.\n- That learning simple functions and composing them to compute more complex functions would be more data efficient than directly learning the complex functions does not seem very surprising. After all, the former approach gets a lot more knowledge about the target function built into it. It's good that the paper empirically confirms the intuition, but it doesn't feel like a significant contribution on its own.\n- The paper would be stronger if it included more complex tasks, e.g., TSP, and showed that the same ideas can be applied to improve the learning of a solver for such tasks. The current tasks and problem sizes are not very convincing, and the accuracy results are not very compelling.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"The novelty and supporting theory is not significant\", \"review\": \"The paper introduces a new approach to combine small RBMs that are pretrained in order to obtain a large RBM with good performance. This bypasses the need to train large RBMs by breaking them into smaller ones. The paper then provides experimental evidence by applying the method to \"invertible boolean logic\". MCMC is used to find the solution of the large RBM and compare it against the combined solutions of smaller RBMs.\n\n\nThe paper motivates the problem well; however, it is not well-written and at times it is hard to follow. The details of the approach are not entirely clear and no theoretical results are provided to support the approach. For instance, in the introduced approach, only an example of combination is provided in Figure 1. It is not clear how smaller RBMs (and their associated parameters) are combined to obtain the larger RBM model. From the experimental perspective, the experimental evidence on \"invertible boolean logic\" does not seem to be very convincing for validating the approach. Additionally, the details of the settings of the experiments are not fully discussed. For example, what are the atomic/smaller problems and associated RBMs? What is the larger problem and how is the corresponding RBM obtained? 
Overall, the paper seems to be a report consisting of a few interesting observations rather than introducing a solid and novel contribution with theoretical guarantees.\n\nRemark: The term \"Combinatorial optimization\", which is used in the title and throughout the body of the paper, sounds a bit confusing to the reviewer. This term is typically used in other contexts.\n\nTypos:\n** Page 2 -- Paragraph 2: \"Therefore, methods than can exploit...\"\n** Page 3 -- 2nd line of math: Super-scripts are missing for some entries of the matrices W^A and W^{A+B}\n** Page 5 -- Last paragraph: \"...merged logical units is more likly to get get stuck in a ...\"\n** Page 5 -- Last paragraph: \"...and combining their distributions using the mulistart heuristic...\"\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}" ] }
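Aside: Reviewer 4's reading of the sampling procedure above — clamp the adder inputs [A, B, Cin] and Gibbs-sample the outputs [S, Cout] — can be written out concretely. Below is a minimal numpy sketch using the standard sigmoid-Bernoulli block-Gibbs updates for a binary RBM; the interface is hypothetical and not the authors' code.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def clamped_gibbs(W, b, c, v_init, clamp_idx, clamp_val, n_steps=1000, rng=None):
    """Block-Gibbs sampling in a binary RBM with some visible units clamped.

    W: (n_visible, n_hidden) weights, b: visible biases, c: hidden biases.
    For the adder example, clamp_idx selects [A, B, Cin]; after burn-in the
    free units [S, Cout] are read off the visible state.
    """
    rng = rng or np.random.default_rng(0)
    v = v_init.astype(float).copy()
    v[clamp_idx] = clamp_val
    for _ in range(n_steps):
        h = (rng.random(W.shape[1]) < sigmoid(v @ W + c)).astype(float)
        v = (rng.random(W.shape[0]) < sigmoid(W @ h + b)).astype(float)
        v[clamp_idx] = clamp_val  # re-impose the clamped inputs each step
    return v
```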
HJlmhs05tm
EnGAN: Latent Space MCMC and Maximum Entropy Generators for Energy-based Models
[ "Rithesh Kumar", "Anirudh Goyal", "Aaron Courville", "Yoshua Bengio" ]
Unsupervised learning is about capturing dependencies between variables and is driven by the contrast between the probable vs improbable configurations of these variables, often either via a generative model which only samples probable ones or with an energy function (unnormalized log-density) which is low for probable ones and high for improbable ones. Here we consider learning both an energy function and an efficient approximate sampling mechanism for the corresponding distribution. Whereas the critic (or discriminator) in generative adversarial networks (GANs) learns to separate data and generator samples, introducing an entropy maximization regularizer on the generator can turn the interpretation of the critic into an energy function, which separates the training distribution from everything else, and thus can be used for tasks like anomaly or novelty detection. This paper is motivated by the older idea of sampling in latent space rather than data space because running a Monte-Carlo Markov Chain (MCMC) in latent space has been found to be easier and more efficient, and because a GAN-like generator can convert latent space samples to data space samples. For this purpose, we show how a Markov chain can be run in latent space whose samples can be mapped to data space, producing better samples. These samples are also used for the negative phase gradient required to estimate the log-likelihood gradient of the data space energy function. To maximize entropy at the output of the generator, we take advantage of recently introduced neural estimators of mutual information. We find that in addition to producing a useful scoring function for anomaly detection, the resulting approach produces sharp samples (like GANs) while covering the modes well, leading to high Inception and Fréchet scores.
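Aside: the discussion below repeatedly debates the score-norm regularizer Omega = ||dE(x)/dx||^2 that is added to the energy objective (the paper's Eq. 3). One reading of that update, written as a minimal PyTorch-style sketch, follows; the penalty weight `lam` and the mean reductions are assumptions, and the mutual-information-based entropy term for the generator is omitted.

```python
import torch

def energy_loss(E, x_data, x_gen, lam=0.1):
    """Energy-function objective: positive phase on data, negative phase on
    generator samples, plus the score-norm penalty ||dE/dx||^2 at data points
    (which pushes data points toward being energy minima)."""
    pos = E(x_data).mean()           # pull energy down on real data
    neg = E(x_gen.detach()).mean()   # push energy up on generator samples
    x = x_data.detach().requires_grad_(True)
    grad = torch.autograd.grad(E(x).sum(), x, create_graph=True)[0]
    penalty = (grad.reshape(grad.shape[0], -1) ** 2).sum(dim=1).mean()
    return pos - neg + lam * penalty
```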
[ "Energy based model", "Generative models", "MCMC", "GANs" ]
https://openreview.net/pdf?id=HJlmhs05tm
https://openreview.net/forum?id=HJlmhs05tm
ICLR.cc/2019/Conference
2019
{ "note_id": [ "r1xRO-QnkV", "SJxxa341JN", "SJl43Qz60Q", "rylBNM1p0Q", "B1e3y6R9CQ", "ryg6_Ob8CQ", "Sygm8_bUAX", "S1g9XO-8Rm", "SJe1WpEj6m", "B1xh8rOKpQ", "SkxyPcXtTX", "Hklnlcmtp7", "Hke-_uQKpX", "BJe6b_7Y6m", "r1xZNcnCh7", "rklO0eJC27", "H1xdBXjg3Q", "SyxC8mgZ5Q", "BJeKYRtx97", "rJgXAUSCtm", "rkeYbXIjtX" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review", "official_comment", "comment", "official_comment", "comment" ], "note_created": [ 1544462710083, 1543617720083, 1543476139562, 1543463469140, 1543331043717, 1543014517079, 1543014475314, 1543014433536, 1542307062889, 1542190419669, 1542171222762, 1542171124181, 1542170728784, 1542170628732, 1541487144584, 1541431504266, 1540563775595, 1538487126481, 1538461313514, 1538311883305, 1538118401006 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper695/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper695/Authors" ], [ "ICLR.cc/2019/Conference/Paper695/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper695/Authors" ], [ "ICLR.cc/2019/Conference/Paper695/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper695/Authors" ], [ "ICLR.cc/2019/Conference/Paper695/Authors" ], [ "ICLR.cc/2019/Conference/Paper695/Authors" ], [ "ICLR.cc/2019/Conference/Paper695/Authors" ], [ "ICLR.cc/2019/Conference/Paper695/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper695/Authors" ], [ "ICLR.cc/2019/Conference/Paper695/Authors" ], [ "ICLR.cc/2019/Conference/Paper695/Authors" ], [ "ICLR.cc/2019/Conference/Paper695/Authors" ], [ "ICLR.cc/2019/Conference/Paper695/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper695/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper695/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper695/Authors" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper695/Authors" ], [ "~Xiaojian_Ma1" ] ], "structured_content_str": [ "{\"metareview\": \"The proposed method is an extension of Kim & Bengio (2016)'s energy-based GAN. The novel contributions are to approximate the entropy regularizer using a mutual information estimator, and to try to clean up the model samples using some Langevin steps. Experiments include mode dropping experiments on toy data, samples from the model on CelebA, and measures of inception score and FID.\\n\\nThe paper is well-written, and the proposal seems sensible. But as various reviewers point out, the work is a fairly incremental extension of Kim and Bengio (2016). Most of the new elements, such as Langevin sampling and the gradient penalty, have also been well-explored in the deep generative modeling literature. It's not clear there is a particular contribution here that really stands out.\\n\\nThe experimental evidence for improvement is also fairly limited. Generated samples, inception scores, and FID are pretty weak measures for generative models, though I'm willing to go with them since they seem to be standard in the field. But even by these measures, there doesn't seem to be much improvement. I wouldn't expect SOTA results because of computational limitations, but the generated samples and quantitative evaluations seem worse than the WGAN-GP, even though the proposed method includes the gradient penalty and hence should be able to at least match WGAN-GP. 
The MCMC sampling doesn't appear to have helped, as far as I can tell.\n\nOverall, the proposal seems promising, but I don't think this paper is ready for publication at ICLR.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"incremental extension of past work; benefits unclear\"}", "{\"title\": \"Evaluation results for MCMC sampling\", \"comment\": \"> \"This further obscures the significance of section 4. And it is not good to state this in the appendix. Moreover, what is the evaluation result for the latent-space MCMC sampling?\"\n\nWe have conducted additional experiments to address the concerns raised by the reviewer. Your feedback has already been very helpful in improving the paper. The reason we mentioned it in the appendix is that we presented our algorithm as a generic framework, and we cited what we did experimentally in this specific case in the appendix (experiments section).\n\nAs per the reviewer's request for evaluation results for MCMC sampling: we had originally only shown qualitative samples on CelebA. Upon your request, we decided to run quantitative numbers on CIFAR-10 and noticed that we get 6.50 (+- 0.6) Inception Score and 36.75 FID after applying multiple iterations of MCMC sampling. \n\n> \"This response makes the paper more problematic. The question in my first post regarding correctness is still not answered.\"\n\nIf we understand right, the reviewer has raised 2 points regarding the correctness: \n1.) Justification for the gradient norm regularizer.\n2.) Whether latent-space sampling necessarily implies that P_EG becomes closer to P_E.\n\n1.) Gradient norm regularizer in Eq. (3): the regularizer ||dEnergy(x)/dx||^2 is not just a smoothness regularizer but it also makes data points x energy minima (because ||dEnergy(x)/dx|| should be 0 at data points). This thus helps to learn a better energy function. Note that this is similar in spirit to score matching, which also carves the energy function so that it has local minima at the training points. The regularizer also stabilizes the temperature (scale) of the energy function, making training stable. Our diverse set of experiments on Anomaly Detection and Natural Image Generation corroborates this result.\n\n2.) We admit that P_EG is driven closer to P_E only if we minimize KL(P_EG || P_E), which requires us to use samples from P_EG to maximize entropy and train the Generator. This is a slight modification which should be made to the general framework (algorithm) presented in Section 4.\n\nIn our specific paper, we mention that during training we minimize KL(P_G || P_E) [not KL(P_EG || P_E)]. As the reviewer points out, this means that latent-space MCMC from P_EG does not mathematically imply that we sample from P_E. However, it is a beneficial method of sampling from a region of P_E that hides the spurious modes of the energy function and provides better mixing.\n\nHowever, with the generic framework presented, it could well be made to sample from P_E, provided (as the reviewer mentioned) we minimize KL(P_EG || P_E).\n\nThank you again for the thoughtful review. We would like to know if our response adequately addressed your concerns. Are there any other aspects of the paper that you think could be improved? Are there particular additional methods that the reviewer would prefer a comparison to? Or anything else we can do to address the concern about the motivation? 
We think these additional analyses answer the reviewer's questions about the novelty and the application of the gradient norm regularizer. Does the reviewer still think that it's problematic? If yes, could the reviewer be precise about what they have in mind?\n\nOur work proposed a way to begin addressing these challenges, and compares extensively to prior papers and several ablations. Specifically, we tackle an important problem of training energy-based models with maximum likelihood, by providing a way to approximate the negative phase using a generator, which is trained using neural estimators of mutual information to approximate the entropy and a score norm penalty that is very crucial for training stability. We also provide a general, flexible framework where MCMC in latent space could be used to better approximate the negative phase and produce better samples which mix substantially better than visible-space MCMC. We corroborate our claims through a diverse set of experiments on toy data, discrete mode collapse, natural image generation and anomaly detection, and show that our results are very encouraging since our model does not exhibit mode dropping, produces sharp images and produces state-of-the-art results on anomaly detection among energy-based models.\"}", "{\"title\": \"This response makes the paper more problematic.\", \"comment\": \"> we trained our models with n_mcmc = 0 (as mentioned in Section 7.2)\n\nThis further obscures the significance of section 4. And it is not good to state this in the appendix. Moreover, what is the evaluation result for the latent-space MCMC sampling? \n\nThis response makes the paper more problematic.\n\nThe question in my first post regarding correctness is still not answered.\"}", "{\"title\": \"Clarifications regarding MCMC\", \"comment\": \"Yes, it is indeed correct that if we used samples from P_EG(x) to approximate samples from P_E, we would need to maximize the entropy H(p_EG). In our particular case, we trained our models with n_mcmc = 0 (as mentioned in Section 7.2). Hence we used samples from P_G to approximate P_E (not P_EG).\n\nIn short, we are NOT using MCMC in latent space (sampling from p_EG(x)) to produce the samples used to train the energy function E. We are using ordinary samples from p_G for this purpose, thus following equation 3 in the paper. The latent-space MCMC in this version of our work is introduced as a sampling mechanism, since MCMC in the visible space has issues with mixing and spurious modes. As mentioned in the rebuttals, the latent-space MCMC provides a way of performing MCMC sampling that hides the spurious modes of the energy function.\"}", "{\"title\": \"The method is problematic\", \"comment\": \"The reviewer would like to thank the authors for their response, which clarifies some unclear issues. However, the response does not address my concern about the correctness of the proposed method.\n\nParticularly, p_EG(x) (the distribution implicitly defined by composing the generator and the data-space energy) != p_G(x).\nNote that the entropy term H(p_G) arises from using samples from p_G to approximate sampling from p_E in maximum likelihood training of p_E. If using samples from p_EG(x) for approximation, then the entropy term would be H(p_EG). Estimation of H(p_EG) is needed. The whole method seems to be problematic, from this perspective.\n\nReviewer-2 shows the same concern.\"}", "{\"title\": \"Feedback by reviewer. Thanks for your time! 
:)\", \"comment\": \"We would appreciate it if the reviewer could take another look at our changes, and let us know if the reviewer would like to request additional changes that would alleviate reviewers concerns. We hope that our updates address the reviewer's concerns. We once again thank the reviewer for the feedback of our work.\"}", "{\"title\": \"Feedback by Reviewer. Thanks for your time! :)\", \"comment\": \"We would appreciate it if the reviewer could take another look at our changes, and let us know if the reviewer would like to request additional changes that would alleviate reviewers concerns. We hope that our updates address the reviewer's concerns about comparison to Kim and Bengio. We once again thank the reviewer for the feedback of our work.\"}", "{\"title\": \"Feedback by Reviewer. Thanks for your time! :)\", \"comment\": \"We would appreciate it if the reviewer could take another look at our changes, and let us know if the reviewer would like to request additional changes that would alleviate reviewers concerns. We hope that our updates address the reviewer's concerns. We once again thank the reviewer for the feedback of our work.\"}", "{\"title\": \"Further clarification\", \"comment\": \"We agree that the first line of the equations above is incorrect. Thanks for pointing it out. It works in the discrete case but in the continuous case it needs an extra factor for the determinant of G'. Note that this discussion is about future work to extend the submitted paper, so indeed, in order to encourage the density matching property we would need a regularizer (or a hard constraint) to make G not just invertible but also with G' having singular values as close to 1 as possible to preserve volume. We could use a NICE/NVP-like parametrization but the surprising thing is that experimentally the proposed method works without that hard constraining in the various settings we tried. One interesting hypothesis towards explaining this observation is that to first approximation the data density for images is almost discrete, in the sense that what matters to get good images is whether you are pretty much on the data manifold (with a very small tolerance for noise) or off of it, and not so much the relative density on the manifold. This is simply a consequence of the manifold hypothesis stating that probability mass concentrates on the manifold.\"}", "{\"title\": \"The explanation is problematic :(\", \"comment\": \"Thank you for following up my biggest concern.\\n\\nUnfortunately I don't think you can easily go away from this issue just by doing integrals. Otherwise all previous developments on normalising flows will be problematic! More specifically, they (e.g. NICE/real NVP/IAF/MAF) considered the following model:\\np_z(z) = N(0, I), x = G(z), G is invertible\\nAnd you can see all of them clearly wrote p_x(x) = p_z(G^{-1}(x)) |dz/dx|.\\n\\nI would suggest reading the following relevant paper which might be helpful to clear your confusions.\", \"https\": \"//arxiv.org/abs/1708.01529\\n\\nBest,\"}", "{\"title\": \"Addressing concern about latent-space MCMC not sampling from p_theta (2/2)\", \"comment\": \"** In general, we agree that the proposed MCMC sampling procedure is not sampling from p_theta and the energy function (see (1) why this is actually a good thing). But consider the special case (not enforced here) where G is invertible, i.e., there exists only one z such that G(z)=x for all x in the regions of interest. 
Let p_{EG}(x) be the density in x-space corresponding to x=G(z) and z sampled following the composed energy function E(G(z)). Then\n\np_{EG}(x) \\propto \\int 1_{G(z)=x} exp(-E(G(z))) dz \n = exp(-E(x)) \\int 1_{G(z)=x} dz \n = exp(-E(x)) \n\nwhere the first line (with 1_{} indicating a Dirac delta function) comes from considering all the z's which could give rise to the given x and weighting their probability by exp(-E(G(z))) (and ignoring the partition function as we only care about the relative probabilities here), the second line comes from the observation that G(z)=x for all the non-zero integrands so we can take the exponential out of the integral at G(z)=x, and the 3rd line from integrating a Dirac delta when there is only one point z which satisfies the condition, i.e., G is invertible. \n\nOngoing work is investigating how to make G approximately invertible using a reconstruction loss in z-space, although an alternative would be to structure G so that it is invertible by construction, as in NICE / real NVP.\"}", "{\"title\": \"Thanks for the constructive feedback, composing the generator with the energy function gets rid of spurious modes (1/2)\", \"comment\": \"We thank the reviewer for the positive and constructive feedback. We appreciate that the reviewer finds that our method is clearly explained.\n\n* \u201c1. The MCMC method essentially samples z from another EBM, where that EBM(z) has energy function -E_{\\\\theta}(G(z)), and then generate x = G(z). Note here EBM(z) != p(z). The key issue is, even when p_G(x) = p_{\\\\theta}(x), there is no guarantee that the proposed latent-space MCMC method would return x samples according to distribution p_{\\\\theta}(x). You can easily work out a counter example by considering G is an invertible transformation. Therefore I don't understand why doing MCMC on this latent-space EBM can help improve sample quality in x space.\u201d\n\nOur hypothesis is the following: composing the generator with the energy function gets rid of spurious modes of the energy which the generator cannot represent. If the generator did sample from these spurious modes, then the negative samples from the generator would have made the energy function learn to get rid of these spurious modes, via the 2nd term of eqn 3 when training the energy function. Hence we get a cleaned-up version of the energy function. Spurious modes of the energy function which have not been eliminated via training through eqn 3 are thus erased by this composition of G with E. Now there may be a price to pay for this, i.e., G may also be missing some modes (as usual with GANs). However, because we have the entropy maximization term (eqn 4), we at least train in a way that attempts to minimize this problem. We agree that the composed energy function is different from E. The other good thing about MCMC in the composed energy function is that it seems to also be easier, following the observations of Bengio et al 2013, because the data manifold has been somewhat flattened in the latent space of the generator.\n\n* \u201c2. Continuing point 1, with Algorithm 1 that only fits p_G(x) towards p_{\\\\theta}(x), I am confident that the negative phase gradient is still quite biased. 
* \u201c2. Continuing point 1, with Algorithm 1 that only fits p_G(x) towards p_{\theta}(x), I am confident that the negative phase gradient is still quite biased. Why not just use the latent-space MCMC sampler composited with G as the generator, and use these MCMC samples to train both the decoder G and the mutual information estimator?\u201d\n\nThis is a good idea, which we did not execute yet because it would slow down training 10-fold, but it is an interesting direction to follow up on.\n\n* \u201c3. I am not exactly sure why the gradient norm regularizer in (3) make sense here? True that it would be helpful to correct the bias of the negative phase, but why this particular form? We are not doing WGAN here and in general we don't usually put a Lipschitz constraint on the energy function. I've seem several GAN papers arguing that gradient penalty helps in cases beyond WGAN, but most of them are just empirical observations...\nAlso the Omega regularizer is computed on which x? On data? Do you know whether the energy is guaranteed to be minimized at data locations? In this is that appropriate to call Omega a regularizer?\u201d\n\nThe regularizer ||dEnergy(x)/dx||^2 is not just a smoothness regularizer but it also makes data points x energy minima (because ||dEnergy(x)/dx|| should be 0 at data points). This thus helps to learn a better energy function. Note that this is similar in spirit to score matching, which also carves the energy function so that it has local minima at the training points. The regularizer also stabilizes the temperature (scale) of the energy function, making training stable (avoiding continued growth of precision, inverse temperature, as training continues).
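Concretely, the penalty described above amounts to something like the following (a sketch only, written as TensorFlow-style Python since that is the framework we use; energy_net is a placeholder name, and the reduction assumes NHWC image batches):

import tensorflow as tf

def score_penalty(energy_net, x_data):
    # Omega = E_{x ~ data} ||dE(x)/dx||^2: a zero score at the training points
    # makes the data local minima of the energy, and keeping this norm small
    # also pins down the scale (temperature) of the energy during training.
    energy = energy_net(x_data)                 # shape [batch]
    score = tf.gradients(energy, x_data)[0]     # dE/dx, same shape as x_data
    per_example = tf.reduce_sum(tf.square(score), axis=[1, 2, 3])
    return tf.reduce_mean(per_example)

This term is added to the energy loss with a small coefficient.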
* \u201c2. Equation 3 is inconsistent with the energy update equation in Algorithm 1. The latter one makes more sense.\u201d\n\nSorry for the typo in eqn 3. The LHS should have been the gradient of L_E wrt theta, and Omega on the RHS should have been dOmega/dtheta.\n\n* \u201c3. Where is the ratio between the transition kernels in the acceptance ratio equation? In general for Langevin dynamics the transition kernel is not symmetric.\u201d\n\nThe correction term to be added to -E(G(z'))+E(G(z)) (where z' = new z, and z = old z) would be:\n\nlog (q(z|z')/q(z'|z)) = 0.5 ( ||eps||^2 - ||eps - sqrt(alpha/2)(E'(z) + E'(z'))||^2 )\n\nwhere q is the proposal distribution producing z' from z and E' = gradient of E. We tried using the full formula but it did not seem to make a discernible difference.\n\nPlease let us know if anything is unclear here or if there is any other comparison that would be helpful in clarifying things more.\"}", "{\"title\": \"Thanks for your feedback. Paper goes well beyond Kim & Bengio 2016\", \"comment\": \"We thank the reviewer for their time and feedback. We hope to address the concerns the reviewer has here.\n\n* \u201cThe justification of adding a gradient norm regularizer in Eq. (3) for turning a GAN discriminator into an energy function is not clear.\u201d\n\nGradient norm regularizer in Eq. (3): the regularizer ||dEnergy(x)/dx||^2 is not just a smoothness regularizer but it also makes data points x energy minima (because ||dEnergy(x)/dx|| should be 0 at data points). This thus helps to learn a better energy function. Note that this is similar in spirit to score matching, which also carves the energy function so that it has local minima at the training points. The regularizer also stabilizes the temperature (scale) of the energy function, making training stable.\n\n* \u201cIn my view, the paper is an extension of Kim&Bengio 2016\u201d\n\nWe strongly believe this paper goes well beyond Kim & Bengio 2016. First, a major issue of Kim & Bengio 2016 is that it used covariance to maximize entropy. When we tried reproducing the results in that paper, even with the help of the authors, we could not get stable results. Entropy maximization using a mutual information estimator is much more robust compared to covariance maximization. But that alone was not enough and we got strong improvements by using the gradient norm regularizer (see (3) below) which helped stabilize the training as well. Finally, we show a successful form of MCMC exploiting the generator latent space composed with the energy function and we show new and successful empirical results on anomaly detection and sharp image generation, something which had not been done earlier for an energy-based model (and certainly not by Kim & Bengio). We also direct the reviewer towards our empirical results on discrete mode collapse where we show our model naturally covers all the modes in that data (in the expanded, 10^4 mode StackedMNIST dataset) and also better matches the mode count distribution as evidenced by the very low KL divergence scores.\n\n* \u201cSampling in latent space and then converting to data space samples to approximate the sampling from p_theta is operationally possible. There are three distributions - the generator distribution p_G, the distribution p_comp implicitly defined by the latent-space energy obtained by composing the generator and the data-space energy, and the energy-based model p_E., p_G is trained to approximate p_E, since we minimize KL(p_G||p_E). Does latent space sampling necessarily imply that p_comp leads to be closer to p_E ?\u201d\n\nMCMC on the energy function p_E in data space did not give good results, while doing it in the latent space worked. We hypothesize that the reason for this is (a) walking on the data manifold is much easier in the latent space, as shown earlier by Bengio et al 2013 (because the data manifold has been somewhat flattened when represented in the latent space) and (b) composing the generator with the energy function gets rid of spurious modes of the energy which the generator cannot represent (if it did, then the negative samples from the generator would have made the energy function learn to get rid of these spurious modes, via the 2nd term of eqn 3 when training the energy function).\n\n* \u201cThe results of image generation in Table 2 on CIFAR-10 are worse than WGAN-GP, which is now in fact only moderately performed GANs. In a concurrent ICLR submission - \"Learning Neural Random Fields with Inclusive Auxiliary Generators\", energy-based models trained with their method are shown to significantly outperform WGAN-GP\u201d\n\nThe objective was not to beat the best GANs (which do not provide an energy function) but to show that it was possible to have both an energy function and good samples by appropriately fixing issues with the Kim & Bengio setup (and we clearly did not know about the concurrent ICLR submissions on energy-based models).\n\nPlease let us know if anything is unclear here or if there is any other comparison that would be helpful in clarifying things more.\"}", "{\"title\": \"Clarifying technical questions and providing justification\", \"comment\": \"We thank the reviewer for their time and feedback. We hope to address the concerns the reviewer has here.\n\n* \u201cHowever, the major problem of this paper is the novelty. The algorithm is basically an extension of the Kim&Bengio 2016 and Dai et.al.
2017, with other existing learning technique\\u201d\\n\\nWe strongly believe this paper goes well beyond Kim & Bengio 2016. First, a major issue of Kim & Bengio 2016 is that it used covariance to maximize entropy. When we tried reproducing the results in that paper, even with the help of the authors, we could not get stable results. Entropy maximization using a mutual information estimator is much more robust compared to covariance maximization. But that alone was not enough and we got strong improvements by using the gradient norm regularizer (see (3) below) which helped stabilize the training as well. Finally, we show a successful form of MCMC exploiting the generator latent space composed with the energy function and we show new and successful empirical results on anomaly detection and sharp image generation, something which had not been done earlier for an energy-based model (and certainly not by Kim & Bengio). We also direct the reviewer towards our strong empirical results on discrete mode collapse where we show our model naturally covers all the modes in that data (in the expanded, 10^4 mode StackedMNIST dataset) and also better matches the mode count distribution as evidenced by the very low KL divergence scores.\\n\\n\\n* \\u201cMaybe the only novel part is combining the MCMC with the learned generator for generating samples. However, the benefits of such combination is not well justified empirically. Based the figure 4, it seems the MCMC does not provide better samples, comparing to directly generate samples from G_z. It will be better if the authors can justify the motivation of using MCMC step.\\u201d\\n\\nOur contribution is also to enforce entropy maximization on the discriminator and using a regularizer on the energy-function to stabilize training. This specific combination was instrumental in obtaining our empirical result: (1) Covering all modes in our discrete mode collapse experiment where our model matches the mode count distribution of the data significantly better than WGAN-GP as pointed out in Section 5.2 and Table 1. (2) Using the learned energy function to perform anomaly detection, beating the previous SOTA energy-based model (DSEBM) by a large margin (as mentioned in Section 5.4 Table 3) and comparable to the SOTA anomaly detection method (DAGMM) which is purely designed for anomaly detection and not generative modeling (3) Natural image generation, where our energy-based method performs comparable to a strong WGAN-GP baseline in perceptual quality and doesn\\u2019t exhibit the common blurriness issue in standard maximum-likelihood trained EBMs (Section 5.3 Table 2).\", \"regarding_the_justification_of_latent_space_mcmc\": \"Note that the MCMC on the energy function in data space did not give good results, while doing it in the latent space worked. (Refer Figure 5 for data-space MCMC samples). We hypothesize that the reason for this is (a) walking on the data manifold is much easier in the latent space, as shown earlier by Bengio et al 2013 and (b) composing the generator with the energy function gets rid of spurious modes of the energy which the generator cannot represent (if it did, then the negative samples from the generator would have made the energy function learn to get rid of these spurious modes).\\n\\n* \\u201cSecondly, it is reasonable that the authors introduce the gradient norm as the regularization to the objective for training stability. 
However, it will be better if the effect of the regularization for the energy model estimation can be discussed.\\u201d\", \"effect_of_the_regularization_of_the_energy_function\": \"the regularizer ||dEnergy(x)/dx||^2 is not just a smoothness regularizer but it also makes data points x energy minima (because ||dEnergy(x)/dx|| should be 0 at data points). This thus helps to learn a better energy function. Note that this is similar in spirit to score matching, which also carves the energy function so that it has local minima at the training points (i.e it is helping to make data points as an energy minima). The regularizer also stabilizes the temperature (scale) of the energy function, making training stable.\\n\\n* \\u201cThe loss function for potential in Eq(3) is incorrect and inconsistent with the Algorithm 1. I think the formulation in the Algorithm box is correct.\\u201d\\n\\nIndeed there was a typo in eqn 3. The LHS should have been the gradient of L_E wrt theta, and Omega on the RHS should have been dOmega/dtheta.\"}", "{\"title\": \"An interesting combination of the recently developed techniques for a better algorithm\", \"review\": \"In this paper, the authors extend the framework proposed by Kim&Bengio 2016 and Dai et.al. 2017, which introduce an extra step to fit a generator to approximate the current model for estimating the deep energy model. Specifically, the generator is fitted by reverse KL divergence. To bypass the difficulty in handling the entropy term, the authors exploit the Deep INFOMAX formulation, which introduces one more discriminator. Finally, to obtain better samples, the authors inject the Metropolis-adjusted Langevin algorithm within the learned generator to generate samples in latent space. They demonstrate the better performances of the proposed algorithms in both synthetic and real-world datasets, and apply the learned model for anomaly detection task.\\n\\nThe paper is well-written and does a quite good job in combining several existing algorithms to obtain the ultimate algorithm. The algorithm achieves quite good empirical performances. However, the major problem of this paper is the novelty. The algorithm is basically an extension of the Kim&Bengio 2016 and Dai et.al. 2017, with other existing learning technique. Maybe the only novel part is combining the MCMC with the learned generator for generating samples. However, the benefits of such combination is not well justified empirically. Based the figure 4, it seems the MCMC does not provide better samples, comparing to directly generate samples from G_z. It will be better if the authors can justify the motivation of using MCMC step. \\n\\nSecondly, it is reasonable that the authors introduce the gradient norm as the regularization to the objective for training stability. However, it will be better if the effect of the regularization for the energy model estimation can be discussed.\", \"minor\": \"The loss function for potential in Eq(3) is incorrect and inconsistent with the Algorithm 1. I think the formulation in the Algorithm box is correct. \\n\\nIn sum, I personally like the paper as a nice combination of recently developed techniques to improve the algorithm for solving the remaining problem in statistics. 
The paper can be better if the above mentioned issues can be addressed\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"There are some unclear issues, regarding correctness and significance\", \"review\": \"It is well known that energy-based model training requires sampling from the current model.\\nThis paper aims to develop an energy-based generative model with a generator that produces approximate samples.\\nFor this purpose, this paper combines a number of existing techniques, including sampling in latent space, using a GAN-like technique to maximize the entropy of the generator distribution.\\nEvaluation experiments are conducted on toy 2D data, unsupervised anomaly detection, image generation.\\n\\nThe proposed method is interesting, but there are some unclear issues, which hurts the quality of this paper.\\n\\n1. Correctness\\n\\nThe justification of adding a gradient norm regularizer in Eq. (3) for turning a GAN discriminator into an energy function is not clear.\\n\\nSampling in latent space and then converting to data space samples to approximate the sampling from p_theta is operationally possible. There are three distributions - the generator distribution p_G, the distribution p_comp implicitly defined by the latent-space energy obtained by composing the generator and the data-space energy, and the energy-based model p_E.\\np_G is trained to approximate p_E, since we minimize KL(p_G||p_E). Does latent space sampling necessarily imply that p_comp leads to be closer to p_E ?\\n\\n2. Significance\\n\\nIn my view, the paper is an extension of Kim&Bengio 2016.\\nTwo extensions - providing a new manner to calculate the entropy term, and using sampling in latent space. In this regard, Section 3 is unnecessarily obscure.\\n\\nThe results of image generation in Table 2 on CIFAR-10 are worse than WGAN-GP, which is now in fact only moderately performed GANs. In a concurrent ICLR submission - \\\"Learning Neural Random Fields with Inclusive Auxiliary Generators\\\", energy-based models trained with their method are shown to significantly outperform WGAN-GP.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Interesting approach, but not fully justified\", \"review\": \"Thank you for an interesting read.\\n\\nThe paper proposes an approximate training technique for energy-based models (EBMs). More specifically, the samples used negative phase gradient in EBM training is approximated by samples from another generator. This \\\"approximate generator\\\" is a composition of a decoder (which, with a Gaussian prior on latent variable z, is trained to approximate the data distribution) and another EBM in latent space. The authors show connections to WGAN training, thus the name EnGAN. Experiments on natural image generation and anomaly detection show promising improvements, although not very significant.\\n\\nFrom my understanding of the paper, the main contribution of the paper comes from section 4, which proposes a latent-space MCMC scheme to improve sample quality. I have seen several papers fusing EBMs and GAN training together and to the best of my knowledge section 4 is novel (but with problems, see below). Section 3's recipe is quite standard, e.g. 
as seen in Kim and Bengio (2017), and in principle contrastive divergence also uses the same idea. The idea of estimating of the entropy term for the implicit distribution p_G with adversarial mutual information estimation is something new, although quite straight-forward.\\n\\nAlthough I do agree that MCMC mixing in x space can be much harder than MCMC mixing in z space, since I don't think the proposed latent-space MCMC scheme is exact (apart from finite-time simulation, rejection...), I don't see theoretically why the method works.\\n\\n1. The MCMC method essentially samples z from another EBM, where that EBM(z) has energy function -E_{\\\\theta}(G(z)), and then generate x = G(z). Note here EBM(z) != p(z). The key issue is, even when p_G(x) = p_{\\\\theta}(x), there is no guarantee that the proposed latent-space MCMC method would return x samples according to distribution p_{\\\\theta}(x). You can easily work out a counter example by considering G is an invertible transformation. Therefore I don't understand why doing MCMC on this latent-space EBM can help improve sample quality in x space.\\n\\n2. Continuing point 1, with Algorithm 1 that only fits p_G(x) towards p_{\\\\theta}(x), I am confident that the negative phase gradient is still quite biased. Why not just use the latent-space MCMC sampler composited with G as the generator, and use these MCMC samples to train both the decoder G and the mutual information estimator?\\n\\n3. I am not exactly sure why the gradient norm regulariser in (3) make sense here? True that it would be helpful to correct the bias of the negative phase, but why this particular form? We are not doing WGAN here and in general we don't usually put a Lipschitz constraint on the energy function. I've seem several GAN papers arguing that gradient penalty helps in cases beyond WGAN, but most of them are just empirical observations...\\nAlso the Omega regulariser is computed on which x? On data? Do you know whether the energy is guaranteed to be minimized at data locations? In this is that appropriate to call Omega a regulariser?\\n\\nThe presentation is overall clear, although I think there are a few typos and confusing equations:\\n\\n1. There should be a negative sign on the LHS of equation 2.\\n2. Equation 3 is inconsistent with the energy update equation in Algorithm 1. The latter one makes more sense.\\n3. Where is the ratio between the transition kernels in the acceptance ratio equation? In general for Langevin dynamics the transition kernel is not symmetric.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Clarifications on the details\", \"comment\": \"Thanks for your questions.\\n1.) Yes, as you point out, we noticed as well that our score norm penalty is similar to the WGAN-GP regularizer. Our intuition behind using the score penalty regularizer in our case is to have 0 score norm for training points because that corresponds to a minimum of the energy (and we want to carve the energy function to have minima at data points). That is, there shouldn't be any gradient with respect to the input for true data (or equivalently, 0 reconstruction error). This, in practice was sufficient to fix the temperature explosion issue as mentioned in Section 3. We also noticed that in practice, the absence of constraint on P_G (fake samples) does not cause unstable training or any numerical problems.\\n\\n2.) 
Yes, it is right to point out that if the distributions of P_\\\\theta and P_G do not have overlapping support, the KL divergence will be infinite and there will be no gradient signal to align the 2 distributions. However, our intuition is that this is less likely to happen since the entropy maximization term arising from the KL divergence will help align the supports of the 2 distributions and hence provide gradient signal to match the 2 distributions. Also, the KL gradient on P_G is basically doing two reasonable things: putting more probability mass where the energy is low and increasing entropy (otherwise all the mass could be concentrated in one point). Note how the latter term will prevent p_G to be too concentrated.\"}", "{\"comment\": \"This paper proposed an interesting idea on combining the GANs and energy based models.\", \"i_have_following_questions_on_the_details_of_this_paper\": \"1) In Equation (3), there is a regularization item added to avoid the \\\"numerical problems\\\". It looks very similar to the Gradient Penalty item in WGAN-GP[Gulrajani et al., 2017]. In WGAN-GP, the gradient norm regularizer is utilized on the linear interpolation space of P_D and P_G. While in algorithm 1 of this paper, I find the regularizer only penalized the point on real data distribution, and second term in Equation(3) can be infinity as there is no constraint on P_G. I think that this will still cause unstable training and numerical problems.\\n\\n2) To make P_\\\\theta and P_G match, the authors selected to minimize a KL divergence. As illustrated in [Arjovsky et al., 2017], KL divergence will behave poorly when the two distribution is without overlapping. I am curious about whether the KL divergence is suitable to estimate the difference on P_\\\\theta and P_G, especially when the distribution is high dimensional.\", \"title\": \"Some questions on the details\"}", "{\"title\": \"Thanks for the feedback and spotting the typo\", \"comment\": \"Jeasine, thanks for the feedback and spotting the sign typo! We will add the very relevant Finn et al 2016 reference.\"}", "{\"comment\": \"Previously I've read a NIPS workshop paper[Finn et.al., 2016] that try to reveal the inherent connection between training an energy-based model and a generative adversarial net. The main contribution of that paper is providing a full derivation of the equivalence of $L_G \\\\leftrightarrow KL(p_G || p_\\\\theta)$ and $L_D \\\\leftrightarrow \\\\mathbb{E}_{x \\\\sim p_{data}}\\\\left[-\\\\log{p_\\\\theta(x)}\\\\right]$. However, in that paper, this equivalence only holds when D takes the form of $\\\\frac{p_\\\\theta(x)}{p_\\\\theta(x) + p_G(x)}$, and this is so-called the \\\"optimal discriminator\\\" mentioned in [Goodfellow et.al., 2014]. But the problem is the discriminator actually cannot always hold such form during gradient descent, which implies that it's basically not appropriate to directly cast the training of original GAN as training an EBM as what [Finn et.al.,2016] claimed.\\n\\nIn this paper, the authors alternatively choose to optimize $KL(p_G || p_\\\\theta)$ in Eq.4 by explicitly maximizing the entropy of G with DIM estimator, such techniques eliminate the dependency of the optimal discriminator in [Finn et.al.,2016] while the equivalence to the EBM objective could still be retained. 
On the other hand, although the proposed method still relies on MCMC, sampling with the learned energy could be essential to optimizing strictly with the EBM objective in this generator-discriminator architecture, and the authors do report promising results compared with the original GAN and WGAN-GP.\n([Finn et.al., 2016] tries to prove that the original GAN training procedure implicitly contains the MCMC step for estimating the partition function, but such conclusions depend on the optimal discriminator form).\n\nOne minor suggestion: is there a typo in Sec.3? I think KL(p_G||p_\theta) = H[p_G] - E_{p_G}[log p_\theta(x)] should be KL(p_G||p_\theta) = -H[p_G] - E_{p_G}[log p_\theta(x)], a minus is missing.\n\n[Finn et.al., 2016] A Connection between Generative Adversarial Networks, Inverse Reinforcement Learning, and Energy-Based Models, in NIPS Workshop, 2016, https://arxiv.org/abs/1611.03852\n[Goodfellow et.al., 2014] Generative Adversarial Nets, in NIPS, 2014\"}" ] }
SyxXhsAcFQ
Cohen Welling bases & SO(2)-Equivariant classifiers using Tensor nonlinearity.
[ "Muthuvel Murugan", "K Venkata Subrahmanyam" ]
In this paper we propose autoencoder architectures for learning a Cohen-Welling (CW)-basis for images and their rotations. We use the learned CW-basis to build a rotation equivariant classifier to classify images. The autoencoder and classi- fier architectures use only tensor product nonlinearity. The model proposed by Cohen & Welling (2014) uses ideas from group representation theory, and extracts a basis exposing irreducible representations for images and their rotations. We give several architectures to learn CW-bases including a novel coupling AE archi- tecture to learn a coupled CW-bases for images in different scales simultaneously. Our use of tensor product nonlinearity is inspired from recent work of Kondor (2018a). Our classifier has very good accuracy and we use fewer parameters. Even when the sample complexity to learn a good CW-basis is low we learn clas- sifiers which perform impressively. We show that a coupled CW-bases in one scale can be deployed to classify images in a classifier trained and tested on images in a different scale with only a marginal dip in performance.
[ "group representations", "group equivariant networks", "tensor product nonlinearity" ]
https://openreview.net/pdf?id=SyxXhsAcFQ
https://openreview.net/forum?id=SyxXhsAcFQ
ICLR.cc/2019/Conference
2019
{ "note_id": [ "rJlJZP7ElN", "rke0Td-K0m", "ryxwx6cGAQ", "rJe_WwNyR7", "HJgtLS2STQ", "rkgZoE2rTX", "ryxWwVhrpQ", "HJl2WNnS6Q", "rklZjm3rTm", "r1gwWQ3Bpm", "H1gjQCG63m", "SygqtUCqnm", "Bye1UtGK2m" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544988406722, 1543211205687, 1542790382548, 1542567679680, 1541944656723, 1541944473262, 1541944408715, 1541944324411, 1541944217172, 1541944063378, 1541381667481, 1541232257968, 1541118278643 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper694/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper694/Authors" ], [ "ICLR.cc/2019/Conference/Paper694/Authors" ], [ "ICLR.cc/2019/Conference/Paper694/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper694/Authors" ], [ "ICLR.cc/2019/Conference/Paper694/Authors" ], [ "ICLR.cc/2019/Conference/Paper694/Authors" ], [ "ICLR.cc/2019/Conference/Paper694/Authors" ], [ "ICLR.cc/2019/Conference/Paper694/Authors" ], [ "ICLR.cc/2019/Conference/Paper694/Authors" ], [ "ICLR.cc/2019/Conference/Paper694/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper694/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper694/AnonReviewer2" ] ], "structured_content_str": [ "{\"metareview\": \"This paper studies group equivariant neural network representations by building on the work by [Cohen and Welling, '14], which introduced learning of group irreducible representations, and [Kondor'18], who introduced tensor product non-linearities operating directly in the group Fourier domain.\\n\\nReviewers highlighted the significance of the approach, but were also unanimously concerned by the lack of clarity of the current manuscript, making its widespread impact within ICLR difficult, and the lack of a large-scale experiment that corroborates the usefulness of the approach. They were also very positive about the improvements of the paper during the author response phase. The AC completely agrees with this assessment of the paper. Therefore, the paper cannot be accepted at this time, but the AC strongly encourages the authors to resubmit their work in the next conference cycle by addressing the above remarks (improve clarity of presentation and include a large-scale experiment).\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Interesting ideas, but currently not sufficiently well presented\"}", "{\"title\": \"Added experimental results on Fashion-MNIST\", \"comment\": \"We have experimented our algorithms on Fashion-MNIST dataset and reported the results in the current revision of the paper.\"}", "{\"title\": \"Details about computing Z_i\", \"comment\": \"We have added the details in the appendix of the current revision of the paper. Please let us know if you need more information.\"}", "{\"title\": \"More details in algorithmic block\", \"comment\": \"Thanks for responding. Can you give more details on how Z_i is computed in the algorithmic block?\"}", "{\"title\": \"Responses to AnonReviewer2\", \"comment\": \"Thanks for the review comments.\\n>>>Review: This paper deals with the issue of learning rotation invariant autoencoders and classifiers. While this problem is well motivated, I found that this paper was fairly weak experimentally, and I also found it difficult to determine what the exact algorithm was. 
For example, how the optimization was done is not discussed at all. At the same time, I'm not an expert in group theory, so it's possible that the paper has technical novelty or significance which I did not appreciate. \n\n[Reply] The algorithm for learning a CW basis is now stated explicitly in the appendix. We have summarised our approach in the response to AnonReviewer3. We have given it as comments titled \"Summary 1 of 2\" & \"Summary 2 of 2\".\n\nWe discuss our implementation now in Section 3.4. The main technical novelty is that equivariance is easily learned in the CW basis. As AnonReviewer1 points out, tensor product nonlinearity is perhaps more important than the basis itself.\n-----\n\n>>>Strengths: \n\n>>> -The challenge of learning rotation equivariant representations is well motivated and the idea of learning representations which transfer between different scales also seems useful. \n\n[Reply] Thanks for this encouragement.\n-----\n\n>>>Weaknesses: \n \n>>>-I had a difficult time understanding how the preliminaries (section 2) were related to the experiments (section 3). \n\n[Reply] Sorry about this. Perhaps a reason for confusion is that whereas we use the phrase G-morphism in Section 2, we use the phrase SO(2) equivariant maps in Section 3. These are the same. \n-----\n\n>>>-The reference (Kondor 2018) is used a lot but could refer to three different papers that are in the references. \n\n[Reply] Sorry about this. This is now corrected.\n-----\n\n>>> -Only reported results are on rotated mnist, but the improvements seem reasonable, but unless I'm missing something are worse than the 1.62% error reported by harmonic nets (mentioned in the introduction of the paper). In addition to rot-mnist, harmonic nets evaluated boundary detection on the berkeley segmentation dataset. \n\n[Reply] Yes, we get about 97%, less than what harmonic nets get, but the architecture is very simple. One aspect we did not emphasise much is the last column in Table 1. It is known that harmonic nets (and many other equivariant networks) need a large amount of data augmentation to perform well on MNIST-rot when trained on upright MNIST. We need no such augmentation once we have a reasonable W_{28}. In that sense our network is, like spherical CNNs and FFS2CNN, truly rotation equivariant. \n\nWe will take a look at the Berkeley segmentation data, see what harmonic nets do there, and see if we can conduct those experiments. Thanks for this suggestion.\n-----\n\n>>> -It's interesting that the model learns to be somewhat invariant across scales, but I think that the baselines for this could be better. For example, using a convolution network with mean pooling at the end, one could estimate how well the normal classifier handles evaluation at a different scale from that used during training (I imagine the invariance would be somewhat bad but it's important to confirm). \n\n[Reply] Thanks for this suggestion. We have run these experiments. We trained a CNN with about 489K parameters on MNIST-rot 28x28 images, getting 95.1 percent accuracy. When this was fed 14x14 images scaled up to 28x28, we get 90.5 percent accuracy. Should we report this in the main paper?\n-----\n\n>>>Questions: \n\n>>>-Section 3.1 makes reference to \"learning parameters\". I assume that this is done in the usual way with backpropagation and then SGD/Adam or something? \n\n[Reply] Yes, backpropagation using the Adam optimiser. We make this explicit in Section 3.4.\n-----\n\n>>>-How is it guaranteed that W is orthogonal in the learning procedure? \n\n[Reply] Sorry, we should have mentioned this - we do so now - we add a regularizer to the reconstruction loss.
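One standard way to realize such a term is a soft orthogonality penalty on the basis matrix W (the sketch below is TensorFlow-style Python with our own placeholder names; we are not claiming this exact form is what the code uses):

import tensorflow as tf

def orthogonality_penalty(W):
    # ||W^T W - I||_F^2 vanishes exactly when the columns of W are
    # orthonormal, so adding this to the reconstruction loss softly
    # pushes the learned CW basis towards orthogonality.
    gram = tf.matmul(W, W, transpose_a=True)
    identity = tf.eye(tf.shape(W)[1])
    return tf.reduce_sum(tf.square(gram - identity))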
\n-----\"}", "{\"title\": \"Summary 2 of 2\", \"comment\": \"Learning CW basis: In experiment 1, to construct a CW basis our hyperparameters are [a_0,a_1,..,a_24], a_i denoting the number of CW basis vectors indexed by i in the Fourier space of input images. Likewise b=[b_0,..] are hyperparameters of the Fourier space of the range V of the filter (equivariant map) $\\phi$. So \\phi is an appropriately sized block diagonal matrix taking us in an equivariant manner from the Fourier space of images to V. And we need to learn \\phi. \n\nHaving decided b_i, we know from Remark 4 the multiplicities of CW basis vectors indexed by i in the Fourier domain of the vector space V \\otimes V. We come down from the Fourier space of V \\oplus (V \\otimes V) to the Fourier space of the input images by a filter (equivariant map) \\psi. So the composite map, denoted $\\psi((\\phi(\\hat{y}) \\otimes \\phi(\\hat{y})) \\oplus \\phi(\\hat{y}))$, is a filter from the Fourier space of input images back to itself. The variables of the filter are the entries of the block diagonal matrices describing \\psi and \\phi. And this composite map is a nonlinear function of the CW coefficients \\hat{y}, obtained using the tensor product. \n\nWe do not know the Fourier basis W_{28} of input images to begin with. Given about 500 28x28 images we can discover a good Fourier basis which works for all images. Please refer to the algorithm given now in the appendix. Setting a_i and b_i as given gives us the Fourier basis W_{28}, whose plots are given in Figures 4, 5 and 6. \n\nThe second and third experiments are similar. But we are learning more filters (equivariant maps) in the second experiment since we are learning a W_{14} also.\n\nAs AnonReviewer1 explicitly points out, we do not compute the CW basis vectors of the intermediate vector space V nor of V \\otimes V. We only need to compute the CW basis of the image space and use tensor product nonlinearity.\n\nClassification: The CW coefficients of an image are its \"elementary linear features\" (paragraph 8 of page 3). We can construct more abstract features by multiplying these coefficients and taking linear combinations of coefficients indexed by the same integer n.\n\nHyperparameters are L_1=[l_10,l_11,..], the multiplicities of the CW basis of the Fourier space of the range of \\phi_1, and L_2=[l_20,..], the multiplicities of the range of \\phi_2. And once again the filters (equivariant maps) $\\phi_1, \\phi_2$, to be learned, are block diagonal linear transformations.\n\nAll of this is implemented as neural networks in TensorFlow. We can release our code anytime. \n\nImplementation details are stated explicitly now in Section 3.4.\n\nAgain we only need a good vector space basis of the Fourier space of input images - we don't need to explicitly compute a CW basis of the intermediate Fourier spaces we encounter. And again we use tensor products to compute more abstract features.
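As a toy illustration of this last point, identify a real type-m coefficient pair (a, b) with the complex number a+ib; a rotation by theta then multiplies a type-m coefficient by e^{i m theta}, and the covariant quadratic features of Remark 4 are plain products (a numpy sketch with our own variable names, independent of the actual code):

import numpy as np

def rotate(c, m, theta):
    # action of a rotation by theta on a CW coefficient of type m
    return np.exp(1j * m * theta) * c

def tensor_features(c_m, c_n):
    # quadratic covariant features of types m+n and m-n (Remark 4)
    return c_m * c_n, c_m * np.conj(c_n)

# equivariance check: rotating the inputs rotates the outputs as types m+n, m-n
m, n, theta = 2, 3, 0.7
c_m, c_n = 1.0 + 2.0j, -0.5 + 1.5j
f1, f2 = tensor_features(rotate(c_m, m, theta), rotate(c_n, n, theta))
g1, g2 = tensor_features(c_m, c_n)
assert np.allclose(f1, rotate(g1, m + n, theta))
assert np.allclose(f2, rotate(g2, m - n, theta))

Multiplying and conjugating coefficients in this way is exactly the tensor product nonlinearity used throughout.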
\"}", "{\"title\": \"Summary 1 of 2\", \"comment\": \"Summary of work: A conventional neural network is equivariant to translations - i.e. whether we translate the image and convolve it with a filter, or we convolve the image first and then translate it, we get the same result. We wish to implement this with rotations - to do this it is easier to work in the \"Fourier space\" of the group SO(2), which consists of functions on irreducible representations of SO(2). However a vector space basis in the Fourier space is not readily available (this is what we call a CW basis - Cohen and Welling compute one in their paper). Now every CW basis vector comes indexed by a non-negative integer (Paragraph 5 on Page 3). There could be multiple basis vectors indexed with the same integer, which is called the multiplicity (Paragraph 6 of page 3). So in the Fourier space an image is a linear combination of CW basis vectors with coefficients (which depend upon the image). Let's call them CW coefficients. We don't need the CW basis to span the entire Fourier space; it suffices to find enough basis vectors which give a good approximation (Paragraph 7, page 3). \n\nThis is what we first compute, given a reasonable number of samples of the images and their rotations. \n\nConvolving an image by a G-equivariant filter means that whether we rotate an image and then convolve, or first convolve with the filter and then rotate, has the same effect. This translates to the following in the Fourier domain - taking linear combinations of CW coefficients of CW-basis vectors of the same type m. The variables of this filter are entries of this linear map. So if we have a Fourier space V and a CW basis with $m_i$ basis vectors indexed by integer $i$ and another Fourier space W with $n_i$ basis vectors indexed by integer $i$, the search space for G-equivariant filters in the Fourier domain is of dimension $\\sum_i m_i n_i$ - corresponding to block diagonal matrices, the i-th block being of size n_i m_i (paragraph 4 on page 3).\n\nThe natural nonlinearity in the Fourier space is multiplication of CW coefficients. When we multiply the coefficient of a basis vector of type m and the coefficient of a basis vector of type n, we get two coefficients, for basis vectors of type m+n and m-n (content of Remark 4). These are quadratic functions, NOT linear functions, of the starting CW coefficients - since this nonlinearity is obtained from the tensor product of irreducible representations we call it tensor product nonlinearity.\n\nAll our learning happens in the Fourier world.\"}", "{\"title\": \"Responses to AnonReviewer3 - 2 of 2\", \"comment\": \">>>* The baseline methods should also be run on the smaller numbers of examples (500 or 12K) that the proposed approach is run on.\n\n[Reply] Sorry, we are not clear about what you mean - since we have decoupled learning the bases and using them for classification, we are not sure what it would mean to run the baselines on 500 samples - when deploying for classification we train our classifier on the full training set of 12K samples of MNIST-ROT. What we are saying is only that a good basis for reconstruction can be learned with 500 input samples. And the projected linear CW coefficients they give us are good for classification - our classifier constructs (up to) degree six polynomials in these coefficients and then does a softmax classification.\n-----\n\n>>>* A planar CNN baseline should be considered for the autoencoder experiments.\n\n[Reply] Sorry, again it is not clear to us what you mean. A CNN autoencoder will probably settle down and reconstruct the image well.
But we will need to train it for classification - it is not clear to us how to use the parameters that such a CNN autoencoder learns during reconstruction, when it is deployed for classification. \n-----\n\n>>>* Validating on MNIST alone (rotated, spherical, or otherwise) isn\u2019t good enough in 2018. The conclusions section mentions testing the models with deeper nets on CIFAR, but the results are not reported -- only hinting that it doesn\u2019t work well. This doesn\u2019t inspire much confidence.\n\n[Reply] Yes, we agree that we need to do more. However we started focussing on experiments with symmetric group representations mentioned in the conclusions because this hasn't been studied much, and has some interesting connections. That said, we will surely take up your suggestion and resume working with CIFAR.\n-----\n\n>>>* Why are Spherical CNNs (Cohen et al., 2018) a good baseline for this dataset? The MNIST-rot data is not spherical.\n\n[Reply] The authors embed MNIST-rot images on a sphere and then test their model. Their models and ours do their learning in the Fourier space.\n-----\n\n>>>* Table 1: The method labels (Ours, 28/14 Tensor, and 28/14 Scale) are not very clear (though they are described in the text)\n\n[Reply] Thanks for the suggestion. We have expanded on this. Please let us know if this reads better.\n-----\n\n>>>*Table 1: Why not include the classification results for the standard AE? (They are in the Fig. 6 plot, but not the table.)\n\n[Reply] Since the accuracies of our AE are similar to those of our CAE we have not included them. We could certainly do it. Please let us know your feedback on this.\n-----\n\n>>>* Conclusions: \u201cWe believe our classifiers built from bases learnt in a CAE architecture should be robust to noise\u201d -- Why? No reasons are given for this belief.\n\n[Reply] We believe the coupled bases will be robust because they do work well on downsampled images - of course all this needs to be tested. It is something we would like to do.\n-----\n\n>>>* There are many typos and grammatical errors and odd/inconsistent formatting (e.g., underlined subsection headers) throughout the paper that should be revised.\n\n[Reply] Sorry about this. We have revised it accordingly and we think all typos are now taken care of.\n-----\"}", "{\"title\": \"Responses to AnonReviewer3 - 1 of 2\", \"comment\": \"Thanks for the review comments.\n>>>I found most of this submission difficult to read and digest. I did not understand much of the exposition. ... I don\u2019t doubt this paper makes some interesting and important contributions -- I just don\u2019t understand what they are.\n\n[Reply] We are sorry that you found the paper difficult to read. We think one source of confusion could be that we never explicitly stated that the term G-morphism used in Section 2 is the same as equivariant map used in Section 3. We do so now.\n\nWe have a short summary of what we do in the comments titled \"Summary 1 of 2\", using language more familiar to the ML community. Hope this helps. In \"Summary 2 of 2\" (again short) we show how we apply this. We could try and incorporate this into the main paper.\n-----\n\n>>>Here are some specific comments and questions, mostly on the proposed approaches and experiments:\n\n>>>* What actually is the \u201ctensor (product) nonlinearity\u201d? Given that this is in the title and is repeatedly emphasized in the text, I expected that it would be presented much more prominently.
But after reading the entire paper I\u2019m still not 100% sure what \"tensor nonlinearity\" refers to.\u201d\n\n[Reply] We hope this is answered in the explanation given in the comments titled \"Summary 1 of 2\".\n-----\n\n>>>* Experiments: all models are described in long-form prose. It\u2019s very difficult to read and follow. This could be made much clearer with an algorithm box or similar.\n\n[Reply] Since we were referring to diagrams to explain the algorithm, we felt it was easier to follow it this way. However, we have written Experiment 1 as an algorithm as suggested by you and AnonReviewer1. Currently it is in the appendix. Please let us know if this should replace the long text.\n-----\n\n>>>* The motivation for the \u201cCoupled Autoencoder\u201d model isn\u2019t clear. What, intuitively, is to be gained from reconstructing a high-resolution image from a low-resolution basis and vice versa? The empirical gains are marginal.\n\n[Reply] That the abstract elementary features learned from such a basis should be invariant to scale is the motivation for defining this. When we started, we expected that features learned from this basis would be superior at classification, but our experiments show that is not the case. However, the coupled bases can be used interchangeably, as we show in Section 3.3 (Results of classification, Coupling interchangeability). Our experiments also show that we can simultaneously learn Fourier bases in different scales, which can later deal with downsampled images.\n\nAs an application, consider the problem of farmers having to deal with pests which they do not recognize, while being limited by bandwidth and not having cellphones which take high resolution images. A solution would be to have a high-end server deployed at a central location which is trained to recognise pests using, say, the basis from a coupled autoencoder. When a farmer sees a new pest she could take a photograph of it on her cell phone and transmit this low resolution image to the server - the server can then use our model. (This needs to be tested on real world examples, something we hope to take up.)\n-----\n\n>>>* Experiments: the structure of the section is hard to follow. (1) and (2) are descriptions of two different models to do the same thing (autoencoding); then (3) (bootstrapping) is another step done on top of (1), and finally (4) is a classifier, trained on top of (1) or (2). This could benefit from restructuring.\n\n[Reply] Thanks for the suggestion - we have restructured it a little by giving subsection headings and we have rewritten some parts.\n-----\n\n>>>* There are long lists of integer multiplicities a_i and b_i: these seem to come out of nowhere, with no explanation of how or why they were chosen -- just that they result in \u201clearn[ing] a really sharp W_28\u201d. Why not learn them?\n\n[Reply] These are hyperparameters fine-tuned by us - how many CW basis vectors to choose indexed by the integers 0, 1, ..., 24 respectively. As for learning them, yes, we could try learning them and would like to carry out experiments to see if that works. \n-----\n \n>>>* How are the models optimized? (Which optimizer, hyperparameters, etc.?)\n\n[Reply] We mention this now explicitly in an Implementation section (Section 3.4).\n\nWe use the Adam optimiser and implement all of this in TensorFlow. We just used what TensorFlow offers with no modification.
Everywhere hyperparameters are multiplicities of irreducible representations in the domain and range of our SO(2) equivariant maps \\\\psi and \\\\phi . We do mention hyperparameters explicitly in all our experiments.\\n-----\"}", "{\"title\": \"Responses to AnonReviewer1\", \"comment\": \"Thanks for the review comments.\\n>>>The paper is a little rough around the edges. In the first 4 pages it launches into an exposition of ideas from representation theory which is too general for the purpose: SO(2) is a really simple commutative group, so the way that \\\"tensor product\\\" representations reduce to irreducibles could be summed up in the formula $e^{-2\\\\pi i k_1 x}e^{-2\\\\pi i k_2 x}=e^{-2\\\\pi i (k_1+k_2) x}$. I am not sure why the authors choose to use real representations (maybe because complex numbers are not supported in PyTorch, but this could easily be hacked) and I find that the real representations make things unnecessarily complicated. I suspect that at the end of the day the algorithm does something very simple (please clarify if working with real representations is somehow crucial). \\n\\n[Reply] Yes what you say is absolutely correct - that we needn't have presented it in this generality. But one reason to do this was to suggest that the same idea will work for other groups also if that group acts naturally on objects like we have SO(2) acting on images. All you need is to understand how tensor products of irreducibles split for that group. As we mention in the conclusion we have begun exploring with the symmetric group. \\n\\nWe have implemented our algorithms in the complex world also and the results are almost the same. However since we are using tensor flow we decided to work with reals. And since we wished to highlight the Cohen -Welling paper as one of our inspirations, we work over reals following what Cohen and Welling do.\\n-----\\n\\n>>>But this is exactly the beauty of the approach. The whole algorithm is very rough, there are only two layers (!), no effort to carefully implement nice exact group convolutions, and still the network is as good as the competition. Another significant point is that this network is only equivariant to rotations and not translations. \\n\\n[Reply] Thanks for these encouraging comments.\\n-----\\n\\n>>>1. Having a covariant nonlinearity is strong enough of a condition to force the network to learn a group adapted (Cohen-Welling) basis. This is interesting because Fourier space (\\\"tensor\\\") nonlinearities are a relatively new idea in the literature. This finding suggests that the nonlinearity might actually be more important than the basis.\\n\\n[Reply] Thanks for making this so explicit. We will include this in our paper. Please refer to the explanation given to AnonReviewer3, where we summarise our work and point to this remark of yours.\\n-----\\n\\n>>>2. The images that the authors work on are not functions on R^2, but just on a 28x28 grid. Rotating a rasterized image with eg. scikit-rotate introduces various artifacts. Similarly, going back and forth between a rasterized and polar coordinate based representation (which is effectively what would be required for \\\"Harmonic Networks\\\" and other Fourier methods) introduces messy interpolation issues. Not to mention downsampling, which is actually addressed in the paper. If a network can figure out how to best handle these issues from data, that makes things easier.\\n\\n[Reply] Again, thanks for the encouraging comments. 
We will emphasize these points.\\n-----\\n\\n>>>The experiments are admittedly very small scale, although some of the other publications in this field also only have small experiments. At the very least it would be nice to have standard deviations on the results and some measure of statistical significance. It would be even nicer to have some visualization of the learned bases/filters, and a bare bones matrix-level very simple description of the algorithm. Again, what is impressive here is that such a small network can learn to do this task reasonably well.\\n\\n[Reply] Thanks for this suggestion. We have given a separate table with some statistics of our experiments - our earlier table reported accuracies in the scale 0 to 1 but deviations are better expressed in percentage. So we have put a new table. Should we replace the earlier table with the new table (adding the percentage accuracies of the baseline models)? And pictures of filters are now in the appendix. And again thanks for appreciating that a small network suffices. We have a complete description of Experiment 1 as an algorithm in the appendix now. Should we put this in place of the current text?\\n\\n-----\\n\\n>>>Suggestions: \\n\\n>>>1. Also cite the Tensor Field Networks of Thomas et al in the context of tensor product nonlinearities.\\n\\n[Reply] Thanks for pointing this out. We will make an explicit reference to this in the next revision.\\n------\\n\\n>>>2. Clean up the formatting. \\\"This leads us to the following\\\" in a line by itself looks strange. Similarly \\\"Classification ising the learned CW-basis\\\". I think something went wrong with \\\\itemize in Section 3.1. \\n\\n[Reply] Sure. Sorry for this. It has been cleaned up.\\n-----\"}", "{\"title\": \"Difficult to read, insufficient evaluation\", \"review\": [\"This paper proposes autoencoder architectures based on Cohen-Welling bases for learning rotation-equivariant image representations. The models are evaluated by reconstruction error and classification in the space of the resulting basis on rotated-MNIST, showing performance improvements with small numbers of parameters and samples.\", \"I found most of this submission difficult to read and digest. I did not understand much of the exposition. I\\u2019ll freely admit I haven\\u2019t followed this line of work closely, and have little background in group theory, but I doubt I\\u2019m much of an outlier among the ICLR audience in that regard. The \\u201cPreliminaries\\u201d section is very dense and provides little hand-holding for the reader in the form of context, intuition, or motivation for each definition and remark it enumerates. I can't tell how much of the section is connected to the proposed models. (For comparison, I skimmed the prior work that this submission primarily builds upon (Cohen & Welling, 2014) and found it relatively unintimidating. It gently introduces each concept in terms that most readers familiar with common machine learning conventions would be comfortable with. It's possible to follow the overall argument and get the \\\"gist\\\" of the paper without understanding every detail.)\", \"All that being said, I don\\u2019t doubt this paper makes some interesting and important contributions -- I just don\\u2019t understand what they are.\", \"Here are some specific comments and questions, mostly on the proposed approaches and experiments:\", \"What actually is the \\u201ctensor (product) nonlinearity\\u201d? 
Given that this is in the title and is repeatedly emphasized in the text, I expected that it would be presented much more prominently. But after reading the entire paper I\\u2019m still not 100% sure what \\u201ctensor nonlinearity\\u201d refers to.\", \"Experiments: all models are described in long-form prose. It\\u2019s very difficult to read and follow. This could be made much clearer with an algorithm box or similar.\", \"The motivation for the \\u201cCoupled Autoencoder\\u201d model isn\\u2019t clear. What, intuitively, is to be gained from reconstructing a high-resolution image from a low-resolution basis and vice versa? The empirical gains are marginal.\", \"Experiments: the structure of the section is hard to follow. (1) and (2) are descriptions of two different models to do the same thing (autoencoding); then (3) (bootstrapping) is another step done on top of (1), and finally (4) is a classifier, trained on top of (1) or (2). This could benefit from restructuring.\", \"There are long lists of integer multiplicities a_i and b_i: these seem to come out of nowhere, with no explanation of how or why they were chosen -- just that they result in \\u201clearn[ing] a really sharp W_28\\u201d. Why not learn them?\", \"How are the models optimized? (Which optimizer, hyperparameters, etc.?)\", \"The baseline methods should also be run on the smaller numbers of examples (500 or 12K) that the proposed approach is run on.\", \"A planar CNN baseline should be considered for the autoencoder experiments.\", \"Validating on MNIST alone (rotated, spherical, or otherwise) isn\\u2019t good enough in 2018. The conclusions section mentions testing the models with deeper nets on CIFAR, but the results are not reported -- only hinting that it doesn\\u2019t work well. This doesn\\u2019t inspire much confidence.\", \"Why are Spherical CNNs (Cohen et al., 2018) a good baseline for this dataset? The MNIST-rot data is not spherical.\", \"Table 1: The method labels (Ours, 28/14 Tensor, and 28/14 Scale) are not very clear (though they are described in the text)\", \"Table 1: Why not include the classification results for the standard AE? (They are in the Fig. 6 plot, but not the table.)\", \"Conclusions: \\u201cWe believe our classifiers built from bases learnt in a CAE architecture should be robust to noise\\u201d -- Why? No reasons are given for this belief.\", \"There are many typos and grammatical errors and odd/inconsistent formatting (e.g., underlined subsection headers) throughout the paper that should be revised.\"], \"rating\": \"3: Clear rejection\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}", "{\"title\": \"A bit rough around the edges, but there are some interesting lessons to be learned here if one reads between the lines.\", \"review\": \"Recently there has been a spate of work on generalized CNNs that are equivariant to various symmetry groups, such a 2D and 3D rotations, the corresponding Euclidean groups (comprising not just rotations but also translations) and so on. The approach taken in most of the recent papers is to explicitly build in these equivariances by using the appropriate generalization of convolution. In the case of nontrivial groups this effectively means working in Fourier space, i.e., transforming to a basis that is adapted to the group action. 
This requires some considerations from representation theory.\\n\\nEarlier, however, there was some less recognized work by Cohen and Welling on actually learning the correct basis itself from data. The present paper takes this second approach, and shows that, for a simple task like rotated MNIST, the basis can be learned from a remarkably small amount of data, and actually performs even better than some of the fixed basis methods. There is one major caveat: the nonlinearity itself has to be rotation-covariant, and for this purpose they use the recently introduced tensor product nonlinearities. \\n\\nThe paper is a little rough around the edges. In the first 4 pages it launches into an exposition of ideas from representation theory which is too general for the purpose: SO(2) is a really simple commutative group, so the way that \\\"tensor product\\\" representations reduce to irreducibles could be summed up in the formula $e^{-2\\\\pi i k_1 x}e^{-2\\\\pi i k_2 x}=e^{-2\\\\pi i (k_1+k_2) x}$. I am not sure why the authors choose to use real representations (maybe because complex numbers are not supported in PyTorch, but this could easily be hacked) and I find that the real representations make things unnecessarily complicated. I suspect that at the end of the day the algorithm does something very simple (please clarify if working with real representations is somehow crucial). \\n\\nBut this is exactly the beauty of the approach. The whole algorithm is very rough, there are only two layers (!), no effort to carefully implement nice exact group convolutions, and still the network is as good as the competition. Another significant point is that this network is only equivariant to rotations and not translations. \\n\\nNaturally, the question arises why one would want to learn the group adapted basis, when one could just compute it explicitly. There are two interesting lessons here that the authors could emphasize more:\\n\\n1. Having a covariant nonlinearity is strong enough of a condition to force the network to learn a group adapted (Cohen-Welling) basis. This is interesting because Fourier space (\\\"tensor\\\") nonlinearities are a relatively new idea in the literature. This finding suggests that the nonlinearity might actually be more important than the basis.\\n\\n2. The images that the authors work on are not functions on R^2, but just on a 28x28 grid. Rotating a rasterized image with e.g. scikit-rotate introduces various artifacts. Similarly, going back and forth between a rasterized and polar coordinate based representation (which is effectively what would be required for \\\"Harmonic Networks\\\" and other Fourier methods) introduces messy interpolation issues. Not to mention downsampling, which is actually addressed in the paper. If a network can figure out how to best handle these issues from data, that makes things easier.\\n\\nThe experiments are admittedly very small scale, although some of the other publications in this field also only have small experiments. At the very least it would be nice to have standard deviations on the results and some measure of statistical significance. It would be even nicer to have some visualization of the learned bases/filters, and a bare bones matrix-level very simple description of the algorithm. Again, what is impressive here is that such a small network can learn to do this task reasonably well.\\n\\nSuggestions:\\n\\n1. Also cite the Tensor Field Networks of Thomas et al in the context of tensor product nonlinearities.\\n\\n2. 
Clean up the formatting. \\\"This leads us to the following\\\" in a line by itself looks strange. Similarly \\\"Classification ising the learned CW-basis\\\". I think something went wrong with \\\\itemize in Section 3.1.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"An Important Problem, but insufficient experiments and unsure about some details of method\", \"review\": \"Review: This paper deals with the issue of learning rotation invariant autoencoders and classifiers. While this problem is well motivated, I found that this paper was fairly weak experimentally, and I also found it difficult to determine what the exact algorithm was. For example, how the optimization was done is not discussed at all. At the same time, I'm not an expert in group theory, so it's possible that the paper has technical novelty or significance which I did not appreciate.\", \"strengths\": \"-The challenge of learning rotation equivariant representations is well motivated and the idea of learning representations which transfer between different scales also seems useful.\", \"weaknesses\": \"-I had a difficult time understanding how the preliminaries (section 2) were related to the experiments (section 3). \\n\\n-The reference (Kondor 2018) is used a lot but could refer to three different papers that are in the references. \\n\\n -Only reported results are on rotated mnist, but the improvements seem reasonable, but unless I'm missing something are worse than the 1.62% error reported by harmonic nets (mentioned in the introduction of the paper). In addition to rot-mnist, harmonic nets evaluated boundary detection on the berkeley segmentation dataset. \\n\\n -It's interesting that the model learns to be somewhat invariant across scales, but I think that the baselines for this could be better. For example, using a convolution network with mean pooling at the end, one could estimate how well the normal classifier handles evaluation at a different scale from that used during training (I imagine the invariance would be somewhat bad but it's important to confirm).\", \"questions\": \"-Section 3.1 makes reference to \\\"learning parameters\\\". I assume that this is done in the usual way with backpropagation and then SGD/Adam or something? \\n\\n-How is it guaranteed that W is orthogonal in the learning procedure?\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}" ] }
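The SO(2) reduction formula quoted in the review above can be checked numerically. Below is a minimal sketch, assuming complex degree-k Fourier features; it illustrates the identity itself and is not code from the reviewed paper (which works with real representations):

```python
# Numerical check that the tensor (product) nonlinearity for SO(2) adds
# frequencies: e^{-2 pi i k1 x} * e^{-2 pi i k2 x} = e^{-2 pi i (k1+k2) x}.
import numpy as np

rng = np.random.default_rng(0)
alpha = rng.uniform()                      # rotation, as a fraction of a full turn
k1, k2 = 1, 3                              # irrep degrees (frequencies)
f1 = rng.normal() + 1j * rng.normal()      # a degree-k1 feature
f2 = rng.normal() + 1j * rng.normal()      # a degree-k2 feature

def rotate(feature, k, a):
    """Action of a rotation by a on a degree-k complex Fourier feature."""
    return np.exp(-2j * np.pi * k * a) * feature

# Rotating the inputs and then multiplying agrees with multiplying first and
# rotating the product as a degree-(k1 + k2) feature, i.e. the product
# nonlinearity is equivariant.
lhs = rotate(f1, k1, alpha) * rotate(f2, k2, alpha)
rhs = rotate(f1 * f2, k1 + k2, alpha)
assert np.allclose(lhs, rhs)
```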
HkeGhoA5FX
Residual Non-local Attention Networks for Image Restoration
[ "Yulun Zhang", "Kunpeng Li", "Kai Li", "Bineng Zhong", "Yun Fu" ]
In this paper, we propose a residual non-local attention network for high-quality image restoration. Because they do not consider the uneven distribution of information in corrupted images, previous methods are restricted by local convolutional operations and equal treatment of spatial- and channel-wise features. To address this issue, we design local and non-local attention blocks to extract features that capture the long-range dependencies between pixels and pay more attention to the challenging parts. Specifically, we design a trunk branch and a (non-)local mask branch in each (non-)local attention block. The trunk branch is used to extract hierarchical features. Local and non-local mask branches aim to adaptively rescale these hierarchical features with mixed attentions. The local mask branch concentrates on more local structures with convolutional operations, while non-local attention focuses more on long-range dependencies in the whole feature map. Furthermore, we propose residual local and non-local attention learning to train the very deep network, which further enhances the representation ability of the network. Our proposed method can be generalized to various image restoration applications, such as image denoising, demosaicing, compression artifacts reduction, and super-resolution. Experiments demonstrate that our method obtains results comparable to or better than those of recent leading methods, both quantitatively and visually.
[ "Non-local network", "attention network", "image restoration", "residual learning" ]
https://openreview.net/pdf?id=HkeGhoA5FX
https://openreview.net/forum?id=HkeGhoA5FX
ICLR.cc/2019/Conference
2019
{ "note_id": [ "ByxCIyGegE", "B1lVIRTKAX", "H1llV0aYAX", "SJlreA6FCQ", "BylyOpaKRX", "B1gPVaaYAQ", "Bygp3i6YA7", "B1lKz511pQ", "BJeBKz333m", "BJx1ow5K2X" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544720214387, 1543261771924, 1543261736339, 1543261677125, 1543261543019, 1543261487485, 1543261108909, 1541499409350, 1541354109452, 1541150614804 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper693/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper693/Authors" ], [ "ICLR.cc/2019/Conference/Paper693/Authors" ], [ "ICLR.cc/2019/Conference/Paper693/Authors" ], [ "ICLR.cc/2019/Conference/Paper693/Authors" ], [ "ICLR.cc/2019/Conference/Paper693/Authors" ], [ "ICLR.cc/2019/Conference/Paper693/Authors" ], [ "ICLR.cc/2019/Conference/Paper693/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper693/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper693/AnonReviewer1" ] ], "structured_content_str": [ "{\"metareview\": \"1. Describe the strengths of the paper. As pointed out by the reviewers and based on your expert opinion.\\n\\n- strong qualitative and quantitative results\\n- a good ablative analysis of the proposed method.\\n \\n2. Describe the weaknesses of the paper. As pointed out by the reviewers and based on your expert opinion. Be sure to indicate which weaknesses are seen as salient for the decision (i.e., potential critical flaws), as opposed to weaknesses that the authors can likely fix in a revision.\\n\\n- clarity could be improved (and was much improved in the revision).\\n- somewhat limited novelty.\\n \\n3. Discuss any major points of contention. As raised by the authors or reviewers in the discussion, and how these might have influenced the decision. If the authors provide a rebuttal to a potential reviewer concern, it\\u2019s a good idea to acknowledge this and note whether it influenced the final decision or not. This makes sure that author responses are addressed adequately.\\n\\nNo major points of contention.\\n \\n4. If consensus was reached, say so. Otherwise, explain what the source of reviewer disagreement was and why the decision on the paper aligns with one set of reviewers or another.\\n\\nThe reviewers reached a consensus that the paper should be accepted.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"somewhat limited novelty but significant advancement of SOTA\"}", "{\"title\": \"Author response to Reviewer3 (part 3 of 3)\", \"comment\": \"Q3-8: - The proposed RNAN model is trained on a big dataset (800 images with ~2 million pixels each). Are the competing methods trained on datasets of similar size? If not, this could be a major reason for improved performance of RNAN over competing methods. At least in the appendix, RNAN and FFDNet are compared more fairly since they are trained with the same/similar data.\", \"a3_8\": \"First, for image super-resolution, EDSR and our RNAN used DIV2K 800 images for training. SRMDNF and D-DBPN used DIV2K 800 images and Flickr2K 2650 images for training, much more images than ours. Our ANAN obtains better results, while using similar or smaller training set and much less network parameters than those of EDSR and D-DBPN.\\nSecond, for image denoising, demosaicing, and compression artifacts reduction, the compared methods use smaller training size. 
It\\u2019s hard to use their officially released code to retrain their models with DIV2K 800 images, mainly for two reasons. One is that it\\u2019s very hard to preprocess the DIV2K training data with their code. Second, some of the compared methods (e.g., MemNet) would need a large-memory GPU (e.g., Nvidia P40 with 24G memory to train MemNet) and very long training time (e.g., 5 days to train MemNet). \\nHowever, to make fair comparisons, we retrain our RNAN with a smaller dataset and show the results in Table 8. As we can see, our RNAN still achieves better results, even when using less training data (e.g., for denoising, we use BSD400, while FFDNet uses BSD400+, which has 5144 more images than BSD400). It should also be noted that we train our network for only about 2 hours, so it is still far from fully trained, while the other compared methods require much longer training time. For example, MemNet trains for about 5 days, almost 60 times longer than ours.\", \"q3_9\": [\"The qualitative examples in the appendix mostly show close-ups/details of very structured regions (mostly stripy patterns). Please also show some other regions without self-similar structures.\"], \"a3_9\": \"First, our RNAN obtains pretty good results for regions with self-similar structures. This comparison also demonstrates the effectiveness of our proposed residual non-local attention network. Thanks for the suggestion; we add more qualitative results without self-similar structures in the revised paper.\", \"q3_10\": [\"# Misc\", \"Residual non-local attention learning (section 3.3) was not clear to me.\", \"The word \\\"trunk\\\" is used without definition or explanation.\", \"Fig. 2 caption is too short, please expand.\"], \"a3_10\": \"Thanks for pointing them out. We have revised the paper to make it easier to understand and follow. The word \\u201ctrunk\\u201d refers to the main body used to extract features, as distinguished from the mask branch; we show it in Fig. 2. We also expand the caption of Fig. 2.\"}", "{\"title\": \"Author response to Reviewer3 (part 2 of 3)\", \"comment\": \"Q3-3: # Clarity\\nI think the paper is not self-contained enough, since it seems to implicitly assume substantial background knowledge on attention mechanisms in CNNs.\", \"a3_3\": \"Due to the limited space, we only included key references about attention mechanisms in the previous version of the paper. Thanks for the reviewer\\u2019s suggestions; in the revised paper, we add more description of attention mechanisms.\", \"q3_4\": \"Furthermore, the introduction of the paper identifies three problems with existing CNNs that I don't necessarily fully agree with. None of these supposed problems are backed up by (experimental) evidence.\", \"a3_4\": \"For the first issue, Zhang et al. [R2] showed that a larger patch size contributes more to image denoising by making better use of the receptive field, especially when the noise level is high. In this paper, we use non-local attention to make full use of all the pixels of the inputs simultaneously. We compared with DnCNN from [R2] to show the effectiveness of our method.\\nFor the second issue, we provide analyses of previous methods, which didn\\u2019t use non-local attention for image restoration and lacked discriminative ability with respect to the specific noisy content. We also provide visual results to support our analyses. For example, to denoise kodim11 in Fig. 4, all the previous methods cannot recover the line above the boat. 
They treat the tiny line as part of the plain sky and simply remove it. However, our RNAN can keep the line and remove the noise by treating the line and the sky distinctly. \\nFor the third issue, previous methods seldom treat features distinctly channel-wise or spatial-wise. Namely, they treat all feature maps equally, which lacks flexibility in real cases. Instead, we learn non-local mixed attention to guide the network training and obtain stronger representational ability. We support this claim with the ablation study and with quantitative and qualitative comparisons to other methods.\\n[R2] Zhang, Kai, et al. \\\"Beyond a gaussian denoiser: Residual learning of deep cnn for image denoising.\\\" TIP 2017.\", \"q3_5\": \"I don't think it is sufficient to just show results superior to those of previous methods. It is also important to disentangle why the results are better. However, the presented ablation experiments are not very illuminating to me.\", \"a3_5\": \"Please refer to A3-2 for the reasons and analyses of why our results are better. On the other hand, the ablation study is used to verify the effect of each proposed component. It also serves as guidance for deciding the final network structure.\", \"q3_6\": [\"The attempts at explaining what the novel attention blocks do and why they lead to superior results are very vague to me. Maybe they are understandable in the context of related work, but I found many statements, such as the following, devoid of meaning:\", \"\\\"Without considering the uneven distribution of information in the corrupted images, [...]\\\"\", \"\\\"However, in this paper, we mainly focus on learning non-local attention to better guide feature extraction in trunk branch.\\\"\", \"\\\"We only incorporate residual non-local attention block in low-level and high-level feature space. This is mainly because a few non-local modules can well offer non-local ability to the network for image restoration.\\\"\", \"\\\"The key point in mask branch is how to grasp information of larger scope, namely larger receptive field size, so that it\\u2019s possible to obtain more sophisticated attention map.\\\"\"], \"a3_6\": \"We summarize our main contributions as three-fold, with corresponding brief explanations, at the end of the Introduction. We also try our best to revise the paper, aiming to make it easier for readers to understand.\", \"q3_7\": \"# Experiments\\n- The experimental results are the best part of the paper. However, it would've been nice to include some qualitative results in the main paper.\", \"a3_7\": \"Due to the limited space, we didn\\u2019t include qualitative results in the main body of the paper. Thanks for the reviewer\\u2019s suggestion; we add some qualitative results to the main body of the revised paper.\"}", "{\"title\": \"Author response to Reviewer3 (part 1 of 3)\", \"comment\": \"We thank Reviewer3 for his/her valuable comments. We will release the code and pretrained model reproducing the results in the paper soon. Our responses are as follows:\", \"q3_1\": \"# Results\\nThe strongest point of the paper is that the quantitative and qualitative image restoration results appear to be very good, although they seem almost a bit too good.\", \"a3_1\": \"We mainly show the effectiveness of our idea and don\\u2019t pursue higher performance. We were surprised to find that our current model has achieved much better performance than most previous methods in image restoration. 
Actually, in our later research, we further obtained better results based on the idea in this paper. Anyway, we will release the train/test code and pretrained models soon, which reproduce the exact results in this paper.\", \"q3_2\": \"# Novelty\\nI'm not sure about the novelty of the paper, but I suspect it to be rather incremental. The paper says \\\"To the best of our knowledge, this is the first time to consider residual non-local attention for image restoration problems.\\\" Does that mean non-local attention (in a very similar way) has already been used, just not in a residual fashion? If so, that would not constitute much novelty. I have to admit that I'm not familiar with the related work on attention, but I did not understand *why* the results of the proposed method are supposed to be much better than those of previous work.\", \"a3_2\": \"Non-local attention had NOT previously been used for image restoration in papers at CVPR/ICCV/ECCV/NIPS/ICML/ICLR. We are the first to investigate non-local attention for image denoising, demosaicing, compression artifact reduction, and super-resolution simultaneously. The reasons why we propose residual non-local attention learning (in Section 3.3 of the main paper) are mainly as follows:\\n(1) It is a proper way to incorporate non-local attention into the network and contributes to image restoration performance.\\n(2) It allows us to train very deep networks by preserving more low-level features, which is more suitable for image restoration. \\n(3) It allows the network to pursue better representational ability. We demonstrate its effectiveness in both the main paper and our response to Reviewer2.\", \"the_reasons_why_our_proposed_method_achieves_much_better_results_than_that_of_previous_works_are_as_follows\": \"(1) Our residual non-local attention network is an effective network structure for high-quality image restoration. No matter whether we use a small training set (e.g., Table 8 in the main paper) or DIV2K (e.g., Table 6 in the main paper), our method achieves better results than most compared ones. Take image super-resolution as an example: even though some other methods have a larger number of network parameters (e.g., EDSR and D-DBPN), our method still achieves better performance.\\n(2) Our proposed residual attention learning allows us to train a very deep network and achieve stronger representation ability. We\\u2019re the first to investigate such a deep network for image denoising, demosaicing, and compression artifacts reduction.\\n(3) Our proposed method is powerful enough to further take advantage of a larger training set (e.g., DIV2K). As we show in Table 8 of the main paper, for the small training set we train our network for only about 2 hours, so it is still far from fully trained, while the other compared methods require much longer training time. For example, MemNet (Tai et al., 2017) trains for about 5 days, almost 60 times longer than ours.\"}", "{\"title\": \"Author response to Reviewer1 (part 2 of 2)\", \"comment\": \"Q1-3: - The contribution of the non-local operation is not clear to me. For example, how does the global information (i.e., long-range dependencies between pixels) help to solve image restoration tasks such as image denoising?\", \"a1_3\": \"Zhang et al. [R2] showed that a larger patch size contributes more to image denoising by making better use of the receptive field, especially when the noise level is high. A similar observation can also be found in image super-resolution [R3]. 
Although a large patch size makes better use of a larger receptive field, previous methods are restricted by local convolutional operations and equal treatment of spatial- and channel-wise features.\\nIn this paper, we use non-local attention to make full use of all the pixels of the inputs simultaneously. Namely, all the positions are considered to obtain better attention maps. Such non-local mixed attention gives the network the power to distinguish noise from image content. For example, to denoise kodim11 in Fig. 4, all the previous methods cannot recover the line above the boat. They treat the tiny line as part of the plain sky and simply remove it. However, our RNAN can keep the line and remove the noise by treating the line and the sky distinctly with non-local mixed attention.\\n[R2] Zhang, Kai, et al. \\\"Beyond a gaussian denoiser: Residual learning of deep cnn for image denoising.\\\" TIP 2017.\\n[R3] Wang, Xintao, et al. \\\"ESRGAN: Enhanced super-resolution generative adversarial networks.\\\" ECCVW 2018.\", \"q1_4\": \"Overall, the technical contribution of the proposed method is not so high, but the proposed method is valuable and promising if we focus on the performance.\", \"a1_4\": \"As Reviewer2 said \\u2018However, up to some point all the new ConvNet designs can be seen as incremental developments of the older ones, yet they are needed for the progress of the field.\\u2019, we have to admit that too many CNN-based works focus on performance. What\\u2019s more, some works by famous companies need hundreds or thousands of high-performance GPUs, use tons of data, and take tens of days to train their networks. Although they achieve very impressive results based on existing network structures, researchers (e.g., students in most universities) without such resources cannot even run their released code. Such work consumes so many resources that it becomes infeasible for researchers with limited resources. However, such works are rarely challenged on their \\u2018novelty\\u2019, and there tend to be more and more such very-large-resource-consuming works.\\nIn contrast, in this work, we design a compact yet effective network for image restoration. We conduct extensive experiments to demonstrate the positive contribution of each component and the effectiveness of the idea. We are the first to investigate non-local attention in image restoration tasks. Although we could have built more complex network structures to achieve more \\u2018novelty\\u2019 and better performance, we didn\\u2019t. In fact, in our later works, we obtained much better results based on the idea in this paper. \\nWe want to inspire other researchers to investigate non-local attention further for the large image restoration community, even with limited resources. All the experiments can be done with one regular GPU (e.g., 12G memory). The results are also reproducible, as we will release the train/test code and pretrained models.\"}", "{\"title\": \"Author response to Reviewer1 (part 1 of 2)\", \"comment\": \"We thank Reviewer1 for his/her valuable comments. We will release the code and pretrained model reproducing the results in the paper soon. Our responses are as follows:\", \"q1_1\": [\"- Cons\", \"- It would be better to provide the state-of-the-art method[1] in the super-resolution task.\", \"- [1] Y. Zhang et al., Image Super-Resolution Using Very Deep Residual Channel Attention Networks, ECCV, 2018.\"], \"a1_1\": \"Thanks for the suggestion. 
RCAN [1] is very powerful and shows great performance gains over previous SR methods. We include RCAN [1] for comparison in the revised paper. It should be noted that RCAN mainly focuses on a much deeper network design and channel attention. Our network depth is much shallower than that of RCAN. Our RNAN mainly focuses on investigating residual non-local attention and its application to image restoration. We believe that our RNAN could also contribute to RCAN to obtain better performance.\", \"q1_2\": [\"The technical contribution of the proposed method is not high, because the proposed method seems to be just using existing methods.\"], \"a1_2\": \"Our main principle of network design is to make it \\u2018Compact yet work\\u2019. This work mainly focuses on investigating the usage of residual local and non-local attention for image restoration. Based on some existing concepts (e.g., residual block, non-local network), we conduct extensive experiments to obtain such a compact network structure and demonstrate its effectiveness. We mainly show the effectiveness of our idea and don\\u2019t pursue higher performance by further refining the network modules. We believe that more and more related works could be done to further improve such a compact network.\"}", "{\"title\": \"Author response to Reviewer2\", \"comment\": \"We thank Reviewer2 for his/her valuable comments and approval of our work. We will release the code and pretrained model reproducing the results in the paper soon. Our responses are as follows:\", \"q2_1\": \"The main weakness of the paper is the limited novelty, as the proposed design builds upon existing ideas and concepts. However, up to some point all the new ConvNet designs can be seen as incremental developments of the older ones, yet they are needed for the progress of the field.\", \"a2_1\": \"Our main principle of network design is to make it \\u2018Compact yet work\\u2019. This work mainly focuses on investigating the usage of residual local and non-local attention for image restoration. Based on some existing concepts (e.g., residual block, non-local network), we conduct extensive experiments to obtain such a compact network structure and demonstrate its effectiveness. We mainly show the effectiveness of our idea and don\\u2019t pursue higher performance by further refining the network modules. We believe that more and more related works could be done to further improve such a compact network.\", \"q2_2\": \"Inclusion of more related works, such as:\\nTimofte et al., \\\"NTIRE 2018 Challenge on Single Image Super-Resolution: Methods and Results\\\", CVPRW 2018\\nWang et al., \\\"A fully progressive approach to single-image super-resolution\\\", CVPRW 2018\\nAgustsson et al., NTIRE 2017 Challenge on Single Image Super-Resolution: Dataset and Study, CVPRW 2017\\nBlau et al., \\\"2018 PIRM Challenge on Perceptual Image Super-resolution\\\", ECCVW 2018\\nZhang et al., \\\"Image Super-Resolution Using Very Deep Residual Channel Attention Networks\\\", ECCV 2018\", \"a2_2\": \"The NTIRE and PIRM challenges and recent related works have contributed greatly to the image restoration community. 
We have included those valuable works and given corresponding analyses in the revised paper.\", \"q2_3\": \"Why not use dilated convolutions instead of, or complementary to, the mask branch or other design choices from this paper?\", \"a2_3\": \"First of all, we investigated the usage of dilated convolutions in the mask branch before and found that it didn\\u2019t make an obvious difference. Dilated convolution may be a good choice to obtain spatial attention, as done in BAM [R1]. In this paper, however, we aim to obtain non-local mixed attention, including channel and spatial attention simultaneously.\\n[R1] Park, Jongchan, et al. \\\"BAM: bottleneck attention module.\\\" BMVC 2018.\\nFurthermore, we provide more experiments using dilated convolutions in the mask branch to demonstrate the above claims. Here we give a brief introduction to the experiments. As dilated convolutions are good at obtaining a larger receptive field, we remove all the non-local blocks in our network. We divide the experiments into 4 cases.\", \"case_1\": \"we replace the mask branch with two dilated convolutions and remove our proposed residual attention learning (in Section 3.3 of the main paper) strategy. Namely, we use Eq. (7) for attention learning.\", \"case_2\": \"we replace the mask branch with two dilated convolutions and keep our proposed residual attention learning (in Section 3.3 of the main paper) strategy. Namely, we use Eq. (8) for attention learning.\", \"case_3\": \"we add two dilated convolutions in the previous mask branch and remove our proposed residual attention learning (in Section 3.3 of the main paper) strategy. Namely, we use Eq. (7) for attention learning.\", \"case_4\": \"we add two dilated convolutions in the previous mask branch and keep our proposed residual attention learning (in Section 3.3 of the main paper) strategy. Namely, we use Eq. (8) for attention learning.\\nWe test the performance on Set5 for color image denoising with noise level=30. To save training time, we set the patch size as 48 and the block number as 7. The performance comparisons (in terms of PSNR (dB) within 200 epochs) are as follows:\\nCase-1: 31.486 dB; Case-2: 31.508 dB; Case-3: 31.535 dB; Case-4: 31.552 dB; RNAN: 31.602 dB.\\nComparing Case-1 and -2, or Case-3 and -4, we can see that our proposed residual attention learning is more suitable for image restoration and contributes to the performance.\\nComparing Case-2 and RNAN, we find that mixed attention works better than simple spatial attention.\\nComparing Case-4 and RNAN, we find that the non-local block, by capturing long-range dependencies between pixels, helps to learn better attention than dilated convolutions do.\"}", "{\"title\": \"excellent application oriented paper; new state-of-the-art results; yet limited novelty\", \"review\": \"The authors propose a residual non-local attention net (RNAN) which combines local and non-local blocks to form a deep CNN architecture with application to image restoration.\\n\\nThe paper has a compact description, provides sufficient details, and, including the appendix, has an excellent experimental validation.\", \"the_proposed_approach_provides_top_results_on_several_image_restoration_tasks\": \"image denoising, demosaicing, compression artifacts reduction, and single image super-resolution.\\n\\nThe main weakness of the paper is the limited novelty, as the proposed design builds upon existing ideas and concepts. 
However, up to some point all the new ConvNet designs can be seen as incremental developments of the older ones, yet they are needed for the progress of the field.\", \"i_would_suggest_to_the_authors_the_inclusion_of_related_works_such_as\": \"Timofte et al., \\\"NTIRE 2018 Challenge on Single Image Super-Resolution: Methods and Results\\\", CVPRW 2018\\nWang et al., \\\"A fully progressive approach to single-image super-resolution\\\", CVPRW 2018\", \"note_that_div2k_dataset_was_introduced_in\": \"Agustsson et al., NTIRE 2017 Challenge on Single Image Super-Resolution: Dataset and Study, CVPRW 2017\\n\\nalso, the more recent related works:\\nBlau et al., \\\"2018 PIRM Challenge on Perceptual Image Super-resolution\\\", ECCVW 2018\\nZhang et al., \\\"Image Super-Resolution Using Very Deep Residual Channel Attention Networks\\\", ECCV 2018\\n\\nAlso, I would like a response from the authors on the following:\\nWhy not use dilated convolutions instead of, or complementary to, the mask branch or other design choices from this paper?\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"excellent results, but unclear novelty and lacking explanations\", \"review\": \"The paper proposes a convolutional neural network architecture that includes blocks for local and non-local attention mechanisms, which are claimed to be responsible for achieving excellent results in four image restoration applications.\\n\\n\\n# Results\\nThe strongest point of the paper is that the quantitative and qualitative image restoration results appear to be very good, although they seem almost a bit too good.\\n\\n\\n# Novelty\\nI'm not sure about the novelty of the paper, but I suspect it to be rather incremental. The paper says \\\"To the best of our knowledge, this is the first time to consider residual non-local attention for image restoration problems.\\\" Does that mean non-local attention (in a very similar way) has already been used, just not in a residual fashion? If so, that would not constitute much novelty. I have to admit that I'm not familiar with the related work on attention, but I did not understand *why* the results of the proposed method are supposed to be much better than those of previous work.\\n\\n\\n# Clarity\\nI think the paper is not self-contained enough, since it seems to implicitly assume substantial background knowledge on attention mechanisms in CNNs. \\n\\nFurthermore, the introduction of the paper identifies three problems with existing CNNs that I don't necessarily fully agree with. None of these supposed problems are backed up by (experimental) evidence.\\n\\nI don't think it is sufficient to just show results superior to those of previous methods. It is also important to disentangle why the results are better. However, the presented ablation experiments are not very illuminating to me.\\n\\nThe attempts at explaining what the novel attention blocks do and why they lead to superior results are very vague to me. 
Maybe they are understandable in the context of related work, but I found many statements, such as the following, devoid of meaning:\\n- \\\"Without considering the uneven distribution of information in the corrupted images, [...]\\\"\\n- \\\"However, in this paper, we mainly focus on learning non-local attention to better guide feature extraction in trunk branch.\\\"\\n- \\\"We only incorporate residual non-local attention block in low-level and high-level feature space. This is mainly because a few non-local modules can well offer non-local ability to the network for image restoration.\\\"\\n- \\\"The key point in mask branch is how to grasp information of larger scope, namely larger receptive field size, so that it\\u2019s possible to obtain more sophisticated attention map.\\\"\\n\\n\\n# Experiments\\n- The experimental results are the best part of the paper. However, it would've been nice to include some qualitative results in the main paper.\\n- The proposed RNAN model is trained on a big dataset (800 images with ~2 million pixels each). Are the competing methods trained on datasets of similar size? If not, this could be a major reason for improved performance of RNAN over competing methods. At least in the appendix, RNAN and FFDNet are compared more fairly since they are trained with the same/similar data.\\n- The qualitative examples in the appendix mostly show close-ups/details of very structured regions (mostly stripy patterns). Please also show some other regions without self-similar structures.\\n\\n\\n# Misc\\n- Residual non-local attention learning (section 3.3) was not clear to me.\\n- The word \\\"trunk\\\" is used without definition or explanation.\\n- Fig. 2 caption is too short, please expand.\\n\\n# Update (2018-11-29)\\nGiven the substantial author feedback, I'm willing to raise my score.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Technical contribution is not high, but good performing approach on several image restoration tasks\", \"review\": [\"Summary\", \"This paper proposes a residual non-local attention network for image restoration. Specifically, the proposed method has local and non-local attention blocks to extract features which capture long-range dependencies. The local and non-local blocks consist of trunk branch and (non-) local mask branch. The proposed method is evaluated on image denoising, demosaicing, compression artifacts reduction, and super-resolution.\", \"Pros\", \"The proposed method shows better performance than existing image restoration methods.\", \"The effect of each proposed technique such as the mask branch and the non-local block is appropriately evaluated.\", \"Cons\", \"It would be better to provide the state-of-the-art method[1] in the super-resolution task.\", \"[1] Y. Zhang et al., Image Super-Resolution Using Very Deep Residual Channel Attention Networks, ECCV, 2018.\", \"The technical contribution of the proposed method is not high, because the proposed method seems to be just using existing methods.\", \"The contribution of the non-local operation is not clear to me. 
For example, how does the global information (i.e., long-range dependencies between pixels) help to solve image restoration tasks such as image denoising?\", \"Overall, the technical contribution of the proposed method is not so high, but the proposed method is valuable and promising if we focus on the performance.\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}" ] }
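The rebuttals above repeatedly refer to trunk features being rescaled by a mask branch, with or without an identity skip (the Eq. (7) vs. Eq. (8) ablation in A2-3). A minimal sketch of that general residual attention pattern follows; the sigmoid gating, branch depths, and names are illustrative assumptions, not the authors' released implementation:

```python
# Sketch of a residual attention block: trunk features rescaled by a mask
# branch, optionally with an identity skip ("residual attention learning").
import torch
import torch.nn as nn

class ResidualAttentionBlock(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        def conv():
            return nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.trunk = nn.Sequential(conv(), nn.ReLU(inplace=True), conv())  # feature branch
        self.mask = nn.Sequential(conv(), nn.ReLU(inplace=True), conv())   # attention branch

    def forward(self, x, residual=True):
        attended = self.trunk(x) * torch.sigmoid(self.mask(x))  # mixed attention rescaling
        # residual=True keeps the identity skip (the Eq. (8)-style variant in the
        # ablation); residual=False multiplies the trunk directly (Eq. (7)-style).
        return x + attended if residual else attended

x = torch.randn(2, 64, 48, 48)                       # a batch of 48x48 feature maps
assert ResidualAttentionBlock()(x).shape == x.shape  # shape-preserving block
```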
BJgGhiR5KX
Learning Cross-Lingual Sentence Representations via a Multi-task Dual-Encoder Model
[ "Muthuraman Chidambaram", "Yinfei Yang", "Daniel Cer", "Steve Yuan", "Yun-Hsuan Sung", "Brian Strope", "Ray Kurzweil" ]
A significant roadblock in multilingual neural language modeling is the lack of labeled non-English data. One potential method for overcoming this issue is learning cross-lingual text representations that can be used to transfer the performance from training on English tasks to non-English tasks, despite little to no task-specific non-English data. In this paper, we explore a natural setup for learning cross-lingual sentence representations: the dual-encoder. We provide a comprehensive evaluation of our cross-lingual representations on a number of monolingual, cross-lingual, and zero-shot/few-shot learning tasks, and also give an analysis of different learned cross-lingual embedding spaces.
[ "sentence", "embeddings", "zero-shot", "multilingual", "multi-task", "cross-lingual" ]
https://openreview.net/pdf?id=BJgGhiR5KX
https://openreview.net/forum?id=BJgGhiR5KX
ICLR.cc/2019/Conference
2019
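The dual-encoder setup this abstract describes (and which a reviewer below probes, asking whether Equation-1 is a softmax over dot products of encodings) is commonly trained with an in-batch ranking loss. A minimal sketch under that assumption follows; the random tensors stand in for encoder outputs, and the batch and embedding sizes are placeholders rather than the paper's configuration:

```python
# Sketch of a dual-encoder ranking loss with in-batch negatives: scores are
# dot products between source and target sentence embeddings, and a softmax
# treats row i's paired target (the diagonal) as the positive class.
import torch
import torch.nn.functional as F

def ranking_loss(src_emb, tgt_emb):
    # src_emb, tgt_emb: [batch, dim]; row i of each side forms a positive pair.
    scores = src_emb @ tgt_emb.t()            # [batch, batch] similarity matrix
    targets = torch.arange(scores.size(0))    # positives sit on the diagonal
    return F.cross_entropy(scores, targets)   # log-softmax ranking objective

src = torch.randn(8, 512, requires_grad=True)  # stand-in source encodings
tgt = torch.randn(8, 512, requires_grad=True)  # stand-in target encodings
ranking_loss(src, tgt).backward()              # differentiable end to end
```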
{ "note_id": [ "HkxCyPD-gN", "SkeILk8t07", "H1xSxOazaQ", "rJe1bDpfp7", "SkeB-rpzTX", "r1erqlaM6m", "SJlXDD81pX", "HkxR4Ojt3m", "BkgYgVIIh7" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544808165549, 1543229261667, 1541752812877, 1541752567232, 1541752061168, 1541750925355, 1541527387449, 1541154870326, 1540936688639 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper692/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper692/Authors" ], [ "ICLR.cc/2019/Conference/Paper692/Authors" ], [ "ICLR.cc/2019/Conference/Paper692/Authors" ], [ "ICLR.cc/2019/Conference/Paper692/Authors" ], [ "ICLR.cc/2019/Conference/Paper692/Authors" ], [ "ICLR.cc/2019/Conference/Paper692/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper692/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper692/AnonReviewer2" ] ], "structured_content_str": [ "{\"metareview\": [\"Pros:\", \"A new framework for learning sentence representations\", \"Solid experiments and analyses\", \"En-Zh / XNLI dataset was added, addressing the comment that no distant languages were considered; also ablation tests\"], \"cons\": [\"The considered components are not novel, and their combination is straightforward\", \"The set of downstream tasks is not very diverse (See R2)\", \"Only high resource languages are considered (interesting to see it applied to real low resource languages)\", \"All reviewers agree that there is no modeling contribution. Overall, it is a solid paper but I do not believe that the contribution is sufficient.\"], \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"interesting but not very novel framework\"}", "{\"title\": \"Secondary revision with XNLI evaluation and ablation tests\", \"comment\": \"Hi all, we have updated our paper to include evaluations on XNLI in the main body, as per the suggestions of the reviewers. We have also included further ablation tests in the supplementary material. Thank you all again for your comments!\"}", "{\"title\": \"Initial revision with clarifications\", \"comment\": [\"Thanks to all of the reviewers for their helpful comments. We have updated our paper with an initial revision that handles the following main requests:\", \"Simplifies tables and language when possible.\", \"Further clarifies datasets and training procedures.\", \"Adds more detail concerning the systems being compared to.\", \"We will be looking to make future revisions that also include more evaluations, particularly focusing on adding XNLI with evaluations on English/Non-European language pairs.\"]}", "{\"title\": \"Addressing comments and clarifications\", \"comment\": \"Thank you for your comments and thoughtful questions. We address each comment individually below:\\n\\nAddressing Major Comments\\n(I) Novelty. While we agree that we have not introduced a new architectural component in our cross-lingual multi-task models, we believe that our combination of current SOTA language modeling components and the accompanying analysis still raises interesting questions and demonstrates strong enough results to motivate new research.\\n\\n(II) Evaluations. Thank you - we also plan to add further evaluations (i.e. comparing to more lightweight encoder architectures such as the Deep Averaging Network of Iyyer et al. 
(2015)) to the supplementary material in the next revision of our paper.\\n\\n(III) Language Pairs. Our main reason for choosing English-Spanish, English-French, and English-German language pairs was the fact that these language pairs had a number of pre-existing evaluations available.\\n\\n(IV) Datasets. \\n(a) We are in the process of experimenting with further evaluations (paraphrases, summaries) in more languages, and hope to add these evaluations to future revisions of our paper. \\n(b) XNLI was not available at the time we were preparing the initial draft of this paper, but we plan to include evaluations on XNLI in our next revision. We do have some preliminary XNLI results using our trained English-French model, which shows an accuracy of 69% on English and 64.5% on French.\\n(c) We agree that translated STS is not intended to be a surefire evaluation of STS performance in target languages, but evaluating sentence representations on translated datasets has been considered before (Eriguchi et al., 2018). Additionally, the Google NMT architecture uses an encoder-decoder structure as opposed to our dual-encoder architecture, so we felt there were sufficient enough differences between our approaches that evaluating on translated data may still provide some insight into STS performance in non-English languages. \\n\\n(V) Embedding Space Analysis. We absolutely agree that there are a number of interesting and different analyses that can be done on the learned sentence embedding spaces; our reason for using the eigen-similarity analysis was to extend the previous work done for word embeddings. We did consider other approaches, such as aligning the sentence embeddings, but we ultimately felt that a proper treatment of the many different analyses techniques that are possible for the embedding spaces would be outside of the scope of this work. We found your suggestion about performing the eigen-similarity analysis without translation data and only the monolingual tasks to be interesting, and an oversight on our part for not including in the initial version of our paper. We plan to include this evaluation in the next revision of our paper.\\n\\nAddressing Minor Comments\\n*All table numbers should now read correctly.\\n\\n*We have greatly simplified Table 2 as suggested.\"}", "{\"title\": \"Clarifying datasets, language, and claims\", \"comment\": \"Thank you for your many comments and suggestions - we have addressed each individual comment below:\\n\\nAddressing Major Cons\\n1. Monolingual/Cross-lingual Claims. Our intention with including the zero-shot SNLI and Amazon sentiment classification tasks was to show that a cross-lingual model could leverage English data to perform well on target language tasks even with very little target language data for the task. We did not mean to claim that our cross-lingual models would perform better than optimized, monolingual target language models; we apologize for not being clearer about this. \\n\\n2. Dataset References. Thank you for bringing this oversight to our attention, we have updated the discussion of table 1 with a reference to SentEval and a pointer to the supplementary material, which has brief descriptions of the different tasks along with their accompanying references.\\n\\n3. Usage of Existing Approaches. The purpose of our work in this paper was to investigate how different ideas from SOTA models could be combined in a way that could lead to effective cross-lingual representations that are useful in many scenarios. 
While we agree that we do not introduce a new novel architectural component ourselves, we believe that our experiments and accompanying analysis are sufficiently interesting for motivating new research in cross-lingual and multi-lingual sentence representations (i.e. eigen-similarity-based regularization).\\n\\nAddressing Comments\\n1. Our encoding architecture is inspired by the work of Guo et al. (2018) as well as Cer et al. (2018) and Logeswaran & Lee (2018) (all cited in introduction). We consider our proposed training setup to build upon the previous work by introducing multiple tasks across languages that are connected via translation.\\n\\n2. As mentioned in section 2, the only task that is specific to the source language (English) is SNLI, and we have made this more clear in the revision.\\n\\n3. While the DAN model of Iyyer et al. (2015) also uses a softmax, the DAN paper does not mention the dual-encoder-style response ranking approach with dot products to the best of our understanding.\\n\\n4. Yes, symmetric tasks was intended to mean that we used the same corpora across languages (i.e. Wikipedia for source and target languages). We have reworded that line in the revision to make it clearer.\\n\\n5. This is correct, we do not initialize using pre-trained word embeddings and instead learn all embeddings from scratch. We noticed that using pre-trained embeddings made practically no difference in final performance, which we now make clear in the section discussion word and character embeddings.\\n\\n6. In bringing up computational efficiency in our discussion of word and character n-gram embeddings, we meant to draw attention to the fact that we pool embeddings. Choosing to not pool embeddings would have greatly increased the parameter count of our model.\\n\\n7. The primary reason we used a relatively shallow transformer model was to keep the number of parameters in our cross-lingual, multi-task model in the same ballpark as other SOTA monolingual models. Additionally, we did not notice a significant gain from training with 4, 5, or 6 transformer layers instead of 3. We note that the Universal Sentence Encoder model shown in table 1 uses 6 transformer layers, and our cross-lingual models have comparable performance to it. That being said, we are also trying to add results for much deeper transformer networks (as in Al-Rfou et al., 2018) in the next revision of our paper.\\n\\n8. Our apologies for not making the notion of convergence more clear; convergence is determined to be 30M steps for all cases (as that was also when training stabilized). This has been cleared up in the revision.\\n\\n9. We also apologize for not making all of our dataset splits more clear; we have updated the paper with this information.\\n\\n10. The lack of dataset detail was a large oversight on our part - we have made all of our monolingual English evaluations clear and added the necessary citations in our revision.\\n\\nAddressing Minor Issues\\n1. We have gone back over the paper and tried to simplify/remove sentences when it is possible to do so.\\n\\n2. We have made sure the use of \\u201ctarget language\\u201d is consistent throughout the paper, so that there is no confusion between terms.\\n\\n3. 
Similarly, we have also tried to clear up the names used for referencing our different model types so that they are as consistent as possible.\"}", "{\"title\": \"Addressing clarifications and currently working on new evaluations\", \"comment\": \"Thank you for taking the time to write such detailed feedback about our paper. We address each of your concerns below:\\n\\nAddressing Main Concerns\\n*Ablation Studies. We think this concern is spot on, and plan to add a much more detailed analysis of how each monolingual task contributes to our cross-lingual multi-task setup to the next revision of our paper (we are trying these experiments now). Currently, we do have experiments where we remove mirrored corpora in the supplementary material of our paper. When we wrote the initial version of our paper, we chose to prioritize many different evaluations of our cross-lingual models over more analysis of the training setup due to time and page constraints, and we acknowledge that this was an oversight on our part.\\n\\n*Encoder Architecture. Our choice of the transformer architecture for our experiments was based on many recent SOTA results in different language modeling tasks coming from the use of transformer architectures (Al Rfou et al., 2018; Cer et al., 2018), but we agree that it is possible for other architectures to potentially perform better in our setup. We did run some initial experiments with LSTMs and Deep Averaging Networks (Iyyer et al., 2015) and found that they did not outperform transformer models. Additionally, the main inhibiting factor in using different encoding architectures was the multiplicative effect that it would have had on our model evaluations/analysis, which we felt would have been infeasible for conference submission. \\n\\n*Distant Language Pairs. We have also been interested in extending our experiments to more language pairs, and plan to add experiments with English/Non-European language pairs to the next revision of our paper as well. We focused on English-French, English-Spanish, and English-German language pairs in our first draft due to having the highest number of standard evaluations available for these language pairs.\\n\\n*SOTA STS Systems. We have updated the paper to better explain how ECNU and BIT work (details are now in the supplementary material), which we hope clarifies the complexity of these systems relative to our own.\\n\\n*Comparison with InferSent. The comparison between our cross-lingual models and InferSent (Conneau et al., 2017) is actually done in Table 1, and we apologize for not making that more clear in our discussion. Adding cross-lingual multi-task training led to better performance in TREC, SUBJ, and SST, and worse performance in MR, CR, and MPQA.\\n\\nAddressing Minor Concerns\\n*Our apologies for not making the hard negative similarity computation more clear - we had some additional information about it in the supplementary material, and we have now moved this info to the main body of the paper.\\n\\n*Given the performance of the no-SNLI models in the supplementary material, we did not expect MultiNLI data to make a significant impact on cross-lingual model performance on downstream tasks. \\n\\n*We apologize for not clarifying how our data splits were done, that was another oversight on our part. 
As a development set, we took a 10% slice of each of our datasets - this information has now been made clear in the body of the paper.\\n\\n*We have clarified all of the task abbreviations and added the relevant citations for each task (all in supplementary material), to make table 1 more understandable.\\n\\n*In the next revision of our paper, we plan to include performance on XNLI (once all of these experiments have finished running). We do have some initial experiment using the trained English-French model and fine-tuning using MultiNLI data with frozen encoders. Our initial XNLI evaluation shows an accuracy of 69% on English and 64.5% on French.\"}", "{\"title\": \"A new framework for cross-lingual sentence representation which is an interesting mix of standard building blocks, but more convincing experiments are needed to appreciate the main contributions.\", \"review\": \"This paper proposes a novel cross-lingual multi-tasking framework based on a dual-encoder model that can learn cross-lingual sentence representations which are useful in monolingual tasks and cross-lingual tasks for both languages involved in the training, as observed on the experiments for three language pairs. The main idea of the approach is to model all tasks as input-response ranking tasks and introduce cross-lingual representation tying through the translation ranking task, introduced by Guo et al. (2018). All components of the framework are quite standard and deja-vu, but I like the paper in general, and the results seem quite encouraging. I have several comments on how to further strengthen the paper and improve the presentation of the main findings.\\n\\nThe proposed framework does not offer any substantial modeling contribution (i.e., all major components are based on SOTA models), but the framework is still quite interesting as a mixture of these SOTA components. I believe that some additional experiments would make the main contributions clearer and would also provide additional insights into the main properties of the proposed framework: 1) cross-linguality and 2) multi-tasking. \\n\\n*Most of all, I am surprised not to see any ablation studies. For instance, what happens if we remove one of the two monolingual tasks in each language? How does that reduced model compare to the full model? Which monolingual task is more beneficial for the final performance in downstream tasks? Can we think of adding another monolingual task to boost performance further? I think that this sort of experiment would be more beneficial for the paper than a pretty long analysis from Section 5 (this analysis is still valid, but should be shortened substantially). Evaluating only multi-tasking without any cross-lingual training would also be very beneficial to recognise the extent of improvement achieved by adding cross-linguality to the model.\\n\\n*How much does the proposed architecture depend on the choice of the encoding model for the function g? Have the authors experimented with other (recent and (near-)SOTA) encoding models? I would like to see a comparative analysis of this 'hyper-parameter'.\\n\\n*I would like to see more experiments on more distant language pairs. This would make the paper even more interesting imho. I am also curious whether there would be a drop in performance reported conditioned on the distance/proximity between two languages in a language pair.\\n\\n*I would like to see a more detailed description of the two best performing STS systems (ECNU and BIT). 
In what respect are these systems state-of-the-art feature engineered and mixed? I am not sure what this means without additional context for the claim and description.\\n\\n*How does the monolingual English STS model trained with the cross-lingual multi-task framework compare to the work of Conneau et al. (EMNLP 2017), which also used SNLI as the task on which to learn universal sentence representations? This would be a good experiment imho as it would show how much we gain from cross-lingual training and multi-tasking.\", \"minor\": \"*Page 3: Could you add a short footnote discussing how hard-negatives for the translation ranking task are selected? How do you compute similarity here?\\n*Do you expect performance to improve further by training on MultiNLI instead of SNLI (or combining the two datasets)?\\n*\\\"All hyperparameters are tuned based on preliminary experiments on a development set.\\\" -> What is used as the development set? More details needed.\\n*\\\"Finally, as an additional training heuristic, we multiply the gradients to the word and character embeddings by a factor of 100.\\\" -> How is the value for the embedding gradient multiplier determined? Is there an automatic procedure to fine-tune this hyper-parameter or has this been done in a completely empirical way?\\n*Table 1: please define the task abbreviations before showing them in the table. It is not clear what each task is by relying only on the abbreviation.\\n*This dataset was not available at the time of the submission, but for the revision it would make sense to also evaluate on the new XNLI dataset of Conneau et al. (EMNLP 2018) for multilingual NLI experiments.\\n\\n(After the first revision) I have raised the score after the very detailed author response (thanks for that!), but this is also conditioned on the authors making the actual revisions promised in their response. I am still quite interested to check how well the method works in a setup with distant language pairs.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"It is an interesting paper which explores a multi-task model for simultaneously improving both monolingual and cross-lingual tasks. However, due to missing information and a lack of clarity, it is hard to accept at this point in time.\", \"review\": \"Summary\\n----------\\nIn this paper, the authors explore learning of cross-lingual sentence representations with their proposed dual-encoder model. Evaluations conducted with the learned cross-lingual representations on several tasks such as monolingual, cross-lingual, and zero-shot/few-shot learning show the effectiveness of the proposed approach. Also, they provide a graph-based analysis of the learned representations.\", \"three_positive_and_negative_points_of_the_paper_is_presented_as_follows\": \"pros\\n------\\n\\n1. Cross-lingual representation learning by combining ideas from learning sentence representations and cross-language retrieval.\\n2. Multi-task setup of different tasks for improving cross-language and monolingual tasks.\\n3. Lots of experimental results.\\n\\n\\ncons\\n-----\\n1. The claim that it works for monolingual tasks in the target language, such as zero-shot learning for sentiment classification and NLI. Also, details of the cross-lingual STS and eigen-similarity metrics are hard to retrieve from the paper.\\n\\n2. Many terms and datasets are used without being referenced.\\n\\n3. 
Usage of existing approaches to build a single model for many tasks.\\n\\ncomments to authors\\n-----------------------\\n\\n\\n1. The dual-encoder architecture is inspired by Guo et al. (2018), which encodes the source and target sentences with a deep neural network. However, it is here extended into a multi-task dual-encoder model.\\n\\n2. What are the tasks that are very specific to the source language? \\n\\n3. Equation 1 is basically a logistic regression or softmax over \\\\phi. However, is \\\\phi a dot product of encodings, similar to Deep Averaging Networks (Iyyer et al., 2015)?\\n\\n4. In Section 2, it is unclear what symmetric tasks means. Do they use parallel corpora?\\n\\n5. In Section 2.1, it is mentioned that word embeddings are learned end-to-end. Does this mean they are not initialized with pretrained ones?\\n\\n6. In Section 2.1, it is mentioned that word and character embeddings are learned in a computationally efficient way; what does this mean? Do they use fewer parameters, or are they parallelizable?\\n\\n7. Why only three layers of transformer? It is understood that 6-12 layers are required for effective encoding of sentences (Al-Rfou et al., 2018).\\n\\n8. In the model configuration, how is convergence decided? Is there any stopping criterion?\\n\\n9. What are the splits for the Reddit and Wikipedia datasets?\\n\\n10. In Table 1, what do MR, CR, etc. refer to? They are never mentioned before. Do all tasks use only English?\\n\\n\\n\\nOverall it is an interesting paper which explores a multi-task model for simultaneously improving both monolingual and cross-lingual tasks. However, due to missing information and a lack of clarity in some details, it is hard to accept at this point in time.\\n\\nMinor issues\\n--------------\\n\\n1. Sentences are very long and not easily comprehensible.\\n2. Target language and response are used without referencing each other. Better to use one of them for better tracking.\\n3. No common notation for the model. It has been referenced with different names (cross-lingual multi-task model, multi-task dual-encoder model).\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Limited novelty, strong evaluation, other languages and tasks?\", \"review\": \"The paper presents an intuitive architecture for learning cross-lingual sentence representations. I see weaknesses and strengths:\\n\\n(i) The approach is not very novel. Using parallel data and similarity training (Siamese, adversarial, etc.) to facilitate transfer has been done before; see [0] and references therein. Sharing encoder parameters across very different tasks is also pretty standard by now, going back to [1] or so. \\n(ii) The evaluation is strong, with a nice combination of standard benchmark evaluation, downstream evaluation, and analysis. \\n(iii) While the paper is on cross-lingual transfer, the authors only experiment with a small set of high-resource languages, where transfer is relatively easy. \\n(iv) I think the datasets used for evaluation are somewhat suboptimal, e.g.: \\na) Cross-lingual retrieval and multi-lingual STS are very similar tasks. Other tasks that use sentence representations and for which multilingual corpora are available include discourse parsing, support identification for QA, extractive summarization, stance detection, etc. 
\\nb) Instead of relying on Agic and Schluter (2017), why don\u2019t the authors use the XNLI corpus [2]?\\nc) Translating the English STS data using Google NMT to evaluate an architecture that looks a lot like Google NMT sounds suspicious. \\n(v) While I found the experiment with eigen-similarity a nice contribution, there are a lot of alternatives: seeing whether there is a linear transformation from one language to another (using Procrustes, for example), seeing whether the sentence graphs can be aligned using GANs based only on JSD divergence, looking at the geometry of these representations, etc. Did you think about doing the same analysis on the representations learned without the translation task, but using target language training data for the tasks instead? The question would be whether there exists a linear transformation from the sentence graph learned for English while doing NLI, to the sentence graph learned for German while doing NLI.\", \"minor_comments\": [\"\u201cTable 3\u201d on page 5 should be Table 2.\", \"Table 2 seems unnecessary. Since the results are not interesting on their own, but simply a premise in the motivating argument, I would present these results in-text.\", \"[0] http://aclweb.org/anthology/W18-3023\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}" ] }
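For concreteness: the reviews in the record above discuss Equation 1 as a softmax over dot-product scores phi between input and response encodings. A minimal sketch of such an in-batch-negatives ranking loss follows; it is our own illustration with hypothetical names, not the authors' code.

```python
import numpy as np

def dual_encoder_ranking_loss(src_enc, tgt_enc):
    """In-batch softmax ranking loss for a dual encoder.

    src_enc, tgt_enc: (batch, dim) sentence encodings; row i of tgt_enc is
    the true response/translation for row i of src_enc, and the remaining
    rows in the batch serve as negatives.
    """
    scores = src_enc @ tgt_enc.T                 # phi = dot products, (batch, batch)
    scores -= scores.max(axis=1, keepdims=True)  # stabilize the softmax
    log_probs = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))          # -log P(correct response | input)
```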
ryGfnoC5KQ
Kernel RNN Learning (KeRNL)
[ "Christopher Roth", "Ingmar Kanitscheider", "Ila Fiete" ]
We describe Kernel RNN Learning (KeRNL), a reduced-rank, temporal eligibility trace-based approximation to backpropagation through time (BPTT) for training recurrent neural networks (RNNs) that gives competitive performance to BPTT on long time-dependence tasks. The approximation replaces a rank-4 gradient learning tensor, which describes how past hidden unit activations affect the current state, by a simple reduced-rank product of a sensitivity weight and a temporal eligibility trace. In this structured approximation motivated by node perturbation, the sensitivity weights and eligibility kernel time scales are themselves learned by applying perturbations. The rule represents another step toward biologically plausible or neurally inspired ML, with lower complexity in terms of relaxed architectural requirements (no symmetric return weights), a smaller memory demand (no unfolding and storage of states over time), and a shorter feedback time.
[ "RNNs", "Biologically plausible learning rules", "Algorithm", "Neural Networks", "Supervised Learning" ]
https://openreview.net/pdf?id=ryGfnoC5KQ
https://openreview.net/forum?id=ryGfnoC5KQ
ICLR.cc/2019/Conference
2019
{ "note_id": [ "HJej2EhxeN", "SJxjeIlqCm", "BJeu0VlqAX", "ByeT17xqCm", "B1lH3_Ian7", "B1e45z9n2Q", "B1lQ16dO2Q" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544762546923, 1543271923501, 1543271631783, 1543271140723, 1541396652708, 1541345931903, 1541078235351 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper691/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper691/Authors" ], [ "ICLR.cc/2019/Conference/Paper691/Authors" ], [ "ICLR.cc/2019/Conference/Paper691/Authors" ], [ "ICLR.cc/2019/Conference/Paper691/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper691/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper691/AnonReviewer2" ] ], "structured_content_str": [ "{\"metareview\": \"this submission follows on a line of work on online learning of a recurrent net, which is an important problem both in theory and in practice. it would have been better to see even more realistic experiments, but already with the set of experiments the authors have conducted the merit of the proposed approach shines.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"accept\"}", "{\"title\": \"Response to Reviewer 2\", \"comment\": \"Thank you for summarizing the positives features of KeRL while offering helpful critiques.\\n\\nWe may add that KeRL might also work in places that BPTT wouldn\\u2019t: Imagine a scenario where the network is tasked with processing large amounts of real-time data quickly, and there is a speed/accuracy tradeoff. An algorithm like KeRL, which sacrifices accuracy for speed, should outperform BPTT.\\n\\nWith respect to the specific critiques, we have fixed the tables and added an experiment on the time complexity of KeRL vs. BPTT. Finally, we have extensively edited sections 3-4 so for clarity and to emphasize the more important parts of the derivation.\"}", "{\"title\": \"Response to Reviewer 1\", \"comment\": \"Thank you for the detailed review comments and criticisms, which were extremely helpful in improving our paper.\\n\\nUnderstanding the Ansatz is indeed key to understanding KeRL, and indeed giving various implementational details at the beginning obscured where the rule comes from. We now begin by first more clearly stating the learning rule, which actually does have a very simple form (this simple form was previously obscured by poor notation; we have now simplified the notation and also explain why the rule is simple), and then immediately showing the Ansatz that leads from the gradient-descent chain rule computation to KeRL. We have extensively edited the presentation of the Ansatz for clarity, and moved details on how to train the feedback weights and inverse-timescales to a different section.\\n\\nAs for the detailed suggestions below, we have corrected the noted typos, clarified or added definitions where suggested, and replaced non-standard terminology. We think that our paper should be substantially clearer and the central idea easier to follow after implementing these suggestions.\", \"see_below_for_more_detailed_comments\": \"'Abstract & Section 1: Is \\\"sensitivity tensor\\\" or \\\"credit assignment tensor\\\" common term? Because I've never heard them before. Consider defining them before you discuss it, and using consistent jargon. 
Later in Section 2 you seem to call this the \\\"RTRL tensor\\\" (whose meaning I can infer).'\\n \\nReplaced \\u201ccredit assignment tensor\\u201d and \\u201cRTRL tensor\\u201d (both used interchangeably before) with the more clearly defined single term, \\u201csensitivity tensor\\u201d, which has some precedent in the literature (please see paper text). \\n\\n'Section 2: Gradient vanishing isn't so much a problem in itself, but a symptom that the sensitivity of the network's output to the action of some neuron in the past is very low. The gradient is just relaying this information, so I don't really see vanishing gradients as the problem to overcome, but rather low sensitivity on past activations.'\\n\\n\\nRemoved this comment about gradient vanishing. \\n\\n\\n'Section 3: Did you mean to write (W^out h^t + b^out) instead of (W^out h + b^out)^t ?\\n'\\n\\nCorrected typo.\\n\\n' \\\"[equation] represents the gradient of the cost with respect to the current hidden state\\\". The RHS of this equation makes no sense to me. Not only does this not depend on the nonlinearity in any way, it doesn't include any consideration of future outputs on which the current h surely depends.'\\n\\n \\nCorrected. \\n\\n'It would make the paper much more pleasant to read if you gave your derivation of the learning rule before you stated it in gory detail. It feels almost completely arbitrary reading it first without any justification. This might be fine if it were compact and elegant, but it's not.'\\n\\nPlease see response above, and fully edited presentation of rule in paper text. \\n\\n'\\n\\nConsider using exp(x) instead of e^x since the symbol e already means something else in your notation' \\n\\nDone\\n\\n'Section 4: Please define \\\"temporal variation\\\" '\\n\\nClarified. \\n\\n\\n\\n'Section 5: You should elaborate on the experimental setup you used. Especially for the Addition and MNIST problems. For example, what constitutes a \\\"step\\\" in figure 2? Does KeRL take \\\"one\\\" step per time-step? Or does \\\"step\\\" mean a complete gradient computation from running from t = 1 to t = T? Is the BPTT truncated? Are you counting one step of BPTT to be one complete forwards and backwards pass?\\n'\\n\\nClarified and elaborated. Also, replaced \\u201ctraining steps\\u201d with \\u201cnumber of minibatches\\u201d.\\n \\n'\\nYou should include some basic description of what an IRNN is.'\\n\\n\\nDone\\n\\n\\n'When you say that for MNIST KeRL \\\"does not converge to as good of an optimum\\\" this seems like unjustified inference. You don't really know that it is converging to a minimum of the original objective at all. It could be converging to the minimum of some other objective it is implicitly optimizing due to your approximations (if one even exists). Or it could be simply cycling around and failing to converge. The fact that the loss plateaus isn't direct evidence of convergence in any sense. If you wanted to measure this more directly you could look at the (true) gradient magnitude.\\n'\\n\\nReplaced with \\u201cdoes not reach as high an asymptotic performance\\u201d\\n\\n\\n' \\\"only requires a few tensor operations at each time step\\\" -> this is also true of UORO'\\n\\nClarified.\"}", "{\"title\": \"Response to Reviewer 3\", \"comment\": \"\\\"The paper is reasonably well written, but somewhat dense and hard to follow.\\\"\\n\\nThank you for the valuable comments. We have extensively edited the exposition to both better motivate the rule and make its derivation clearer and easier to read. 
\\n\\n\\\"The contribution seems novel. The main issue is the empirical evaluation. All of the tasks (masked addition, pixel-by-pixel MNIST, and the AnBn problem) are artificial. In addition, the results on some of the tasks are mixed if not in favor of BPTT. \\\"\\n\\nSince KeRL performs stochastic gradient descent (SGD) with an approximate gradient, we do not expect it to outperform untruncated BPTT, which performs SGD with an exact gradient. Rather, the way to think about KeRL results is like the results on feedback alignment: relaxing the symmetric return weights in feedback alignment allows for learning by an approximation to SGD. Recurrent learning is notoriously harder than learning in feedforward networks; it's notable that KeRL, which makes a strong approximation to the sensitivity, is able to perform almost as well across a variety of difficult tasks. Furthermore, KeRL holds the advantage over BPTT that the computation of the gradient does not scale with the length of the graph.\\n\\n\\\"I am not convinced that these results are enough to showcase the practical advantages of KeRL.\\n I am willing to increase my score, if the authors address this issue.\\\" \\n\\nWe disagree that that the empirical evidence for KeRL is lacking. The adding problem was one of Schmidhuber\\u2019s \\u201cpathological\\u201d tasks that was used to demonstrate the utility of the LSTM. Using KeRL, we were able to solve sequence length 400 with a regular RNN and a squashing tanh nonlinearity. In fact, KeRL outperformed BPTT with the tanh nonlinearity, as the long timescales in the Ansatz were able to regularize the network towards a solution with longer sensitivity timescales. The pixel-by-pixel MNIST test, an even more challenging long term memory task as it involves remembering over nearly 1000 timesteps, was also solved on an RNN with KeRL. \\n\\n\\n\\\"Detailed comments:\\n\\n- The authors mention that BPTT is not biologically plausible. Although reasonable, I don't get why this would be an argument against it.\\\"\\n\\nWe are concerned with biological plausibility for three reasons. 1) From a neuroscience perspective we want to understand how the brain's recurrent networks learn tasks without the machinery to do BPTT; 2) Given that biological brains still outperform AI/deep networks on a wide range of problems, it is important to understand how brains solve these problems to build better AI, not just as a biological curiosity. 3) Designing and evaluating biologically realistic learning rules can be seen as an application of machine learning to neuroscience, one of the relevant topics of ICLR.\"}", "{\"title\": \"limited empirical evidence\", \"review\": \"The paper proposes an alternative to backprop through time for\\ntraining RNN models.\\n\\nThe paper is reasonably well written, but somewhat dense and hard\\nto follow. The contribution seems novel.\\n\\nThe main issue is the empirical evaluation. All of the tasks\\n(masked addition, pixel-by-pixel MNIST, and the AnBn problem)\\nare artificial.\\n\\nIn addition, the results on some of the tasks are mixed if not\\nin favor of BPTT. I am not convinced that these results are enough\\nto showcase the practical advantages of KeRL.\\n\\nI am willing to increase my score, if the authors address this\\nissue.\", \"detailed_comments\": \"- The authors mention that BPTT is not biologically plausible. 
Although reasonable, I don't get why this would be an argument against it.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"1: The reviewer's evaluation is an educated guess\"}", "{\"title\": \"Interesting evidence that extreme approximations to BPTT can work\", \"review\": \"This paper proposes a simple method for performing temporal credit assignment in RNN training. While it seems somewhat naive and unlikely to work (in my opinion), the experimental results surprisingly show reasonable performance on several reasonably challenging artificial tasks.\\n\\nThe core of the approach is based on equation 7, which approximates the Jacobian between different hidden states at different time-steps as a single adaptively-learned matrix times a decay factor that depends on the time gap. While this seems like a very severe approximation to make, the authors speculate that some kind of feedback alignment-like mechanism might be at play.\\n\\nThe presentation needs work in several areas, and the experimental results require more explanation, but otherwise this seems like a solid paper. I would probably increase my rating if the authors could address my issues satisfactorily.\", \"see_below_for_more_detailed_comments\": \"Abstract & Section 1: \\n\\nIs \\\"sensitivity tensor\\\" or \\\"credit assignment tensor\\\" a common term? Because I've never heard them before. Consider defining them before you discuss it, and using consistent jargon. Later in Section 2 you seem to call this the \\\"RTRL tensor\\\" (whose meaning I can infer).\", \"section_2\": \"Gradient vanishing isn't so much a problem in itself, but a symptom that the sensitivity of the network's output to the action of some neuron in the past is very low. The gradient is just relaying this information, so I don't really see vanishing gradients as the problem to overcome, but rather low sensitivity on past activations.\", \"section_3\": \"Did you mean to write (W^out h^t + b^out) instead of (W^out h + b^out)^t ?\\n\\n\\\"[equation] represents the gradient of the cost with respect to the current hidden state\\\". The RHS of this equation makes no sense to me. Not only does this not depend on the nonlinearity in any way, it doesn't include any consideration of future outputs on which the current h surely depends. \\n\\nIt would make the paper much more pleasant to read if you gave your derivation of the learning rule before you stated it in gory detail. It feels almost completely arbitrary reading it first without any justification. This might be fine if it were compact and elegant, but it's not.\\n\\nConsider using exp(x) instead of e^x since the symbol e already means something else in your notation.\", \"section_4\": \"Please define \\\"temporal variation\\\"\", \"section_5\": \"You should elaborate on the experimental setup you used. Especially for the Addition and MNIST problems. For example, what constitutes a \\\"step\\\" in figure 2? Does KeRL take \\\"one\\\" step per time-step? Or does \\\"step\\\" mean a complete gradient computation from running from t = 1 to t = T? Is the BPTT truncated? Are you counting one step of BPTT to be one complete forwards and backwards pass?\\n\\nYou should include some basic description of what an IRNN is.\\n\\nWhen you say that for MNIST KeRL \\\"does not converge to as good of an optimum\\\" this seems like unjustified inference. You don't really know that it is converging to a minimum of the original objective at all. 
It could be converging to the minimum of some other objective it is implicitly optimizing due to your approximations (if one even exists). Or it could be simply cycling around and failing to converge. The fact that the loss plateaus isn't direct evidence of convergence in any sense. If you wanted to measure this more directly you could look at the (true) gradient magnitude.\\n\\n\\\"only requires a few tensor operations at each time step\\\" -> this is also true of UORO\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"An interesting idea of improving BPTT by kernel recurrent learning. Skip in backpropagation is proposed and illustrated.\", \"review\": \"The proposed kernel recurrent learning (KeRL) provides an alternative way to train a recurrent neural network with backpropagation through time (BPTT) where the propagation of gradients can be skipped over different layers. The authors directly assume the sensitivity function between two layers with a distance of tau in the form of Eq. (7). The algorithm of BPTT is then approximated due to this assumption. The model parameters are changed to learn the network dynamics. The optimization problem turns out to be the estimation of beta and gamma of the kernel function. The learned parameters are intuitive. There is a set of timescales to describe the memory of each neuron and a set of sensitivity weights to describe how strongly the neurons interact on average. The purpose of this study is to save the memory cost and to reduce the time complexity for online learning with comparable performance.\", \"pros\": \"1. KeRL only needs to compute a few tensor operations at each time step, so online KeRL learns faster than online BPTT for the case with a reasonably long truncation length.\\n2. Biologically plausible statements are addressed.\\n3. A prior is imposed for the temporal sensitivity kernel. The issue of gradient vanishing is mitigated.\\n4. The theoretical illustration of KeRL in Sections 3 and 4 is clear and interesting.\", \"cons\": \"1. The proposed method is an approximation to BPTT training. Since some guesses are made, the system performance is constrained and can be further improved.\\n2. An experiment on the time cost of online learning is required so that the reduction in time complexity can be illustrated.\\n3. The format of Tables 1 and 2 can be improved. A caption is required for Table 1. The overlarge size of Table 2 can be fixed.\\n4. A number of assumptions are made in Sections 3 and 4. When addressing Section 3, some assumptions from Section 4 are used. The organization of Sections 3 and 4 can be improved.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
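To make the reduced-rank approximation debated in the KeRNL record above more tangible, the following schematic step is one possible reading of the abstract's description (sensitivity weights times a temporal eligibility trace in place of the full rank-4 sensitivity tensor). The shapes and the symbols beta (sensitivity weights), eta (inverse timescales), and the trace e are our assumptions, not the authors' implementation.

```python
import numpy as np

def kernl_style_step(W, beta, eta, e, h_prev, x, err, lr=1e-3):
    """One schematic KeRNL-style update for a vanilla RNN h = tanh(W [h_prev; x]).

    e    : eligibility trace, same shape as W, maintained online
    beta : learned sensitivity weights (hidden x hidden), standing in for
           the rank-4 sensitivity tensor of full BPTT
    eta  : per-unit inverse timescales controlling trace decay
    err  : dL/dh at the current step (error fed back to the hidden units)
    """
    inp = np.concatenate([h_prev, x])
    h = np.tanh(W @ inp)
    # Leaky trace accumulation replaces unrolling and storing past states.
    e = np.exp(-eta)[:, None] * e + (1.0 - h ** 2)[:, None] * inp[None, :]
    # Credit assignment: error routed through the sensitivity weights and
    # gated by the trace; no backward pass through time is needed.
    W = W - lr * (beta.T @ err)[:, None] * e
    return h, e, W
```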
S1zz2i0cY7
Integer Networks for Data Compression with Latent-Variable Models
[ "Johannes Ballé", "Nick Johnston", "David Minnen" ]
We consider the problem of using variational latent-variable models for data compression. For such models to produce a compressed binary sequence, which is the universal data representation in a digital world, the latent representation needs to be subjected to entropy coding. Range coding as an entropy coding technique is optimal, but it can fail catastrophically if the computation of the prior differs even slightly between the sending and the receiving side. Unfortunately, this is a common scenario when floating point math is used and the sender and receiver operate on different hardware or software platforms, as numerical round-off is often platform dependent. We propose using integer networks as a universal solution to this problem, and demonstrate that they enable reliable cross-platform encoding and decoding of images using variational models.
[ "data compression", "variational models", "network quantization" ]
https://openreview.net/pdf?id=S1zz2i0cY7
https://openreview.net/forum?id=S1zz2i0cY7
ICLR.cc/2019/Conference
2019
{ "note_id": [ "HkgqoRyRBE", "H1g9SUNgxN", "rJgzoY72kV", "SJeUDvnYkE", "HJeiMdGdyN", "B1lVsHYh07", "S1gRGcFcRm", "BylRxUzXC7", "rJeSLLtxAm", "r1x3GUFeAX", "rJguaBteCQ", "ByxKt0Z8pX", "Syx712aCn7", "BJe8Typu3Q" ], "note_type": [ "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1550872226240, 1544730178451, 1544464794132, 1544304478513, 1544198162669, 1543439771580, 1543309846489, 1542821365910, 1542653517322, 1542653459820, 1542653375801, 1541967488711, 1541491674807, 1541095357737 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper690/Authors" ], [ "ICLR.cc/2019/Conference/Paper690/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper690/Authors" ], [ "ICLR.cc/2019/Conference/Paper690/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper690/Authors" ], [ "ICLR.cc/2019/Conference/Paper690/Authors" ], [ "ICLR.cc/2019/Conference/Paper690/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper690/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper690/Authors" ], [ "ICLR.cc/2019/Conference/Paper690/Authors" ], [ "ICLR.cc/2019/Conference/Paper690/Authors" ], [ "ICLR.cc/2019/Conference/Paper690/AnonReviewer4" ], [ "ICLR.cc/2019/Conference/Paper690/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper690/AnonReviewer1" ] ], "structured_content_str": [ "{\"title\": \"Paper revision uploaded\", \"comment\": \"We uploaded the finalized revision of the paper, with an added paragraph in the discussion justifying our approach, and addressing the issues raised by AnonReviewer3.\\n\\nWe hope that the explanation given is easy to follow, and improves the presentation of our paper.\\n\\nThank you, and looking forward to meeting in New Orleans!\"}", "{\"metareview\": \"This paper addresses the issue of numerical rounding-off errors that can arise when using latent variable models for data compression, e.g., because of differences in floating point arithmetic across different platforms (sender and receiver). The authors propose using neural networks that perform integer arithmetic (integer networks) to mitigate this issue. The problem statement is well described, and the presentation is generally OK, although it could be improved in certain aspects as pointed out by the reviewers. The experiments are properly carried out, and the experimental results are good.\\nThank you for addressing the questions raised by the reviewers. After taking into account the author's responds, there is consensus that the paper is worthy of publication. I therefore recommend acceptance.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Interesting solution to a practical problem\"}", "{\"title\": \"Thank you\", \"comment\": \"Thank you for updating your score.\\n\\nNote that Huffman coding with a conditional probability model would have equivalent issues, because the design of the Huffman code would again be very sensitive to fluctuations in the probabilities. It's a fundamental issue with entropy coding methods in general.\\n\\nWe'll think about how we can improve this explanation further, to try to make it as clear as possible for the final paper. 
We believe it will benefit the community to have everyone on the same page.\"}", "{\"title\": \"Response to clarification\", \"comment\": \"Ok, thank you for the clarification; I am not familiar enough with the coding literature to be confident that Huffman would be so much worse for this particular task (I follow the reasoning but not sure how complex the latent structure would have to be), but I\\u2019ll take your word for it.\\n\\nAs you will/have included the explanation in the paper, I am happy to improve my score to above the accept threshold.\"}", "{\"title\": \"Feedback\", \"comment\": \"Dear AnonReviewer3,\\n\\nwe hope that the feedback we provided clarifies the issue. We'd be happy to go into more detail if necessary. Please let us know if you have any further questions or concerns.\\n\\nThank you.\"}", "{\"title\": \"Clarification\", \"comment\": \"Thanks for making an effort to explain your concerns better. We believe that these are relatively easy to sort out, because it should be only a matter of presentation. We kept the introduction to the paper relatively terse, believing that it should provide enough motivation as is. We see now that we were assuming more familiarity with the compression literature than we maybe should. However, we can be more specific in the camera-ready version.\\n\\nThere is currently no alternative to range coding or related methods. The reasoning is as follows:\\n\\nIn order to losslessly transmit any data point y, we would ideally like to represent it as a sequence of bits whose length corresponds to its self-information under a model prior p shared by sender and receiver (-log_2 p(y)). This is compression in a nutshell: Likely data points under the prior will be represented with short sequences, unlikely ones with long sequences. Simply sending binary representations of floating point numbers not only fixes the lengths of the sequences to a constant, it also disregards the probability structure of the data points, which is suboptimal. The only known \\u201centropy coding\\u201d algorithms which are applicable to arbitrary priors and achieve this mapping to bit sequences in an asymptotically optimal way are Huffman coding on one hand, and arithmetic/range coding (or ANS, all related) on the other. While Huffman coding is computationally less expensive in certain situations, it is generally not as practical with complex or conditional priors as the other class of algorithms, which is why we focus on range coding. Note that all entropy coding methods suffer from being sensitive to discrepancies in the prior between sender and receiver. This is a fundamental issue, because the optimal length of the sequence (-log_2 p(y)) and the likelihood of y under the prior (p(y)) are essentially the same quantity. We would also prefer to have a more \\u201crobust\\u201d range coder, such that small floating point discrepancies do not lead to catastrophic decoding errors, but such a method doesn\\u2019t exist. If we could invent one, it would invariably be suboptimal in terms of compression rate, because by design, it would need to allow for a certain error in p(y), which would have a direct effect on the sequence length. Furthermore, we would need to be able to control the error. Unfortunately, the magnitude of discrepancies in floating point computations can vary widely and is hard to predict. 
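A tiny numerical illustration of the point being made here (ours, not from the paper): the ideal code length for a symbol is -log2 p(y), and a floating-point-sized discrepancy in the prior already shifts the integer-quantized CDF bins a range coder depends on. The 32-bit precision constant below is an arbitrary assumption.

```python
import numpy as np

p_send = np.array([0.1, 0.2, 0.3, 0.4])    # prior as computed by the sender
ideal_bits = -np.log2(p_send)              # optimal code lengths per symbol

# The receiver recomputes the "same" prior with platform-dependent round-off.
p_recv = p_send + np.array([1e-9, -1e-9, 0.0, 0.0])

# Range coding partitions [0, 1) by the CDF; the integer-quantized bin
# edges must match exactly on both sides, or decoding desynchronizes.
precision = 2 ** 32
edges_send = np.floor(np.cumsum(p_send) * precision).astype(np.int64)
edges_recv = np.floor(np.cumsum(p_recv) * precision).astype(np.int64)
print(edges_send - edges_recv)  # any nonzero entry means catastrophic failure
```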
Therefore, all real-world image compression methods, including commercial ones, compute priors with discrete math (and all state-of-the-art methods use a form of range coding). There are no known short-cuts or ad-hoc fixes to this problem. It is well known in the compression community.\\n\\nBringing this discussion into the latent-variable domain, we now sample from the encoder distribution e(y|x), where x is the data point, and then losslessly encode the latent representation y. The ideal length of the binary sequence is log_2 e(y|x) - log_2 p(y), with y sampled from e(y|x). This is again the self-information of y under a shared prior (-log_2 p(y)). It is possible, albeit practically difficult, to discount for the uncertainty in the encoder via bits-back coding (getting back -log_2 e(y|x) bits). However, methods to do that build on the previous class of entropy coding algorithms and suffer from the same problem. They also need the receiver to decode y, and then re-evaluate the encoder, and hence require not only the prior to be deterministic, but also the encoder and the decoder. Integer networks can also be applied here.\\n\\nWe hope that this clarifies point 1 as well as point 2, and we\u2019ll be happy to update the paper to include this reasoning (most likely as part of the introduction). If the reviewer thinks it is worthwhile, we can also include the motivation for the replacement gradient.\"}", "{\"title\": \"Response to author's rebuttal\", \"comment\": \"Many thanks to the authors for their detailed response.\\n\\nI will summarize my reasoning in broad strokes, before delving in detail into the authors' rebuttal.\\n\\n# Reasoning behind why (to me) the paper was not yet ready for publication:\\n1. There are some well known good reasons for integer networks (typically based on speed/memory boosts), however I didn\u2019t feel that the paper made it clear in the exposition why we should A: use range/arithmetic coding for VAE code transmission machine-machine (if we are sending elements from the latent space and not raw images, is this still more optimal than any alternative which would not suffer from such catastrophic FPE?). B: why this shortcoming of range coding for floating point elements should be solved in the model, and not at a lower level\u2014how are floating point numbers usually compressed and transmitted without loss, do none of these ad hoc fixes work? \\n2. Some shortcomings in the exposition (again, to me). I think it would be very useful for the reader if the authors provided some more background on what the transmission coding schemes are, and why we need to go this route (integer networks) to address their shortcomings. This is what I was attempting to allude to when referring to the two example tables\u2014I felt that some of this space could be used more effectively. I am willing to accept that this (point 2) *could* be done in time for the camera ready.\\n\\n>- We give two examples for the bit widths, because they show how the relevant parameters can be chosen in conjunction with different activation functions. We talk about this in the following paragraphs. We do not understand why the reviewer thinks that giving two examples for something constitutes bad quality.\\n- As explained above, I do not think that this constitutes bad quality, I was merely suggesting that you could drop some redundancy and replace it with a more in-depth analysis of alternatives to the channel coding problem.\\n\\n>- Replacing the gradient of a quantization function (i.e. 
round()) with the identity is not a new idea (see, for example, Theis (2017), which we cite, among many others). If the reviewer would like a justification, how about the following:\\n>Taking the gradient of round() yields a sum of Dirac delta distributions, i.e.,\\n>d/dx round(x) = sum_i delta(x-i). Obviously, this gradient function is not going to be helpful for optimization, because it will produce too much variance in the gradients (+infinity for the half-integer positions, and 0 everywhere else). However, a smoothed version of it is helpful. If we convolve this gradient function with a triangle function (https://en.wikipedia.org/wiki/Triangular_function, which can be seen as the generator function for linear splines) to smooth it, the result is a constant (1), which corresponds to the identity function when used in backpropagation.\\n- OK, makes sense\\u2014but you do not cite this in the section where you say you \\u2018replace the derivative of Q with the identity function\\u2019, as an example.\\n\\n>- K is not a kernel, but the bit-width of the kernel (the kernel itself is defined as H). K is defined right after equation (7), where it is first mentioned. We modified the language to distinguish better between the kernel and its bit width. However, we believe that given sufficient time and care, any reader should have been able to distinguish this from the context.\\n- This is a little unfortunate; looking back in the revisions you can see that you previously state: \\u201cHere, we simply rescale each element of b\\u2032 using the bit-width of the kernel K, and round it to the nearest integer\\u201d. Whilst I admit that you are correct that I could, given sufficient time, have determined that K was not some un-mentioned kernel, but instead the bit-width, I suspect I am not the only one to be thrown (at least temporarily) by that statement.\\n\\nOverall, based on the authors' feedback and promises to address some of the issues, I would be willing to raise my current rating by 1. To improve my score past the \\u2018accept\\u2019 barrier, however, I would want to be confident that points 1 + 2 (above) would be addressed in the camera ready. Addressing 2. seems plausible; perhaps the authors can give some feedback on 1? Alternatively, if there is no further feedback but the AC feels that point 1. does not matter, then the results (conditional on the importance of the problem-solution pair) are reasonably compelling.\"}", "{\"title\": \"Response to authors\", \"comment\": \"I would like to thank the authors for their reply.\\n\\n1) Post-hoc quantization\\nI understand your explanation and it sounds reasonable.\\n\\n2) Training plot\\nThank you very much for adding the plot. In my opinion the plot improves the readability of the paper and confirms the intuition. \\n\\n3) Summary of the method\\nIt would be highly appreciated.\", \"conclusion\": \"In my opinion the paper is very important and should be accepted. The problem of compression is extremely important from the practical point of view. I would even dare to claim that without a good set of tools for compression we will make very little progress in AI. As a result, I decided to improve my score.\"}", "{\"title\": \"Response to AnonReviewer1\", \"comment\": \"We thank the reviewer for their notes and constructive comments.\", \"regarding_the_remarks_made\": \"- We considered a post-hoc quantization approach, but did not end up experimenting with it much due to the complexity of its implementation. 
Simply quantizing the kernels and activations to an arbitrary choice of global quantization step size yielded terrible results (not even close to competitive \\u2013 the model simply failed). We believe a solution which tries to maximize usage of the dynamic range of both the kernel coefficients and the activations, such as ours, can perform better, because it minimizes the quantization error throughout the network. In a post-training quantization approach, the actual range of activations would need to be measured empirically, and then a solution to a set of complex constraints would need to be found. We did not attempt this, because our goal was to come up with a method that requires only a minimum of post-training modifications to the model.\", \"We agree that a training plot is a useful addition to the paper and have added one. We found that integer networks tend to train somewhat slower than floating point networks. When matching floating point and integer networks for asymptotic performance, integer networks take longer to converge (likely due to their larger number of filters). When matching by number of filters, it appears that the training time to convergence is roughly the same, but the performance ends up worse, of course. Indeed, the training loss also appears somewhat noisier.\", \"We will try to add a summary of the method to the final paper.\"]}", "{\"title\": \"Response to AnonReviewer3\", \"comment\": \"We thank the reviewer for their notes and constructive comments. However, we fundamentally do not understand the reasoning for the rejection. We think that all of the \\u201ccons\\u201d provided represent minor issues which can be fixed, and we address them further below.\\n\\nAs such, only the claim that our paper is of \\u201clow significance\\u201d remains as a potential reason for the rejection. We disagree with that assessment, for the following reasons:\\n\\n- Application-orientedness: Indeed, our paper addresses a problem relevant to using variational models for compression, which is an \\u201capplication\\u201d of neural networks. However, numerous \\u201capplication\\u201d papers are submitted to ICLR every year. We do not think that compression as such is any less important than, say, natural language processing.\\n\\n- Generality: More importantly, we are not presenting \\u201cyet another compression method\\u201d. Our method for achieving machine determinism is relevant and applicable to all latent variable models. This includes some of the state of the art models for learned compression.\\n\\n- Optimality: We cannot guarantee that our method is optimal, or the only solution to this problem, for that matter. This is due to the nature of the problem (optimizing fully discrete models is hard). However, as far as we know, this problem has never been addressed before in a learning context or in this generality. It does not represent incremental work, but a qualitative contribution. The fact that there now exists a first documented solution to this problem, which can be used as a baseline for future work, is a strong point for qualifying our paper.\\n\\nWe hope that the reviewer will respond to this and clarify what exactly is the reason for their rejection. We do not think we understand the reasoning at this point.\", \"regarding_the_other_points_made\": [\"We give two examples for the bit widths, because they show how the relevant parameters can be chosen in conjunction with different activation functions. We talk about this in the following paragraphs. 
We do not understand why the reviewer thinks that giving two examples for something constitutes bad quality.\", \"Replacing the gradient of a quantization function (i.e. round()) with the identity is not a new idea (see, for example, Theis (2017), which we cite, among many others). If the reviewer would like a justification, how about the following:\", \"Taking the gradient of round() yields a sum of Dirac delta distributions, i.e.,\", \"d/dx round(x) = sum_i delta(x-i). Obviously, this gradient function is not going to be helpful for optimization, because it will produce too much variance in the gradients (+infinity for the half-integer positions, and 0 everywhere else). However, a smoothed version of it is helpful. If we convolve this gradient function with a triangle function (https://en.wikipedia.org/wiki/Triangular_function, which can be seen as the generator function for linear splines) to smooth it, the result is a constant (1), which corresponds to the identity function when used in backpropagation.\", \"K is not a kernel, but the bit-width of the kernel (the kernel itself is defined as H). K is defined right after equation (7), where it is first mentioned. We modified the language to distinguish better between the kernel and its bit width. However, we believe that given sufficient time and care, any reader should have been able to distinguish this from the context.\", \"We have to push back on the claim that we did not conduct enough experiments leading up to our conclusion that (16) causes instabilities. In fact, our initial goal was to simply reuse (16), since it had been already published and is simpler. However, the results were consistently worse, and on further inspection, we observed that the prior and the encoder would always end up in a kind of oscillatory behavior. We will try to find a visualization of this for the final paper. Theis (2017) used Gaussian scale mixtures centered on 0 as the prior, which is a form of regularization and might have helped reduce this, as it would prevent arbitrary shifts. However, we would like to be able to use arbitrarily powerful priors, as is generally the goal in variational approximation. The combination of (17) and (18) is unfortunately slightly less simple to implement, but it has the benefit that the prior distribution need not be subjected to regularization, and thus enables a more general solution. So, this choice was not only informed by empirical results, but also by the desire to be able to do without regularization.\"]}", "{\"title\": \"Response to AnonReviewer4\", \"comment\": \"We thank the reviewer for their notes and constructive comments.\", \"regarding_the_questions_raised\": [\"Indeed, the approximations we present here are the end result of a long list of experiments. Our experiments included:\", \"Several different gradient substitutes such as straight-through, tanh, etc.\", \"Replacing the gradient of the activation functions vs. adding uniform noise as a substitute of the quantization during training.\", \"A distillation approach (i.e. first training a floating point compression model, and then attempting to match an integer model to it in terms of a simpler loss function).\", \"We considered a post-hoc quantization approach, but did not end up experimenting with it much due to the complexity of its implementation.\", \"We found that the presented solution is the best performing, while being conceptually simple and relatively easy to implement. Of course, we cannot claim that this solution is optimal in any sense. 
Optimizing fully discrete models is a hard problem. To our knowledge, our method is the first attempt to solve the non-determinism problem of neural networks in this context, and certain choices were made in an ad-hoc way. However, we hope that it can be used as a baseline for improved methods in the future.\", \"We agree we should have been more elaborate explaining how some of the choices were made. We improved the paper in this regard.\", \"The rescaling function s is designed to maximize usage of the dynamic range of kernel coefficients. It simply rescales the kernels such that the minimum or maximum coefficient of each filter is identical to one of the dynamic range bounds, but keeps zero at zero.\", \"The function r is a reparameterization of the divisor c, such that for small values of it, the effective descent step size on it is reduced. When c is small, large perturbations in c can lead to excessively large fluctuations of the quotient (i.e., the input to the nonlinearity). This leads to instabilities in training. The reparameterization ameliorates this. The effective step size on c ends up being multiplied with a factor that is approximately linear in c (this is previous work; see Ball\\u00e9, 2018).\", \"We did not conduct a lot of experiments regarding the choice of input scaling. We believe it doesn\\u2019t matter too much; in our experiments, we simply chose a value \\u201cby eye\\u201d, so that the tanh nonlinearity is reasonably well represented with the lookup table. (I.e., in figure 2, right panel, we made sure that at least two or three input values are mapped to each output value, in order to preserve the approximate shape of the nonlinearity.)\", \"We found in all of our experiments that integer networks tend to train somewhat slower than floating point networks. When matching floating point and integer networks for asymptotic performance, integer networks take longer to converge (likely due to their larger number of filters). When matching by number of filters, it appears that the training time is about the same, but the performance ends up worse, of course. We have added a figure to the paper to show this.\"]}", "{\"title\": \"Interesting read; would be helpful to better explain difficulties in training\", \"review\": \"This well-written paper addresses the restrictions imposed by binary communication channels on the deployment of latent variable models in practice. In order to range code the (floating point) latent representations into bit-strings for practical data compression, both the sender and receiver of the binary channel must have identical instances of the prior despite non-deterministic floating point arithmetic across different platforms. The authors propose using neural networks that perform integer arithmetic (integer networks) to mitigate this issue.\", \"pros\": [\"The problem statement is clear, as well as the approach taken to addressing the issue.\", \"Section 5 did a nice job tying together the relevant literature on using latent variable models for compression with the proposed integer network framework.\", \"The experimental results are good; particularly, Table 1 provides a convincing case for how using integer networks remedies the issue of decompression failure across heterogeneous platforms.\"], \"cons\": [\"In Section 3, it wasn\\u2019t clear to me as to why the authors were using their chosen gradient approximations with respect to H\\u2019, b\\u2019 and c\\u2019. 
Did they try other approximations but empirically find that these worked best? Where did the special rescaling function s come from? Some justifications for their design choices would be appreciated.\", \"The authors state in Section 2 that the input scaling is best determined empirically -- is this just a scan over possible values during training? This feels like an added layer of complexity when trying to train these networks. It would be nice if the authors could provide some insight into exactly how much easier/difficult it is to train integer networks as opposed to the standard floating point architectures.\", \"The authors state in Section 6 that the compromised representational capacity of integer networks can be remedied by increasing the number of filters. This goes back to my previous point, but how does this \\u201clarger\\u201d integer network compare to standard floating point networks in terms of training time?\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Application paper: how to modify latent variable models s.t. they survive range coding transmission.\", \"review\": \"This paper explains that range coding as a mechanism for transmitting latent-variable codes from source to target for decoding is severely sensitive to floating point errors.\\n\\nThe authors propose what amounts to an integer version of Balle 2018, and demonstrate that it allows for transmission between platforms without catastrophic errors due to numerical round-off differences.\\n\\nThe paper (and problem) is of low significance, but the authors present a neat solution.\", \"pros\": [\"Well defined problem and solution.\", \"Practical question relating to use of ANNs for data en/de-coding.\"], \"cons\": [\"Presentation needs brushing up: e.g. why give two examples for H, b, v bit widths?\", \"Some approximations are not well motivated or justified. E.g. why is it valid to replace the gradient of a function that has 0 gradients with the identity?\", \"Paper needs some rewriting for clarity. E.g. where is the kernel K defined?\", \"Lack of experimentation to justify the fact that the construction of (16) leads to instabilities, and is therefore less suitable than the method outlined here.\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"An interesting approach for a very important problem\", \"review\": \"The paper presents a very important problem of utilizing a model on different platforms, each with its own numerical round-offs. As a result, a model run on different hardware or software than that on which it was trained could completely fail due to numerical rounding-off issues. This problem has been considered in various papers; however, the classification task was mainly discussed. In this paper, on the other hand, the authors present how the numerical rounding-off issue could be solved in Latent-Variable Models (LVM).\\n\\nIn order to cope with the numerical rounding-off issue, the authors propose to use integer networks. They consider either quantized ReLU (QReLU) or quantized Tanh (Qtanh). Further, in order to properly train the integer NN, they utilize a bunch of techniques proposed in the past, mainly (Balle, 2018) and (Balle et al., 2018). However, as pointed out in the paper, some methods prevent training instabilities (e.g., Eqs. 18 and 19). 
Altogether, the paper tackles a very important problem and proposes a very interesting solution by bringing different techniques proposed for quantized NNs together.\", \"pros\": [\"The paper is well-written.\", \"The considered problem is of great importance and it is rather neglected in the literature.\", \"The experiments are properly carried out.\", \"The obtained results are impressive.\"], \"cons\": [\"A natural question is whether the problem could be prevented by post-factum quantization of a neural network. As pointed out in the Discussion section, such a procedure failed. However, it would be beneficial to see empirical evidence for that.\", \"It would also be interesting to see what the training process of an integer NN looks like. Since the NN is quantized, instabilities during training might occur. Additionally, its training process may take longer (more epochs) than the training of a standard (float) NN. An exemplary plot presenting a comparison between an integer NN training process and a standard NN training process would be highly appreciated.\", \"(Minor remark). The paper is well-written; however, it would be helpful to set out the final learning algorithm. This would drastically help the reproducibility of the paper.\", \"--REVISION--\", \"After reading the authors' response and looking at the new version of the paper I decided to increase my score. The paper tackles a very important problem and I strongly believe it should be presented during the conference.\"], \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}" ] }
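The "replace the derivative of round() with the identity" point argued back and forth in the record above is, in modern frameworks, a one-liner. The sketch below shows the generic straight-through pattern, not the paper's exact training code.

```python
import torch

def ste_round(x: torch.Tensor) -> torch.Tensor:
    # Forward pass: quantize. Backward pass: the detached residual carries
    # no gradient, so d(output)/dx is treated as 1 (the identity surrogate).
    return x + (torch.round(x) - x).detach()

x = torch.linspace(-2.0, 2.0, 9, requires_grad=True)
ste_round(x).sum().backward()
print(x.grad)  # all ones: the smoothed, straight-through gradient
```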
BkgzniCqY7
Structured Adversarial Attack: Towards General Implementation and Better Interpretability
[ "Kaidi Xu", "Sijia Liu", "Pu Zhao", "Pin-Yu Chen", "Huan Zhang", "Quanfu Fan", "Deniz Erdogmus", "Yanzhi Wang", "Xue Lin" ]
When generating adversarial examples to attack deep neural networks (DNNs), the Lp norm of the added perturbation is usually used to measure the similarity between the original image and the adversarial example. However, such adversarial attacks perturbing the raw input spaces may fail to capture structural information hidden in the input. This work develops a more general attack model, i.e., the structured attack (StrAttack), which explores group sparsity in adversarial perturbation by sliding a mask through images aiming to extract key spatial structures. An ADMM (alternating direction method of multipliers)-based framework is proposed that can split the original problem into a sequence of analytically solvable subproblems and can be generalized to implement other attacking methods. Strong group sparsity is achieved in adversarial perturbations even with the same level of Lp-norm distortion (p ∈ {1,2,∞}) as the state-of-the-art attacks. We demonstrate the effectiveness of StrAttack by extensive experimental results on MNIST, CIFAR-10 and ImageNet. We also show that StrAttack provides better interpretability (i.e., better correspondence with discriminative image regions) through adversarial saliency map (Papernot et al., 2016b) and class activation map (Zhou et al., 2016).
[ "better interpretability", "strattack", "adversarial attack", "towards general implementation", "adversarial examples", "deep neural networks", "dnns", "lp norm", "added perturbation", "similarity" ]
https://openreview.net/pdf?id=BkgzniCqY7
https://openreview.net/forum?id=BkgzniCqY7
ICLR.cc/2019/Conference
2019
{ "note_id": [ "S1gkRB8SeE", "ByxVIQD2yE", "SJgf67ndCX", "HkeSkuLdRQ", "S1x4tLIOR7", "r1gjPN8uRm", "SyerbE8uCX", "ByxX-z6E67", "B1xz04MZaX", "rkl0Xux-aX", "HJl2S2JA3X", "BygM1K4c37" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_review" ], "note_created": [ 1545065926748, 1544479563950, 1543189433591, 1543165917035, 1543165563971, 1543165026919, 1543164924577, 1541882362569, 1541641418499, 1541634085596, 1541434436146, 1541191898015 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper689/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper689/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper689/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper689/Authors" ], [ "ICLR.cc/2019/Conference/Paper689/Authors" ], [ "ICLR.cc/2019/Conference/Paper689/Authors" ], [ "ICLR.cc/2019/Conference/Paper689/Authors" ], [ "ICLR.cc/2019/Conference/Paper689/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper689/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper689/Authors" ], [ "ICLR.cc/2019/Conference/Paper689/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper689/AnonReviewer2" ] ], "structured_content_str": [ "{\"metareview\": \"This paper contributes a novel approach to evaluating the robustness of DNNs, based on structured sparsity that exploits the underlying structure of the image, and introduces a method to solve it. The proposed approach is well evaluated and the authors answered the main concerns of the reviewers.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Accept.\"}", "{\"title\": \"RE: Response\", \"comment\": \"Thanks for incorporating the feedback; the additional related work section is helpful and provides better context for this work.\"}", "{\"title\": \"Thanks for the edits\", \"comment\": \"Thank you for revising your paper; the new version seems clearer to me in terms of the positioning of your work. I have bumped up the numerical score to 6 in my review.\"}", "{\"title\": \"General response\", \"comment\": \"We thank all reviewers for their insightful and valuable comments. Our paper has been greatly improved based on these comments. The major modifications are summarized below.\\n\\na) We have enriched our related work and provided a better motivation for StrAttack (see Introduction).\\n\\nb) To strengthen our contribution on the effectiveness of StrAttack, we have added experiments to show the attack performance of StrAttack against the robust adversarially trained model [Madry et al. 2018]; see our results in Table 2. Moreover, we have compared the transferability of StrAttack with other attacks on 6 different network models (Table 3).\\n\\nc) We have added more examples to show the better interpretability of StrAttack, where the found sparse adversarial patterns have a better correspondence with class-specific discriminative regions localized by CAM; see Figure 4.\\n\\nThanks!\\nICLR 2019 Conference Paper689 Authors\"}", "{\"title\": \"Response to Reviewer2\", \"comment\": \"As we discussed earlier, the motivation of our research is to seek a more effective attack, which can be as successful as existing attacks (in terms of achieving the same attack success rate and keeping small L1, L2 and L_infty distortion), but only requires modifying a small subset of pixels. 
We show that StrAttack is indeed the desired adversarial attack.\\n\\n\\nIn the revised paper, we strengthen the potential impacts of StrAttack from three aspects: a) performance against a robust adversarially trained model, b) attack transferability, and c) interpretability of complex images. \\n\\n\\nFirst, we show the power of StrAttack against the defensive model obtained from robust adversarial training [Madry et al. 2018], which is commonly regarded as the strongest defense on MNIST. As we can see, although StrAttack perturbs far fewer pixels, its attack success rate does not drop. This implies that we can perturb fewer, but the \\u2018right\\u2019, pixels (with more interpretable adversarial patterns) without losing attack performance. \\n\\nSecond, we compare the transferability of StrAttack to other attacks. Here the transferability is characterized by the attack success rate of adversarial examples (found by one attack generation method against a given network model) transferred to a different network model. We present the transferability of 3 attacks from Inception V3 to Inception V3, Inception V4, ResNet 50, ResNet 152, DenseNet 121 and DenseNet 161. As we can see, StrAttack yields the highest transferability on almost every model. We refer the reviewer to Table 3 for more details. \\n\\n\\nThird, we show more examples to visualize the interpretability of adversarial perturbations on certain complex images. In the \\u2018pug\\u2019-\\u2018street sign\\u2019 example of Fig. 4, objects of the original label (pug) and the target label (street sign) exist simultaneously. As we can see, adversarial perturbations generated by StrAttack are perfectly matched to the most discriminative image regions localized by CAM: the adversary shows suppression on the discriminative region of the original label and promotion on the discriminative region of the target label. By contrast, the CW attack is less interpretable due to its high noise visibility (perturbing too many pixels).
Our structure-driven attack is motivated by devising a more efficient attack that takes advantage of two attacks built on opposite principles - the C\\\\&W attack (or \\\\ell_infty attacks such as I-FGSM), which modifies all pixels, and the one-pixel attack (Su et al., 2017), which only modifies a few pixels. The C\\\\&W attack can achieve small \\\\ell_infty perturbations but has to perturb most pixels (large \\\\ell_0 norm). The one-pixel attack can achieve an extremely small \\\\ell_0 norm, but with a much higher \\\\ell_infty norm and a low attack success rate. Both of the above attack methods lead to higher noise visibility due to perturbing too many pixels or perturbing a few pixels too much. Motivated by them, we wonder if there exists a more effective attack that can be as successful as existing attacks but only requires modifying a small subset of pixels, and what the resulting sparse adversarial pattern can tell us. To answer these questions, we propose StrAttack, which achieves strong group sparsity without losing attack effectiveness, including both attack success rate and Lp distortion. Furthermore, we show that the resulting sparse adversarial patterns offer great interpretability through the adversarial saliency map (ASM) and the class activation map (CAM).\\n\\nb) The proposed StrAttack problem formulation cannot be solved using standard optimization solvers, e.g., Adam or proximal gradient algorithms, due to the presence of non-smooth regularizers and hard constraints. To address this technical challenge, we proposed the ADMM solution that splits the original complex problem into neat subproblems, each of which yields an analytical solution.\\n\\nc) We investigated the group sparsity by exploring various mask sizes. Clearly, there is a trade-off between the group size and the representation of local regions. A large mask size tends to make StrAttack insensitive to the structure of local regions. In the experimental evaluation of this paper, the best mask sizes that we empirically found are 2x2 for MNIST/CIFAR-10 and 13x13 for ImageNet, respectively.\\n\\nLast but not least, to strengthen the effectiveness and the interpretability of StrAttack, in the revised version we present the potential impacts of StrAttack from a) performance against a robust adversarially trained model (Table 2), b) attack transferability (Table 3), and c) interpretability of complex images (Figure 4).\"}", "{\"title\": \"Thank you for the clarifications\", \"comment\": \"Thank you for the clarifications, in particular for item (a), which explains better why this research is important. I will take a look at the revision when you upload it and I will consider reevaluating your paper.\"}", "{\"title\": \"Interesting technical contribution\", \"review\": \"This paper proposes a method for adversarial attacks on DNNs (StrAttack), designed to exploit the underlying structure of the images. Specifically, it incorporates group-sparsity regularization into the generation of the adversarial samples and uses an ADMM-based implementation to generate the adversarial perturbations.\\n\\nThe paper is structured and written well, with clear articulation of technical details. The experiments and reported results are comprehensive, and clearly showcase the efficacy of the proposed solution. I'm not enough of an expert on the subject matter to comment about the novelty of this proposed approach. However, it would help to elaborate more on the related work (Section 7), clearly contrasting the current method, especially 
its use of structural information for adversarial samples: the theoretical implications, the underlying rationale and, importantly, the benefit over previous lp-norm based approaches.\\n\\nRegarding group sparsity, the assumed structural constraints are unclear: is the sliding mask expected to be only 2x2 or 13x13 (for MNIST/CIFAR-10 and ImageNet, respectively)? What is the impact of larger/smaller or skewed sizes? How sensitive is the method to image types?\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}", "{\"title\": \"Some clarification on our motivation, contributions and potential impacts\", \"comment\": \"We really thank the reviewer for the insightful comments. As a prompt response, we would like to use this opportunity to reiterate and clarify our motivation, contributions and their potential impacts. Meanwhile, we are also preparing a revision to better address the reviewer's comments.\\n\\na) The first contribution, \\\"Structure-driven attack\\\", actually indicates the existence of a stealthier pixel-level adversarial attack under the same norm-bounded threat model, which has not been entirely explored in existing attacks. The motivation of our research stems from devising a more efficient attack that takes advantage of two attacks built on opposite principles - the C\\\\&W attack (or \\\\ell_infty attacks such as I-FGSM), which modifies all pixels, and the one-pixel attack (Su et al., 2017), which only modifies a few pixels. The C\\\\&W attack can achieve small \\\\ell_infty perturbations but has to perturb most pixels (large \\\\ell_0 norm), while the one-pixel attack can achieve an extremely small \\\\ell_0 norm but with a much higher \\\\ell_infty norm.\\n\\nBoth attack methods may lead to higher noise visibility due to perturbing too many pixels or perturbing a few pixels too much. Motivated by these attack methods and under the same threat model (e.g., an \\\\ell_infty constraint), we wonder if there exists a more effective attack that can be as successful as existing attacks but only requires modifying a small subset of pixels. We show that StrAttack is indeed the desired adversarial attack. It is also worth mentioning that the one-pixel attack has a much lower attack success rate on ImageNet than CW and ours. \\n\\nConsequently, the impacts of StrAttack include (i) understanding why the identified regions in the image are vulnerable to adversarial attacks; and (ii) investigating how the identified sparse attack patterns can benefit adversarial attacks/defenses.\\n\\nb) The second and the third contributions are our technical contributions from the algorithmic perspective. The results indicate that powerful attacks could be derived from more advanced optimization techniques. Note that the proposed StrAttack problem formulation cannot be solved using standard optimization solvers, e.g., Adam or proximal gradient algorithms, due to the presence of non-smooth regularizers and hard constraints. To address this technical challenge, we proposed the ADMM solution, which is quite new for finding adversarial perturbations and enjoys the benefit of having an analytical solution at every ADMM subproblem.\\n\\nc) We thank R2 for acknowledging interpretability as an impactful contribution. The proposed idea indeed helps researchers to better explain and visualize the effect of adversarial perturbations. 
Our experimental results, e.g., Figures 1 and 3, clearly show why perturbing fewer, but the `right`, pixels (with group-sparse patterns) suffices to fool DNNs. Those `right` pixels are the ones to which the classifier output is most sensitive, as verified by the adversarial saliency analysis in Sec. 6. They also correspond to the most discriminative region of a class activation map, which demonstrates the interpretability of the proposed structured attack. Also, we would like to clarify that \\\"The mechanisms of adversarial perturbations\\\" referred to the above findings. Based on the feedback, we now realize that 'mechanisms' might not be the best word to describe our contribution, and thus we will rephrase our claim and make it clearer and more accurate. Note that many adversarial attack methods have been proposed in the literature; however, few of them link interpretability with adversarial examples.\"}", "{\"title\": \"An interesting but not entirely novel contribution\", \"review\": \"The paper proposes a novel approach to generate adversarial examples based on structured sparsity principles. In particular, the authors focus on the intuition that adversarial examples in computer vision might benefit from encoding information about the local structure of the data. To this end, lp *group* norms can be used in contrast to standard global lp norms when constraining or penalizing the optimization of the adversarial example. The authors propose an optimization strategy to address this problem and evaluate the proposed approach on real data, comparing it against state-of-the-art competitors, which do not leverage the structured sparsity idea.\\n\\nThe paper is well written and easy to follow. The presentation of the algorithms for i) the non-overlapping and ii) overlapping groups as well as iii) the proposed refinement is clear. The experimental evaluation is interesting and convincing (the further experiments in the supplementary material add value to the overall discussion). \\n\\nThe main downside of the paper is that the proposed idea essentially consists in replacing the standard \\\\ell_p norm penalty/constraints with a group-\\\\ell_p one. While this provides interesting technical questions from the algorithmic perspective, from the point of view of novelty the paper does not appear to be an extremely strong contribution.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"the paper has good technical qualities, but motivation for the research is not explained\", \"review\": \"The paper proposes a method to find adversarial examples in which the changes are localized to small regions of the image. A group-sparsity objective is introduced for this purpose and it is combined with an l_p objective that was used in prior work to define proximity to the original example. ADMM is applied to maximize the defined objective. It is shown that adversarial examples in which all changes are concentrated in just a few regions can be found with the proposed method.\\n\\nThe paper is clearly written and the results are convincing. But what I am not sure I understand is the purpose of this research. Among the 4 contributions listed at the end of the intro, only the last one, Interpretability, seems to have potential in terms of impact. Yet I am not quite sure how \\u201cobtained group-sparse adversarial patterns better shed light on the mechanisms of adversarial perturbations\\u201d. 
I think the mechanisms of adversarial perturbations remain as unclear as they were before this paper.\\n\\nI am not ready to recommend acceptance of this paper, because I think the due effort to explain the motivation for the research and its potential impact has not been made in this case.\\n\\nupd: the discussion with the authors and the resulting edits convinced me that I may have been a bit too strict. I have changed my score from 5 to 6.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}" ] }
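The reviews of this paper repeatedly reference its two technical ingredients: a group-sparsity regularizer in place of the usual global lp penalty, and an ADMM scheme whose subproblems have analytical solutions. For non-overlapping groups, the analytical piece is block soft-thresholding, the proximal operator of the group-L2 norm. Below is a minimal NumPy sketch of that operator over 2x2 pixel groups (the mask size mentioned in the discussion); the threshold value and shapes are illustrative assumptions, and this is not the paper's actual ADMM code.

```python
import numpy as np

def group_soft_threshold(delta, lam, group=2):
    # Proximal operator of the non-overlapping group-L2 norm: every
    # group x group patch of the perturbation is shrunk toward zero as
    # a block, so entire patches vanish, producing group sparsity.
    h, w = delta.shape
    out = np.zeros_like(delta)
    for i in range(0, h, group):
        for j in range(0, w, group):
            block = delta[i:i + group, j:j + group]
            norm = np.linalg.norm(block)
            if norm > lam:
                out[i:i + group, j:j + group] = (1.0 - lam / norm) * block
    return out
```

Inside an ADMM iteration, a step of this form would alternate with an attack-loss minimization and a dual update, which is how the "analytically solvable subproblems" mentioned by the authors arise.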
Syez3j0cKX
Dissecting an Adversarial framework for Information Retrieval
[ "Ameet Deshpande", "Mitesh M. Khapra" ]
Recent advances in Generative Adversarial Networks, facilitated by improvements to the framework and successful application to various problems, have resulted in extensions to multiple domains. IRGAN attempts to leverage the framework for Information-Retrieval (IR), a task that can be described as modeling the correct conditional probability distribution p(d|q) over the documents (d), given the query (q). The work that proposes IRGAN claims that optimizing their minimax loss function will result in a generator which can learn the distribution, but their setup and baseline term steer the model away from an exact adversarial formulation, and this work attempts to point out certain inaccuracies in their formulation. Analyzing their loss curves gives insight into possible mistakes in the loss functions, and better performance can be obtained by using the co-training-like setup we propose, where two models are trained in a co-operative rather than an adversarial fashion.
[ "GAN", "Deep Learning", "Reinforcement Learning" ]
https://openreview.net/pdf?id=Syez3j0cKX
https://openreview.net/forum?id=Syez3j0cKX
ICLR.cc/2019/Conference
2019
{ "note_id": [ "H1gHyxfmgE", "S1xA6c49C7", "ByenCKVcRQ", "rke2DvNcRQ", "Syelph9yp7", "Syg60A2OhQ", "HJeFkJ9D2Q" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544916956759, 1543289542083, 1543289299922, 1543288676407, 1541545143727, 1541095124829, 1541017312614 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper688/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper688/Authors" ], [ "ICLR.cc/2019/Conference/Paper688/Authors" ], [ "ICLR.cc/2019/Conference/Paper688/Authors" ], [ "ICLR.cc/2019/Conference/Paper688/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper688/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper688/AnonReviewer2" ] ], "structured_content_str": [ "{\"metareview\": \"The manuscript centers on a critique of IRGAN, a recently proposed extension of GANs to the information retrieval setting, and introduces a competing procedure.\\n\\nReviewers found the findings and the proposed alternative to be interesting and in one case described the findings as \\\"illuminating\\\", but were overall unsatisfied with the depth of the analysis, and in more than one case complained that too much of the manuscript is spent reviewing IRGAN, with not enough emphasis on, and detailed investigation of, the paper's own contribution. Notational issues and certain gaps in the related work and experiments were addressed in a revision, but the paper still reads as spending a bit too much time on background relative to the contributions. Two reviewers seemed to agree that IRGAN's significance made at least some of the focus on it justifiable, but one remarked that SIGIR may be a better venue for this line of work (the AC doesn't necessarily agree).\\n\\nGiven the nature of the changes and the status of the manuscript following revision, it does seem like a more comprehensive rewrite and reframing would be necessary to truly satisfy all reviewer concerns. I therefore recommend against acceptance at this point in time.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"An interesting contribution, but lacking in depth.\"}", "{\"title\": \"Response\", \"comment\": \"We thank the reviewer for carefully reading the paper and understanding the crux even though our writing was not very clear in a few sections. As pointed out by the reviewer, the main motivation of the paper was to point out a few loopholes in IRGAN, which led to the proposal of a co-training based setup. It further shows that co-operative setups might perform as well as, if not better than, adversarial setups. The following are a few comments which we hope address the reviewer's concerns.\\n\\n1) W1 \\u2013 We understand why the reviewer feels there is a lack of rigor. But the substitution in question is very similar to the substitution made in the original GANs paper to allow easier flow of gradient. The substitution results in equivalent loss functions because the optimal value does not change, though the convergence speed etc. might change. But we are theoretically motivating the substitution. We agree we should have been clearer about it and we have added a comment about it in the section. Both log(1-x) and -log(x) are maximized at x=0. This can be seen graphically at this link [https://www.wolframalpha.com/input/?i=plot+log(1-x)+and+-log(x) ].\\n2) W2 \\u2013 We are sorry that the reviewer felt there were missing details in the paper. 
We did not have results for the co-training model at that time; we have now run the experiments and added the results.\\n3) We agree that the details about the model and parameters are not very clear. We have added an Appendix to help clear up a few doubts about the same.\\n\\nAn intuitive explanation for why the co-training model does better than a single discriminator is that it decorrelates the mistakes that the models are making. The mistakes made by one model are fed into the other model, instead of self-feedback.\\n\\nApart from this, we have made efforts to make the paper more readable. We have also added a section which connects the framework to Conditional GANs, Contextual Multi-Armed Bandits and the Actor-Critic algorithm.\\n\\nWe thank the reviewer for such informative comments and feedback and hope that our revised version addresses some concerns and is more readable.\"}", "{\"title\": \"Response\", \"comment\": \"We thank the reviewer for going over the paper carefully and giving very useful feedback. The following summarizes the key idea of the paper, which the reviewer has related to noise contrastive estimation.\\nWe agree that it is important to generate the negative samples correctly. But unlike in [1], the generated negative samples depend on the query directly. In IRGAN, the generator\\u2019s score Score(q,d) for each document paired with the given query is used. A document is sampled after normalizing this score and treating the normalized scores as a probability distribution. Therefore, P(d|q) \\\\propto Score(q,d), where Score represents the generator\\u2019s score. In the co-training model that we proposed, instead of using the generator to sample negative data points, we use another discriminator. This makes the setup symmetric. However, this is not just negative sampling: the two models run in a co-operative setup, with each model feeding wrongly matched examples to the other model. For an intuitive explanation, if Model 1 thinks with a high probability that Document 1 matches Query 1, which is wrong, then the pair (Document 1, Query 1) is fed as a negative sample to Model 2. In other words, query-document pairs which are matched wrongly by one model are fed to the other model. We have included better explanations in the paper as well.\\n\\nThe following are paragraph-wise comments to explain the inaccuracies which we addressed and concerns that the reviewer has pointed out.\\n1) IRGAN has gained a lot of traction recently. Though the formulation has some of the key ingredients of an adversarial framework for information retrieval, there are key issues we need to answer before we can use it. This paper acknowledges the good parts of IRGAN while pointing out potential loopholes and improvements to the same.\\n2) We understand that the comment about the instability of GANs was a little unnecessary, and we have removed it. The use of \\u201c|\\u201d in G is borrowed from [2], the Conditional GANs paper; we apologize for the confusion.\\n3) We agree that there was some unnecessary related work and background work. 
We feel that IRGAN\\u2019s framework is important to understand and have retained that, but we have trimmed down the sections so it is easier to follow now.\\n4) We are sorry that the use of the pipe has led to some confusion and have added a comment about it. We wanted to stick to the exact equations used in IRGAN. D(q|d), as you pointed out, is notationally similar to D(q,d). The italics are mainly being used to comment on IRGAN\\u2019s framework. We wanted to make a distinction between the IRGAN paper\\u2019s comments and our own. All the text without the italics is a paraphrased version of IRGAN. \\u201cr\\u201d is used to denote the rank of the document with respect to that query and is borrowed directly from the IRGAN paper. It can be ignored and the details of the formula don\\u2019t change. But we wanted to use the exact same equations. We have modified them for easier understanding though.\\n5) The crux of the section the reviewer is talking about is that the loss functions that the discriminator and generator are optimizing are exactly opposite. Even though the real joint does not factor in the generator\\u2019s gradient, it is important to see that the sign of the gradient is determined by it, which is in turn determined by the discriminator. REINFORCE algorithms are very sensitive to the sign of the rewards, and this can be seen on Page 15 in [3].\\n\\nApart from this, we have made efforts to make the paper more readable. We have also added a section which connects the framework to Conditional GANs, Contextual Multi-Armed Bandits and the Actor-Critic algorithm.\\n\\nWe thank the reviewer for a careful perusal of the paper and value the feedback given. We sincerely hope that the reviewer modifies the score based on our revised submission and that the paper is easier to understand now.\\n\\n[1] - Mikolov, Tomas, et al. \\\"Distributed representations of words and phrases and their compositionality.\\\" Advances in neural information processing systems. 2013.\\n[2] - Mirza, Mehdi, and Simon Osindero. \\\"Conditional generative adversarial nets.\\\" arXiv preprint arXiv:1411.1784 (2014).\\n[3] - http://rail.eecs.berkeley.edu/deeprlcourse/static/slides/lec-5.pdf\"}", "{\"title\": \"Response\", \"comment\": \"We thank you for carefully reading and understanding the crux of the paper. Indeed, we feel that IRGAN needs to be further studied before it is used in practice, and the main aim of the paper has been to point out loopholes in the formulation and motivate the proposed co-training model. We have made a few revisions to the paper after taking your suggestions into consideration and we hope they sufficiently address your concerns.\\n\\n1) We have made the notation clearer in the equations and hope that they can be easily understood now.\\n2) We have added a separate section with the proposed models as suggested. We have also added what motivated us to choose those models.\\n3) Though we agree that IRGAN focusses on Information-Retrieval, we feel that the framework itself is more general and can be applied to tasks like content recommendation and Question Answering. In the future, the same framework could also be used for natural language generation (of documents) or image generation (to address the query needs). 
For us, the application in dialogue generation is the most interesting, and we see great potential in this method.\\n4) We have added a section linking IRGAN to other works like Conditional GANs, Contextual Multi-armed bandits and actor-critic algorithms. We feel that these, especially the last one, will help make intuitive connections to the problem at hand.\\nWe hope the paper is easier to follow now. We thank you for your valuable feedback and time.\"}", "{\"title\": \"Interesting findings though depth and rigor could have been better\", \"review\": \"This paper tries to argue that the formulation of IRGAN (a method from 2017 that aimed to use GANs for the standard IR task of estimating query-document relevance) is not well-founded and has inherent weaknesses. Specifically the paper claims that (unlike regular GANs and what was likely intended by the authors of IRGAN) the discriminator and generator are working against each other. The paper then aims to show a couple of different, more well-founded (generator-free) setups that perform about as well as (if not better than) the original IRGAN work.\\n\\nOverall I found the work to be quite interesting and the findings to be illuminating. That said I think the paper notably lacked rigor and depth, which definitely hurt its quality.\\n\\nBelow are my thoughts on the different facets as well as a more detailed strengths/weaknesses breakdown:\", \"quality\": \"Above average\\nAs mentioned I think some of the findings are illuminating and thus overall the paper scores well on this aspect.\", \"clarity\": \"Slightly above average\\nWhile the paper is largely easy to follow, there are certain key sections that are not well explained / have fundamental errors.\", \"originality\": \"Strong\", \"significance\": \"A little below average\\nMy (main) concern with this work is that it is missing the rigor and depth needed for readers to gain a deeper understanding of the fundamental issue so as to be able to rectify it in future works.\\n\\n---\\n\\nStrengths / Things I liked about the work:\\n\\n+ The topic / theme of the work: I believe as a community we should encourage more such works that take a critical deep dive into recently proposed methods that may have some inherent weaknesses. As the authors noted the IRGAN work has become quite popular despite some of these (previously unknown) issues.\\n\\n+ The experimental results in general do a fair job illustrating the likely issue (though I would have liked to see more rigor and depth here as well, as detailed below)\\n\\nWeaknesses / Things that concerned me:\\n\\n- (W1) Lacking rigor / depth: One of my big concerns with this work is that the analysis to demonstrate the inherent flaws of IRGAN is fairly shallow and not detailed enough. For example, Section 5 (which should have been the key section of the work) is quite poorly written and not rigorous enough. Claiming that log(1-z) can be replaced with - log(z) is incorrect -- how can this substitution be made as is?\\n\\nOverall my sense after reading the work is that I understand that the IRGAN formulation is not completely well-formed in terms of discriminator/generator synergy (the pairwise formulation has the additional issue of separating real pairs rather than higher rank-lower rank pairs). 
However I do not buy that the generator and discriminator directly oppose each other as is claimed in the work (I believe this arises only due to the incorrect claim that log(1-z) can be replaced with -log(z)).\\n\\nThus at the end of the day I feel the reader is willing to buy that there is an issue with the formulation, but they do not fully understand it, nor do they understand enough to rectify the underlying issues. To me that was unfortunate as the paper would have been an excellent work if it had done so.\\n\\n- (W2) Missing some experimental results / deeper insights: There were some notable empirical results that were missing or not provided, which raised some concerns in my mind. For instance I don't see the co-training approach listed for the MovieLens dataset. Why so? The authors make a secondary claim that they are able to improve upon IRGANs via their proposed approach but then they do not substantiate this on all the datasets, which seems like a notable oversight.\\n\\n - (W3) Missing details: To add to the above I think the authors can clearly be more detailed in describing, for instance, the models for D, G, p_\\\\psi, etc. Right now I can speculate what they are but I don't think a reader should be expected to speculate in such cases. Likewise empirical details about the datasets and their sizes could easily have been added.\\n\\nAlso the paper presents the IRGAN pairwise approach and mentions pairs in a couple of places but I don't see an approach that can learn from pairs among the ones proposed.\\n\\nAnother example is the two proposed models Fig 2a (Only discriminator) vs Fig 2b (cotraining). I don't see an explanation or intuition for why 2b is expected to be better than 2a. Given the claims of the work I would have wanted to understand this better.\\n\\n- (W4) Significance testing: This is an important experimental process to understand the validity of some of the claims. While I understand it is not the main claim of the paper, understanding the significance of these differences helps put things in perspective. I would strongly urge the authors to add this for all of their experimental results, not just the ones where the proposed models are outperformed by IRGANs.\\n\\n- Lastly I would urge the authors to be rigorous and clear in their notations. For example in the equation in section 4.5, \\\"o'\\\" occurs out of nowhere.
They also provide experimental studies and show that the superiority of IRGAN in the experiments is mainly because of the discriminator maximizing the likelihood of the real data and not because of the generator.\", \"strong_points\": \"Considering that IRGAN has become popular since it was published, I should say the analysis in this paper is important for researchers in the IR domain.\", \"concerns_or_suggestions\": \"1.\\tApart from the analysis of IRGAN, the contribution of this paper is limited. Most parts of this paper introduce GANs and IRGAN. Only Section 5 focuses on the analysis. The methods claimed as newly proposed, Single Discriminator and Co-training, are good for supporting the analysis but are not quite novel.\\n2.\\tIt is strange to introduce the two models, Single Discriminator and Co-training, in the experimental setting section. I would suggest separating them out and introducing them earlier.\\n3.\\tThe topic of this paper is more related to the IR domain. It would be better to publish it at SIGIR, together with the IRGAN paper.\\n4.\\tBesides, if it is possible, I would suggest that researchers who have direct experience implementing and studying IRGAN give more comments on this paper.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Good idea, but paper suffers on many important points\", \"review\": \"This paper trains an information retrieval (IR) model by contrasting the joint query-document distribution, p(q, d), with negative samples drawn from a resampling of the product of marginals, p(q) x p(d). They use a second discriminator to provide the re-weighting (I believe picking the top negative sample from the other model) and train this other model in a way that mirrors the first. They also attempt to point out some theoretical problems with a competing model, IRGAN, which uses a generator that is trying to model the joint.\\n\\nWhile I like the proposal idea, I think the paper has too many problems to warrant publication. First, the story is very disappointing. The authors phrase most of the paper as a critique of IRGAN, but this critique falls short. Really this is more of a paper about where to get negative samples when training a model of the joint (or the log-ratio in this case). Using negative samples from real data with noise contrastive estimation [1] is found in numerous works in NLP [2][3], and has gained some recent attention in the context of representation learning [4][5]. The first algorithm proposed is essentially doing a sort of ranking loss on negative samples, which mirrors similar works [6]. In fact, the generator in IRGAN could be viewed as just a parametric / adaptive negative sampling distribution in the context of NCE for the ultimate purpose of learning an estimate of the log-ratio. The most interesting part of this work, I think, is the co-training, i.e., using another model to help re-sample, and I think this idea should be explored in more detail.\\n\\nSecond, the paper spends far too much time revisiting prior work rather than addressing their own model, doing more analysis, providing more insight.\\n\\nThird, the paper is just poorly written. The notation is confusing, some of the equations are unclear (I have no idea how \\\"r\\\" is used in any of this), and the arguments about the baseline in IRGAN don't really make any sense.\", \"notes\": \"P1\\nI don't really follow why IRGAN is so central to this work. 
Good ideas aren't difficult to motivate, especially if empirically everything works out.\\n\\nP2\\nI'm having trouble with claims, especially more recently, about GAN instability, particularly since numerous approaches [7][8] seem to have more or less solved the problem.\\n\\nThe use of \\\"|\\\" in G is awfully confusing.\\nP3\\nAlmost 2 pages of unnecessary background.\\n\\nP4\\nWhy are we using \\\"|\\\" in functions? What's wrong with \\\",\\\"?\\ntheta = \\\\theta\\nI don't understand the point of the quote (in italics).\\nWhat happened to \\\"r\\\" in all of this?\\nThe last two equations and their relationship could be clearer.\\n\\nYou use italics, so is this supposed to be a quote? But then you have a section which attempts to show this.\\nP5\\nI have no idea what's supposed to be going on in 5). The samples from the real joint don't factor in the generator gradient, or at least it's absolutely not clear that this pops out of the baseline? Then you switch from log (1 - x) to - log x and there's some claim about this violating the adversarial objective? It took me more than a few reads to figure out what the equation at the bottom of P5 is doing: is this resampling? It's fairly unclear.\\n\\n[1] Gutmann, Michael U., and Aapo Hyv\\u00e4rinen. \\\"Noise-contrastive estimation of unnormalized statistical models, with applications to natural image statistics.\\\"\\n[2] Mnih, Andriy, and Koray Kavukcuoglu. \\\"Learning word embeddings efficiently with noise-contrastive estimation.\\\"\\n[3] Mikolov, Tomas, et al. \\\"Distributed representations of words and phrases and their compositionality.\\\"\\n[4] Oord, Aaron van den, Yazhe Li, and Oriol Vinyals. \\\"Representation learning with contrastive predictive coding.\\\"\\n[5] Hjelm, R. Devon, et al. \\\"Learning deep representations by mutual information estimation and maximization.\\\"\\n[6] Faghri, Fartash, et al. \\\"VSE++: Improving Visual-Semantic Embeddings with Hard Negatives.\\\"\\n[7] Miyato, Takeru, et al. \\\"Spectral normalization for generative adversarial networks.\\\"\\n[8] Mescheder, Lars, Andreas Geiger, and Sebastian Nowozin. \\\"Which Training Methods for GANs do actually Converge?\\\"\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
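Much of the disagreement recorded above comes down to whether the saturating term log(1-x) can be swapped for the non-saturating -log(x), as in the original GAN paper. A quick numerical check, our own illustration rather than anything from the paper, shows why both sides have a point: the two terms share the same maximizer on (0, 1), but their gradients differ, which matters for REINFORCE-style updates that are sensitive to the sign and scale of rewards.

```python
import numpy as np

# Both log(1 - x) and -log(x) are decreasing on (0, 1), so both are
# maximized as x -> 0: the substitution preserves the optimum.
x = np.linspace(1e-3, 1 - 1e-3, 5)
print(np.log(1 - x))     # saturating term from the minimax objective
print(-np.log(x))        # substituted non-saturating term

# Their gradients are -1/(1 - x) versus -1/x: the first is largest in
# magnitude near x = 1, the second near x = 0, so the two objectives
# drive training dynamics very differently even with the same optimum.
print(-1.0 / (1.0 - x))
print(-1.0 / x)
```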
HyVbhi0cYX
Complexity of Training ReLU Neural Networks
[ "Digvijay Boob", "Santanu S. Dey", "Guanghui Lan" ]
In this paper, we explore some basic questions on the complexity of training neural networks with the ReLU activation function. We show that it is NP-hard to train a two-hidden-layer feedforward ReLU neural network. If the dimension d of the data is fixed, then we show that there exists a polynomial time algorithm for the same training problem. We also show that if sufficient over-parameterization is provided in the first hidden layer of the ReLU neural network, then there is a polynomial time algorithm which finds weights such that the output of the over-parameterized ReLU neural network matches the output of the given data.
[ "NP-hardness", "ReLU activation", "Two hidden layer networks" ]
https://openreview.net/pdf?id=HyVbhi0cYX
https://openreview.net/forum?id=HyVbhi0cYX
ICLR.cc/2019/Conference
2019
{ "note_id": [ "BJeZO-6yeV", "SkgrWz6n2m", "rylXTxh92X", "SkleFFb9nX" ], "note_type": [ "meta_review", "official_review", "official_review", "official_review" ], "note_created": [ 1544700264968, 1541358076900, 1541222587331, 1541179767766 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper687/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper687/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper687/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper687/AnonReviewer1" ] ], "structured_content_str": [ "{\"metareview\": \"Dear authors,\\n\\nAll reviewers agreed that, while the problem considered was of interest, the theoretical result presented in this work was of too limited scope to be of interest for the ICLR audience.\\n\\nBased on their comments, you might want to consider a more theoretically-oriented venue for such a submission.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Restricted theoretical result\"}", "{\"title\": \"Complexity of training ReLU Neural Networks\", \"review\": \"This paper claims results showing ReLU networks (or rather, a particular architecture) are NP-hard to learn. The authors claim that results that essentially show this (such as those by Livni et al.) are unsatisfactory as they only show this for ReLU networks that are fully connected. However, the authors fail to criticize their own paper for only showing this result for a network with 3 gates. For the same reason that the Livni et al. results don't imply anything for fully connected networks, these results don't imply anything for larger networks. Conceivably certain gadgets could be created to ensure that the larger networks are essentially forced to ignore the rest of the gates. This line of research isn't terribly interesting and furthermore the paper is not particularly well written.\\n\\nFor learning ReLUs, it is already known (assuming conjectures based on hardness of improper PAC learning) that functions that can be represented as a single hidden layer ReLU network cannot be learned even using a much larger network in polynomial time (see for instance the Livni et al. paper, etc.). Proving NP-hardness results for proper learning isn't as useful, as such results are usually very restricted in terms of the architectures the learning algorithm is allowed to use. However, if they do want to show such results, I think the NP-hardness of learning 2-term DNF formulas will be a much easier starting point. \\n\\nAlso, I think there is a flaw in the proof of Lemma 4.1. The function f *cannot* be represented by the networks the authors claim to use. In particular the 1/\\\\eta outside the max(0, x) term is not acceptable.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"moderately interesting complexity result but perhaps the wrong venue\", \"review\": \"The main result of this work is to prove that a two-layer neural network with 2 hidden neurons in the first layer and 1 hidden neuron in the second layer is NP-hard to train when all activation functions are ReLU. Similar results with the hard-thresholding activation function were proved by Blum & Rivest in 1988. The main proof consists of a reduction from the 2-hyperplane separability problem, which was known to be equivalent to NNs with hard-thresholding activation functions.\", \"quality\": \"Moderate. 
Good solid formal results but nothing really surprising.\", \"clarity\": \"The manuscript is mostly well-written, and the authors gave proper credit to prior works. The only minor issue is perhaps the abuse of the variable w in the introduction and the same variable w (with an entirely different meaning) in the rest of the paper. It would make more sense to change the w's in the introduction to d's.\", \"originality\": \"This work is largely inspired by Blum & Rivest's work, and builds heavily on some previous work including Megiddo and Edelsbrunner et al. While there is certainly some novelty in extending prior work to the ReLU activation function, it is perhaps fair to say the originality is moderate.\", \"significance\": \"While the technical construction seems plausible and correct, the real impact of the obtained results is perhaps rather limited. This is one of those papers where it is certainly nice to have all the details worked out, but none of the obtained results is really surprising or unexpected. While I do agree there is value in formally documenting the authors' results, this conference is perhaps not the right venue.\", \"other_comments\": \"The discussion on page 2 (related literature) seems odd. Wouldn't the results of Livni et al. and Dasgupta et al. already imply the NP-hardness of fully connected ReLU networks, in a way similar to how one obtains Corollary 3.2? If this is correct, then the contribution of this work is basically a refined complexity analysis where the ReLU network is shrunken to 2 layers with 3 neurons?\\n\\nI really wish the authors had tried to make their result more general, which in my opinion would make this paper more interesting and novel: can you extend the proof to a family of activation functions? It is certainly daunting to write separate papers to prove such a complexity result for every activation function... The authors also conveniently made the realizability assumption. What about the more interesting non-realizable case?\\n\\nThe construction to prove Theorem 3.4 bears some similarity to a related result in Zhang et al. The hard sorting part appears to be different.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Review\", \"review\": \"This paper shows that training of a 3-layer neural network with 2 hidden nodes in the first layer and one output node is NP-complete. This is an extension of the result of Blum and Rivest'88. The original theorem was proved for threshold activation units and the current paper proves the same result for ReLU activations. The authors do this by reducing the 2-affine separability problem to that of fitting a neural network to data. The reduction is well written and is clever. This is a reasonable contribution although it does not add significantly to the current state of the art.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}" ] }
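The hardness results debated above concern one specific tiny architecture: two ReLU units in the first hidden layer feeding a single ReLU output unit. For concreteness, here is a sketch of the forward pass of that network; the weight names are hypothetical free parameters, not a construction from the paper, whose contribution is precisely that deciding whether such weights fit a given dataset is NP-hard.

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def tiny_net(x, w1, b1, w2, b2, v, c):
    # Two ReLU units in the first hidden layer, one ReLU output unit.
    # x, w1 and w2 live in R^d; v weighs the two hidden activations.
    h = np.array([relu(w1 @ x + b1), relu(w2 @ x + b2)])
    return relu(v @ h + c)
```

The training question, whether there exist (w1, b1, w2, b2, v, c) fitting all given (x, y) pairs, is what the reduction from 2-hyperplane separability shows to be NP-hard.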
Skz-3j05tm
Graph Convolutional Network with Sequential Attention For Goal-Oriented Dialogue Systems
[ "Suman Banerjee", "Mitesh M. Khapra" ]
Domain-specific goal-oriented dialogue systems typically require modeling three types of inputs, viz., (i) the knowledge-base associated with the domain, (ii) the history of the conversation, which is a sequence of utterances and (iii) the current utterance for which the response needs to be generated. While modeling these inputs, current state-of-the-art models such as Mem2Seq typically ignore the rich structure inherent in the knowledge graph and the sentences in the conversation context. Inspired by the recent success of structure-aware Graph Convolutional Networks (GCNs) for various NLP tasks such as machine translation, semantic role labeling and document dating, we propose a memory augmented GCN for goal-oriented dialogues. Our model exploits (i) the entity relation graph in a knowledge-base and (ii) the dependency graph associated with an utterance to compute richer representations for words and entities. Further, we take cognizance of the fact that in certain situations, such as when the conversation is in a code-mixed language, dependency parsers may not be available. We show that in such situations we can instead use the global word co-occurrence graph to enrich the representations of utterances. We experiment with the modified DSTC2 dataset and its recently released code-mixed versions in four languages and show that our method outperforms existing state-of-the-art methods, using a wide range of evaluation metrics.
[ "Goal-oriented Dialogue Systems", "Graph Convolutional Networks" ]
https://openreview.net/pdf?id=Skz-3j05tm
https://openreview.net/forum?id=Skz-3j05tm
ICLR.cc/2019/Conference
2019
{ "note_id": [ "rkl8Gu1Hg4", "r1g3xb5tRX", "BJgSYWnIR7", "Hye1xZ3I0Q", "ryxiXl2UCm", "r1lyUmTgTm", "HJeBukc62m", "r1lCQGAFhm", "SklThhKisX" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_review", "official_review", "official_review" ], "note_created": [ 1545037838299, 1543246067686, 1543057788630, 1543057639409, 1543057442942, 1541620551224, 1541410669022, 1541165606411, 1540230325187 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper686/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper686/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper686/Authors" ], [ "ICLR.cc/2019/Conference/Paper686/Authors" ], [ "ICLR.cc/2019/Conference/Paper686/Authors" ], [ "ICLR.cc/2019/Conference/Paper686/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper686/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper686/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper686/AnonReviewer3" ] ], "structured_content_str": [ "{\"title\": \"Reply\", \"comment\": \"Thanks for the responses & updated write-up.\\n\\nRe: feedback on diagrams: They look good. The figures with and without the BiRNN are very similar (you could simply say that in order to obtain the no-BiRNN case, one must remove the green boxes), but if space allows it can't hurt to have them both. Using two short sentences instead of a longer one (and therefore showing two disconnected trees through the dotted arrows) would have been better (since one may find the encoding of multiple trees confusing, see R3 comment), but the current version is OK too.\\n\\nOverall, I think the paper could be interesting for applied groups, due to the good results and empirical insights. With the improved experimental results in the newer version, the merits of each proposed modification are highlighted well and that itself could be interesting for an applied (NLP) conference.\\n\\nThat said, I share R2's concern that, in the context of ICLR, the paper is limited in novelty. GCNs have been applied to similar NLP tasks before, including question answering, which limits the contribution of the paper. While the authors highlight various differences w.r.t. that setting, they are all incremental. \\n\\nGiven the new information from the authors and the other reviewers' concerns, I maintain my general score.\"}", "{\"title\": \"Clearer paper\", \"comment\": \"Many thanks for adding the figures - they have improved my understanding of the paper and I think they make it easier to understand.\"}", "{\"title\": \"Authors\\u2019 response to Reviewer 3\", \"comment\": \"We would like to thank you for your comments and valuable suggestions on improving the clarity of the paper. Below, we provide updates on some of the improvements that we have been able to make:\\n\\n1) Difference from QA: To the best of our knowledge, the only work on using GCNs for QA (De Cao et al., 2018) uses entity graphs as opposed to the dependency graphs used in our work. An entity graph essentially draws an edge between the same entities appearing in different sentences, whereas a dependency graph contains semantic edges. Hence, the graph that we are operating on is very different from the entity graph used in the above paper. Further, in the case of QA, there is a query and a document, whereas in our case there is a context/history in addition to the query and KB. This adds some more complexity to our model. 
For example, our sequential attention mechanism also considers the history while paying attention to the KB. Further, it also computes a query-aware representation for the history. Finally, while producing the output, the decoder also pays attention to the history. These differences are not groundbreaking, but we mention them here to make the distinction between the two tasks clear and to highlight the additional components in our model. \\n\\n2) How do we collectively represent all trees as a graph? This was in the context of computing a representation for the dialogue history. The history contains multiple sentences. We first create a dependency tree for each sentence. The final graph for the history is a simple collection of these individual (disconnected) trees. Just to be clear, currently we do not have any edges between words in two different sentences (hence all the individual sentence trees are disconnected from each other).\\n\\n3) Regarding complexity and choosing parameter values: Our final model (RNN+GCN-SeA) has ~4M parameters as compared to the vanilla RNN+attention model, which has ~2M parameters. These parameters were learned using ADAM, with a batch size of 32 and an initial learning rate of 0.0006. We found that the model trains in ~30 epochs. In addition, we would like to clarify that the hyperparameters of the model were chosen using a validation set.\\n\\n4) Clarity: Indeed, in hindsight and based on similar comments by Reviewer 1, we agree that we could have made things clearer by adding a diagram. We have included 2 diagrams in the updated version of the paper and we hope they clarify things. Regarding the three models in Section 5.3, they would only differ in the type of parse tree edges (last edge type in the legend) shown in Figure 1. Please give us your feedback on the diagrams and whether they can be improved to make things clearer.\\n\\n5) Link to Dataset: We used the dataset released by Banerjee et al., 2018, which is available at the following URL: https://github.com/sumanbanerjee1/Code-Mixed-Dialog . We plan to include results on the In-Car dataset also. We are hopeful that we will have these results ready in the final version of the paper.\\n\\n6) Thanks for pointing out the typos. We have fixed them in the updated version of the paper.\"}", "{\"title\": \"Authors\\u2019 response to Reviewer 2\", \"comment\": \"We would like to thank you for suggesting additional experiments for improving the paper. We have been able to run these experiments and would like to update you about the results:\\n\\n1) Effects of GCN: We have now added detailed ablation studies (please see Table 8 in Appendix D) including comparisons with basic RNN based models and basic attention models. In particular, we have now compared RNN+GCN-SeA with RNN-SeA. The results indeed suggest that adding GCNs on top of RNNs helps. Our analysis also shows that our sequential attention outperforms the basic (Bahdanau) attention. Please see the \\u201cAblations\\u201d part of Section 6 and Table 8 in Appendix D. Also, the code for our model and these ablation studies will be made publicly available.\\n\\n2) Comparative experiments: We have reported Entity F1 scores for all our experiments and again find that w.r.t. this metric our model mostly outperforms existing approaches (including some new baselines that we have added for the ablation study). We were not very keen on the bAbI dataset since existing research (Hybrid Code Networks, Williams et
al., 2017) shows that it is possible to achieve 100% performance on this dataset using simple models (not surprising given that this is a synthetic dataset). Hence, there is not much scope for introducing more complex models such as the one proposed in this paper. We plan to include results on the In-Car dataset and we are hopeful that we will have these results ready in the final version of the paper.\\n\\n3) Yes, indeed, Mem2Seq does not outperform seq2seq in all experiments. In that sense, you are correct in saying that it is not a SOTA model. By SOTA, we incorrectly meant that it is the most recent model published on this dataset. \\n\\n4) Copy mechanism: In addition to the Mem2Seq model of (Madotto et al., 2018), we have now added the comparison with the model of (Eric and Manning, 2017) which uses a copy mechanism. Our model outperforms both of these models. In principle, we should be able to augment our model with a copy mechanism but this may be a non-trivial extension of our model. This is definitely worth trying but we are not sure if we will be able to add this to the current version of the paper. We apologize for this (we don't want to commit to something that we may not be able to deliver).\"}", "{\"title\": \"Authors\\u2019 response to Reviewer 1\", \"comment\": \"We would like to thank you for some great suggestions on strengthening the paper. We must confess that while we had some of these on our to-do list, there were a few that we hadn't actually thought of. We have now been able to add these experiments and we believe it has definitely helped us improve the quality of the paper. Below we give a pointwise update about the new experiments.\\n\\n1) Passing information across the KB tree and query/history tree by aligning query/history elements with the KB elements: We were able to implement this and did a thorough hyperparameter tuning across all languages. We have included these results in the paper (RNN+CROSS-GCN-SeA in Tables 1, 2) but the short summary is that there was not much change in the BLEU, ROUGE and per-response accuracy and only a marginal improvement in the Entity F1-score for En-DSTC2 and Ta-DSTC2. We had expected the entity F1-score to improve significantly across all languages since we are explicitly linking entities in the KB with entities in the query/history but unfortunately this was not the case. Initial analysis suggests that given that the task is relatively simple, even the base model, which does not explicitly pass information across the trees, is still able to capture the relevant information.\\n\\n2) Ablation tests including comparisons with basic RNN based models and basic attention models: This was a bad miss on our part but now we have been able to do a thorough ablation study with the following experiments, where we try to evaluate (i) the need for GCNs, (ii) the need for our sequential attention mechanism and (iii) the need for combining RNNs with GCN:\\n\\na) RNN with attention (the basic seq2seq+attention model of Bahdanau et al.,
2015)\\nb) GCN with Bahdanau attention [does not use RNN or our sequential attention]\\nc) RNN+GCN with Bahdanau attention [does not use our sequential attention]\\nd) RNN with our sequential attention [does not use GCNs]\\ne) RNN+GCN with our sequential attention [Our Final Model]\\n\\nWe have included these results for all languages in the updated version of the paper (see Table 8 in Appendix D and the \\u201cAblations\\u201d part of Section 6) and the main observations are summarized below:\\n\\ni) GCNs do not outperform RNNs independently: In general, the performance of GCN-Bahdanau attention < RNN-Bahdanau attention.\\nii) Our sequential attention outperforms Bahdanau attention: In general, the performance of GCN-Bahdanau attention < GCN-our_seq_attention, RNN-Bahdanau attention < RNN-our_seq_attention and RNN+GCN-Bahdanau attention < RNN+GCN-our_seq_attention. However, note that RNN-Bahdanau attention < RNN-our_seq_attention holds for BLEU and all ROUGE metrics but not for Entity F1 and exact match accuracy. We are analyzing this further and will hopefully be able to add some insights in the final version of the paper. \\niii) Combining GCNs with RNNs helps: In general, RNN-our_seq_attention < RNN+GCN-our_seq_attention.\\n\\nOverall, the best results are always obtained by our final model which combines RNN, GCN and sequential attention. Also, the code for our model and these ablation studies will be made publicly available.\\n\\n3) Motivation behind attention: The motivation behind using a sequential attention mechanism was as follows: the current utterance, which we refer to as the query, sets the stage for what comes next (the response). Hence we use this query to attend to only the important parts in the history (essentially, the history can be long and we just want to focus on things which are relevant for the last utterance). Once we have identified relevant portions of the history and computed an attention-weighted representation for the history, we are ready to identify the important concepts from the KB. To achieve this effect we use the sequential attention mechanism.\\n\\n4) GCN on top of an established pipeline: experiment c in point 2 above.\\n\\n5) Better notation and figures: Indeed, in hindsight, we agree that some of our choices were not very intuitive. We have added 2 diagrams which hopefully make things clear. It would be great if you can give us feedback on the diagrams.\\n\\n6) Clarity on code-mixing: The statistics about the level of code-mixing, level of structure, etc. are mentioned in the original paper (Banerjee et al., 2018) which introduced the dataset. As suggested, to make the paper self-contained we have added the important statistics in this paper and some examples of code-mixed conversations from the dataset (Appendix A). Note that there is a lot of work on processing code-mixed text (for example, POS tagging of code-mixed text, sentiment analysis of code-mixed text, information retrieval using code-mixed queries, etc.). However, there is not much work on code-mixed dialogues because this dataset was only released recently (COLING 2018). To the best of our knowledge, there is no work on building parsers for code-mixed languages which produce parse trees.\\n\\n7) We have fixed the typos and added the relevant reference.\"}", "{\"metareview\": \"This paper describes a graph convolutional network (GCN) approach to capture relational information in natural language as well as knowledge sources for goal-oriented dialogue systems. 
Relational information is captured by dependency parses, and when there is code switching in the input language, word co-occurrence information is used instead. Experiments on the modified DSTC2 dataset show significant improvements over baselines.\\nThe original version of the paper lacked comparisons to some SOTA baselines, as also raised by the reviewers; these are included in the revised version.\\nAlthough the results show improvements over other approaches, it is arguable that BLEU and ROUGE scores are not good enough for this task. Inclusion of human evaluation in the results would be very useful.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Good performance but not much novelty\"}", "{\"title\": \"Interesting topic, requires a somewhat better analysis\", \"review\": \"The current paper proposes using Graph Convolutional Networks (GCN) to explicitly represent and use relational data in dialog modeling, as well as an attention mechanism for combining information from multiple sources (dialog history, knowledge base, current utterance). The work assumes that the knowledge base associated with the dialog task has an entity-to-entity relationship format and can be naturally expressed as a graph. The dependency tree of dialog utterances can also be expressed as a graph, and the dialog history as a set of graphs. To utilize this structure, the proposed method uses GCNs whose lowest layer embeddings are initialized with the entity embeddings or via outputs of standard RNN-like models. The main claim is that the proposed model outperforms the current state-of-the-art on a goal-oriented dialog task.\\n\\nThe idea of explicitly modeling the relational structure via GCNs is interesting. However, the use of GCNs independently per sentence and per knowledge-base is a bit disappointing, since it does not couple these sources of information in a structured way. Instead, from my current understanding, the approach merely obtains better representations for each of these sources of information, in the same way it is done in the related language tasks. For instance, have you considered passing information across the trees in the history as well? Or aligning the parsed query elements with the KB elements?\\n\\nThe results are very good. That said, a source of concern is that the model is only evaluated as a whole, without showing which modification brought the improvements. The comparison between using/not using RNNs to initiate the first GCN layer is promising, but why not compare to using only an RNN as well? Why not compare the various encoders within an established framework (e.g. without the newly introduced attention mechanism)? Finally, the attention mechanism, stated as a contribution, is not motivated well.\", \"clarity\": \"The notation is described well, but it's not terribly intuitive (the query embedding is denoted by c, the history embedding by a, etc.), making section 4.4 hard to follow. A figure would have made things easier to follow, esp. due to the complexity of the model. A clearer parallel with previous methods would also improve the paper: is the proposed approach adding GCN on top of an established pipeline? Why not?\\n\\nMore discussion on code-mixed language, e.g. in section 4.6, would also improve clarity a bit (make the paper more self-contained). While the concept is clear from the context, it would be helpful to describe the level of structure in the mixed language. 
For instance, can dependency trees not be obtained for code-mixed languages? Is there any research in this direction? (or is the concept very new?) Maybe I am just missing the background here, but it seems helpful in order to assess how appropriate the selected heuristic (based on the co-occurrence matrix) is.\", \"relevant_reference\": \"Learning Graphical State Transitions, Johnson, ICLR 2017 also uses graph representations in question answering, though in a somewhat different setting.\", \"typos\": \"\", \"section_4\": \"\\\"a model with following components\\\"\", \"section_5\": \"\\\"the various hyperparameters that we conisdered\\\"\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Good performance but less clear novelty\", \"review\": \"The paper proposes a Graph Convolutional Network-based encoder-decoder model with sequential attention for goal-oriented dialogue systems, with the purpose of exploiting the graph structures in KB and sentences in conversation. The model consists of three encoders for a query, dialogue history, and KB, respectively, and a decoder with a sequential attention mechanism. The proposed model attains state-of-the-art performance on the modified DSTC2 dataset of (Bordes et al., 2017). For the experiments with graphs constructed from the word co-occurrence matrix, code-mixed versions of modified DSTC2 released by (Banerjee et al., 2018) are used.\\n\\nPros and Cons\\n(+) SOTA performance on the DSTC2 dataset.\\n(+) Works without a dependency parser when one is not available\\n(-) Limited novelty\\n(-) Limited evidence for the advantage of GCN itself\\n\\nDetailed comments\\nThe paper incorporates the graph structures in sentences and KB to make richer representations of conversation and achieves state-of-the-art performance on the DSTC2 dataset. The paper is clearly written, and the results seem promising. However, as the paper combines existing mechanisms to design a model for dialog, the novelty seems to be relatively weak.\\nIn particular, I felt that some experimental results are required to verify some of the arguments put forward by the authors. We list two issues below.\\n\\n1. Effects of GCN\\nThe authors show that RNN-GCN-SeA can achieve state-of-the-art performance, but not how much GCN contributes to improving the performance on the dialog task. \\nI think the authors need to compare the results of RNN-GCN-SeA with a model without GCN (i.e. RNN-SeA) in order to show that exploiting the structural information of dependency and contextual graphs does play an important role.\\nThe random graph experiments (Table 3) show the effect of good structure in GCN, but I felt that it is not enough to demonstrate an improvement by GCNs. \\n\\n2. Comparative Experiments\\nI think that some experiments, which are reported in previous papers (including Mem2Seq), would make the authors\\u2019 experimental argument stronger.\\n- Entity F1 score for the modified DSTC2 dataset\\n- Results on bAbI dialog dataset (task1~5 and its OOV variants) and In-Car Assistant dataset\\n\\nMinor issues\\n1. The authors described Mem2Seq as one of the state-of-the-art models in this field, including in the abstract. However, Mem2Seq does not outperform the seq2seq model in all experiments. From what point of view is this model state-of-the-art? \\n2. Recent studies have focused on the copy mechanism in task-oriented dialog systems. 
Could you explain how the copy mechanism could be incorporated into the proposed model? I am also interested in the comparative results between seq2seq + attn + copy (per-resp-acc of 47.3) and its entity F1 measure (Eric and Manning, 2017; Madotto et al. 2018).\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"A dialogue system that is novel in using graph convolutional networks as part of an encoder-decoder dialogue system with attention\", \"review\": \"This is a well-written paper (especially the introduction) with a fairly extensive experimentation section. It's very positive for me that you resort to more than one set of figures of merit.\", \"my_concerns_are\": \"You mention that GCNs have been used for question-answering already. It would be informative to further describe this work and clearly state how you handle things differently, since a Q&A system is quite close to a dialogue one.\\n\\nThere are some parts that could be made clearer. For example, when you mention that you collectively represent all trees as a single graph. How do you do that?\\n\\nThe model has a great number of parameters. It is not clear to me how you arrived at the specific parameter values. \\n\\nIt would be nice to add the complexity of the model and also be more specific about how you choose the parameter values.\", \"my_proposals_are\": \"I think that the paper would greatly benefit if, in addition to the equations, you also presented the model in a graphical way. Additionally, although the paper is very well mathematically defined, it is not so easy to follow from a practical perspective. For example, regarding section 5.3 I would prefer to see the 3 models you present in a graphical way as well.\\n\\nMaybe add the links to the datasets you are using? On a related subject, would your models be transferable across datasets?\\n\\nMinor issues\\nThe PPMI abbreviation is first used and then defined.\\nThere are also some typos, like conisdered (that I suppose was meant to be considered, for example)\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}" ] }
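Much of the record above turns on what a GCN layer actually computes over a dependency graph. As a point of reference only (this is not the authors' released code, and the symmetric degree normalization is an assumed variant of the standard Kipf & Welling-style update), a minimal sketch looks like this:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution step: ReLU(A_norm @ H @ W), where A_norm is the
    adjacency matrix with self-loops, symmetrically normalized by degree."""
    A_hat = A + np.eye(A.shape[0])                  # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))   # D^{-1/2} as a vector
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(0.0, A_norm @ H @ W)          # shared weights across nodes

# A dialogue history becomes one block-diagonal graph: each sentence's
# dependency tree is a block, with no cross-sentence edges (matching the
# authors' description of disconnected per-sentence trees).
A = np.zeros((5, 5))
A[0, 1] = A[1, 0] = 1.0          # edges of a 3-word sentence's tree
A[1, 2] = A[2, 1] = 1.0
A[3, 4] = A[4, 3] = 1.0          # a disconnected 2-word sentence
H = np.random.randn(5, 16)       # initial word embeddings
W = np.random.randn(16, 16)
H_next = gcn_layer(A, H, W)      # contextualized node features, shape (5, 16)
```

In the RNN+GCN variants debated above, `H` would be initialized with RNN hidden states rather than raw word embeddings, which is the "RNNs to initiate the first GCN layer" comparison the reviewers ask about.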
ByfbnsA9Km
Cross-Entropy Loss Leads To Poor Margins
[ "Kamil Nar", "Orhan Ocal", "S. Shankar Sastry", "Kannan Ramchandran" ]
Neural networks could misclassify inputs that are slightly different from their training data, which indicates a small margin between their decision boundaries and the training dataset. In this work, we study the binary classification of linearly separable datasets and show that linear classifiers could also have decision boundaries that lie close to their training dataset if cross-entropy loss is used for training. In particular, we show that if the features of the training dataset lie in a low-dimensional affine subspace and the cross-entropy loss is minimized by using a gradient method, the margin between the training points and the decision boundary could be much smaller than the optimal value. This result is contrary to the conclusions of recent related works such as (Soudry et al., 2018), and we identify the reason for this contradiction. In order to improve the margin, we introduce differential training, which is a training paradigm that uses a loss function defined on pairs of points from each class. We show that the decision boundary of a linear classifier trained with differential training indeed achieves the maximum margin. The results reveal the use of cross-entropy loss as one of the hidden culprits of adversarial examples and introduce a new direction to make neural networks robust against them.
[ "Cross-entropy loss", "Binary classification", "Low-rank features", "Adversarial examples", "Differential training" ]
https://openreview.net/pdf?id=ByfbnsA9Km
https://openreview.net/forum?id=ByfbnsA9Km
ICLR.cc/2019/Conference
2019
{ "note_id": [ "H1xXh4_HxN", "B1gj2lXwkV", "BJxJVvt8JN", "rklW51EUy4", "S1l0f9m8JE", "BkgLGizLJV", "ByxobfCsT7", "r1lrQuRxpX", "S1l15DRl67", "HJgZ7PCga7", "BylxpBRx6Q", "ryx8tGiohX", "rJegWQkshX", "rJeCZjJ52m", "H1gqEmdSim" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review", "comment" ], "note_created": [ 1545073834577, 1544134834808, 1544095527156, 1544073097018, 1544071702024, 1544067853563, 1542345219040, 1541625885191, 1541625734534, 1541625624861, 1541625271787, 1541284477737, 1541235447602, 1541171974293, 1539830578494 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper685/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper685/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper685/AnonReviewer5" ], [ "ICLR.cc/2019/Conference/Paper685/Authors" ], [ "ICLR.cc/2019/Conference/Paper685/AnonReviewer4" ], [ "ICLR.cc/2019/Conference/Paper685/AnonReviewer5" ], [ "ICLR.cc/2019/Conference/Paper685/Authors" ], [ "~Angus_Galloway1" ], [ "ICLR.cc/2019/Conference/Paper685/Authors" ], [ "ICLR.cc/2019/Conference/Paper685/Authors" ], [ "ICLR.cc/2019/Conference/Paper685/Authors" ], [ "ICLR.cc/2019/Conference/Paper685/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper685/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper685/AnonReviewer2" ], [ "~Ignacio_Arroyo-Fernández1" ] ], "structured_content_str": [ "{\"metareview\": \"The paper challenges claims about cross-entropy loss attaining max margin when applied to linear classifier and linearly separable data. This is important in moving forward with the development of better loss functions.\\n\\nThe main criticism of the paper is that the results are incremental and can be easily obtained from previous work. \\n\\nThe authors expressed certain concerns about the reviewing process. In the interest of dissipating any doubts, we collected two additional referee reports. \\n\\nAlthough one referee is positive about the paper, four other referees agree that the paper is not strong enough.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Insufficient novelty\"}", "{\"title\": \"Response\", \"comment\": \"Sorry for my late reply! I've read the response, but I'm not convinced to change the rating.\\n\\nFor 2a) and 2b), I apologize for not making my previous comment on Section 3 very clear. I did not ignore Theorems 3,4 and Remark 3. In my original comment, I tried to use 'Further theoretical results are given explaining the relation between cross-entropy loss and SVM.' to summarize these results. Because it seems that Theorems 3,4 further quantifies the relationship between margins given by cross-entropy and SVM based on Theorem 2. I'm still not convinced that these results are significant enough. Since the authors claimed that these are their main contributions, more explanation on the significance of these results should be added.\\n\\nFor 1a), I'm still not convinced that it is appropriate to claim that the papers the authors mentioned are erroneous. It is very common that papers focusing on theoretical analysis make certain assumptions that do not exactly match what people do in practice. Normalizing the data and neglecting the bias terms can both be considered as such assumptions. 
When these assumptions are not satisfied, it is not surprising that most of the results won't hold. Also, as the authors and other reviewers have pointed out, Theorems 1,2 are already covered in existing works. Therefore, even if the papers the authors mentioned are indeed 'erroneous', it can hardly be considered as a contribution of this paper.\\n\\nFor 1b), the authors argued that even if the data are normalized, the features of neural networks are still not normalized. This is true, but the current results on 'cross-entropy loss can lead to poor margins' are only shown for linear models. Without further results proving that neural networks with cross-entropy loss can give poor margins, it is still not very convincing.\\n\\nBecause of the concerns above, I believe '5: Marginally below acceptance threshold' is an appropriate rating for this paper.\"}", "{\"title\": \"Significance of Theorem 3-4 and Remark 3\", \"comment\": \"The authors feel I ignored the results in Theorem 3,4 and Remark 3. Specifically, as I understand, in these results the authors claim:\\nI)\\tFrom the mathematical perspective, one can find datasets (on near-affine subspaces) where the margin of the solution of cross-entropy minimization can be quite poor.\\nII)\\tFrom a practical perspective, neural networks tend to behave similarly to these examples and therefore have poor margins.\\n\\nI would like to clarify that I feel that \\u201cclaim I\\u201d is again a simple demonstration of point (1) in my previous response (i.e. as I said \\u201cFor me, all the numerical demonstrations and examples of this simple issue did not add much\\u201d). Specifically, since the two optimization problems in (1) are different, it feels clear to me that one can find examples where the solutions are very different, i.e. datasets where, in the max-margin solution, b >> ||w||. It is straightforward to generate such datasets by strongly shifting the classes so a separator coming from the origin would have a poor margin, as done in Theorem 3. This is why I consider Theorem 3+4 as another example of (1).\\n\\nMoreover, I feel \\u201cclaim II\\u201d is not sufficiently supported by evidence. Specifically, the authors demonstrate that the representation in CIFAR10 lies near an affine subspace as in Theorem 3+4, but it is not clear if this B^2 sum_k \\\\Delta_k^2 is indeed sufficiently large to hurt the margin. Remark 3 argues that this term B^2 sum_k \\\\Delta_k^2 should be large in practice, in comparison to 1/gamma^2, but I don't see why this should be true, as both may scale with dimensions. To establish this claim, I think these quantities should have been measured directly in the last layer of the network. The authors could have also directly measured the margin in the last layer and compared it to the max margin. Without these measurements, I do not feel that the authors indeed demonstrated \\\"claim II\\\".\\n\\nLastly, I would like to clarify that in \\u201cissue c\\u201d in my previous response, I mainly wanted to point out to the authors that *some* of their phrasings were confusing (not all of them). For example, the statement \\u201cthe solution obtained by cross-entropy minimization is different from the SVM solution\\u201d is wrong under some common interpretations (as SVM can also be defined for the class of homogeneous linear classifiers). 
Therefore, I feel they should be adjusted (\\u201cSVM\\u201d -> \\u201cSVM for linear predictors with bias\\u201d) to avoid further confusion.\"}", "{\"title\": \"Main results are Theorem 3-4 and Remark 3: They are completely, and probably intentionally, ignored\", \"comment\": \"We repeatedly and clearly stated in our response to Reviewer 2 and Reviewer 3: Our most critical results are Theorem 3, Theorem 4 and Remark 3. Anyone who has read the list of our contributions on page 2 would not miss this. Anyone who has read the discussion section would understand that Theorem 3 is our most critical result -- just like Reviewer 1 did.\\n\\nWe understand from the review that Reviewer 5 was able to see the previous reviews and our responses. Given this fact, Reviewer 5 must have seen in our responses that our most critical results are Theorem 3, Theorem 4 and Remark 3. Therefore, it seems extremely absurd that Reviewer 5 tried to summarize our contributions in two points and not mention any of our most critical results. As a result, we strongly question the objectivity and the fairness of their evaluation.\\n\\nOur paper is the first work that finds a connection between the existence of adversarial examples and the specific choice of training loss function (cross-entropy with soft-max) and the low dimensionality of the features of the training dataset. Anyone who thinks this result is insignificant should reconsider their level of expertise in the field and possibly give themselves confidence 1 or 2 -- not 5.\\n\\nWe were very careful in our choice of words when making statements about what is correct and what is not correct in (Soudry et al., 2018). We stated **their conclusion was incorrect** due to neglecting the bias term; we did not say their proof was incorrect. Nevertheless, we appreciate the great effort Reviewer 5 made in praising the work (Soudry et al., 2018) and trying to remove the taint we could potentially bring to it while writing a review for our paper.\"}", "{\"title\": \"I do not think the proposed approach can be better than the cross-entropy loss in practice.\", \"review\": \"This paper presents a very specialized example to show that gradient descent on the cross-entropy loss WITHOUT REGULARIZATION leads to a poor margin, which is very unrealistic. Moreover, I have the following concerns:\\n\\n1. In the two-point classification example shown in Section 2, I want to see the plot of iteration versus cross-entropy loss during the gradient descent.\\n\\n2. Does it make sense to use cross-entropy loss to quantify loss for a two-class classification problem with one point in each class? Statistically, it seems not reasonable at all.\\n\\n3. In Corollary 1, the authors made a further assumption, x^Ty=1, which is very unnatural.\\n\\n4. In the numerical results section, I want to see some results on some benchmark dataset. The presented numerical results are too weak to support the proposed differential training.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Insufficient novelty and significance. Also, the phrasing of the results is somewhat misleading.\", \"review\": \"Due to the large variance in reviewer scores, I was asked to give this additional review.\", \"background\": \"[Soudry et al. 
2018] showed that the iterates of gradient descent, when optimizing logistic regression on separable data, converge to the L2 max-margin (SVM) solution for homogeneous linear separators (without bias). These results were later extended to other models and optimization methods.\", \"this_paper_has_two_main_results\": \"1)\\tIt clarifies that the results of [Soudry et al. 2018] do not apply to logistic regression when the linear separator has a bias term (\\u201cb\\u201d). This is because the homogeneous max-margin solution in the extended [x,1] space is not the same as the non-homogeneous max-margin solution in the original space: the first has a penalty on the size of the bias term, i.e.\\nmin_{w,b} ||w||^2 + b^2 s.t. y_n (w\\u2019x_n+b) >= 1\\n, while the latter does not: \\nmin_{w,b} ||w||^2 s.t. y_n(w\\u2019x_n+b) >= 1\\n2)\\tIt suggests using differential training to correct this issue.\\n\\n\\nHowever, I do not believe these contributions are enough for a publication in ICLR. First, (2) is simply a combination of two known results, as mentioned by Reviewer 2. Second, though I commend the authors for pointing out (1), I do not feel this by itself warrants a publication, for the following reasons:\\na) It is very simple to explain (1) in only a few lines (as I did above). Therefore, it would be more informative just to write (1) as a comment on the original paper (the ICLR 2018 forum is still open), not as a completely new publication. For me, all the numerical demonstrations and examples of this simple issue did not add much.\\nb)\\tRegularizing the bias term usually does not make a significant difference to the sample complexity (see the end of section 15.1.1 in the textbook \\u201cUnderstanding Machine Learning: From Theory to Algorithms\\u201d by Shai Shalev-Shwartz). Furthermore, the main motivation behind [Soudry et al. 2018] was to explain implicit bias and generalization in deep networks, where such max-margin results (which penalize all the parameters) could be used to derive generalization bounds (e.g., https://arxiv.org/abs/1810.05369).\\nc)\\tLastly, the authors here say that \\u201cthe solution obtained by cross-entropy minimization is different from the SVM solution\\u201d. This (as well as the title and abstract) may mislead the readers to think there is something wrong in the proofs of [Soudry et al. 2018] and later papers, and that logistic regression does not converge to the max-margin solution for homogeneous linear separators. However, the max-margin solution for homogeneous linear separators is also called the \\u201cmax margin\\u201d or SVM solution (just for a different family). For example, see the previous paper on the topic [\\u201cMargin Maximizing Loss Functions\\u201d, Rosset et al. 2004] or section 15.1.1 in the textbook \\u201cUnderstanding Machine Learning: From Theory to Algorithms\\u201d by Shai Shalev-Shwartz. As I see it, the only issue in [Soudry et al. 2018] is the sentence \\u201cA bias term could be added in the usual way, extending x_n by an additional \\u20191\\u2019 component.\\\" which is confusing since it cannot be applied directly to the SVM solution. 
The authors should aim to pinpoint this issue, and clarify their phrasing to avoid such confusion.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Paper has been updated\", \"comment\": \"1) We changed the titles of Section 2 and Section 3 to reflect their importance.\\n\\n2) We added citations to (Ishibashi et al., 2008) and one of its references, (Keerthi et al., 2000), in the first paragraph of Section 4.\"}", "{\"comment\": \"Could the authors compare their new loss function to *SGD* with cross-entropy loss and (L2) weight decay with large regularization penalties?\", \"title\": \"weight decay baseline\"}", "{\"title\": \"Response to Reviewer 2: Main result is not Theorem 5\", \"comment\": \"Dear Reviewer 2,\\n\\nThank you for your review, and thanks for pointing out this reference. We were not aware of this past work, and it certainly deserves a reference.\\n\\nNevertheless, our main technical result is Theorem 3 and Remark 3 -- not Theorem 5. As the title of our submission reflects, and as the list of our contributions on page 2 describes, differential training is not the heart of our work. As we stated in our response to Reviewer 1, differential training was introduced in this paper only to open a door for further research and not to finish this paper with a negative result. \\n\\nPlease note that Theorem 3 and Theorem 4, along with the related remarks, are original. We would appreciate any suggestions to further highlight that Section 3 is the critical part of our work.\"}", "{\"title\": \"Response to Reviewer 3\", \"comment\": \"Dear Reviewer 3,\\n\\nThanks for reviewing our paper.\\n\\n1a) The goal of our submission is not to make further positive claims about the use of cross-entropy minimization; it is the opposite. As Reviewer 1 also stated, we wanted to challenge the faith of the community in the use of cross-entropy loss, and we wanted to show that minimizing this loss function on low-dimensional datasets such as images can lead to extremely poor margins. For this reason, the title of our submission is very accurate. We updated Figure 1 to highlight the drastic difference between the SVM solution and the solution obtained by cross-entropy minimization.\\n\\n1b) As we clearly stated in Remark 1, normalizing a dataset in the input space does not correspond to normalizing the features of the points if the feature mapping is nonlinear. In particular, we will not have normalized features if we use neural networks. If we want to get the right intuition about the effect of cross-entropy minimization on neural network training, we cannot simply assume the features of the training points will be normalized. This is why we strictly avoid the assumption of a normalized dataset, as explained in Remark 1.\\n\\n2a) It is unfortunate, and somewhat curious, that our results in Section 3 (Theorem 3 and the remarks following it) were completely neglected. Section 3 clarifies why the conclusions of the works [1,2,3,4,5] are erroneous and shows that the reality is drastically different from their conclusions. Showing that there was a critical error in a line of previous works, which leads to a drastic change in the conclusion, is not an \\\"incremental contribution\\\". 
In fact, given that [1] appeared in ICLR last year, it is essential that the ICLR community be given the correction this year.\\n\\n2b) Theorem 3 and Remark 3 are the most critical results of our paper. Please make sure you have understood them. The last paragraph of Section 5 verifies that the assumption of Theorem 3, the low-dimensionality of the features, indeed arises in practice. In other words, the assumptions of Theorem 3 are not an edge case, and the conclusion of Theorem 3 has critical implications for practice.\\n\\n3) Our paper starts with the question \\\"Is cross-entropy loss really the right cost function to use with the gradient descent algorithm?\\\". We use a linear classifier and a linearly separable dataset to answer this question in a simple setting. By doing so, our work gives the intuition that the cross-entropy loss function bears responsibility for the poor margin of the decision boundaries. We introduce differential training as a method to improve the margin **while still using the gradient descent algorithm**. As we stated in the Discussion section, this allows the feature mapping to remain trainable while ensuring a large margin, and therefore, it provides an initial attempt to combine the benefits of neural networks and the SVM. And please note that when [1,2,3,4,5] claimed that cross-entropy loss finds the same solution as the SVM, they did not suggest that the ML community stop using cross-entropy minimization and replace it with SVM.\\n\\n[1] Daniel Soudry, Elad Hoffer, and Nathan Srebro. The implicit bias of gradient descent on separable data. In International Conference on Learning Representations, 2018.\\n[2] D. Soudry, E. Hoffer, M. Shpigel Nacson, S. Gunasekar, and N. Srebro. The Implicit Bias of Gradient Descent on Separable Data. ArXiv e-prints, 2018.\\n[3] M. Shpigel Nacson, J. Lee, S. Gunasekar, P. H. P. Savarese, N. Srebro, and D. Soudry. Convergence of Gradient Descent on Separable Data. ArXiv e-prints, 2018a.\\n[4] M. Shpigel Nacson, N. Srebro, and D. Soudry. Stochastic Gradient Descent on Separable Data: Exact Convergence with a Fixed Learning Rate. ArXiv e-prints, 2018b.\\n[5] Ziwei Ji and Matus Telgarsky. Risk and parameter convergence of logistic regression. CoRR, abs/1803.07300, 2018.\"}", "{\"title\": \"Response to Reviewer 1\", \"comment\": \"Dear Reviewer 1,\\n\\nThank you for reading our submission closely, and thanks for appreciating our results.\\n\\nAs you have also noticed, Section 3 of our paper, and Theorem 3 in particular, is the punch line of our work. The algorithm, differential training, was introduced in this paper only to open a door for further research and not to finish this paper with a negative result. That is, we wanted to show that there could be a solution for the problem we have identified. We agree that further study of differential training for neural networks is necessary and important, and that is our ongoing work.\"}", "{\"title\": \"A set of nice results that is insightful and clarifies some controversy\", \"review\": \"The paper challenges recent claims about cross-entropy loss attaining max margin when applied to a linear classifier and linearly separable data. Along the way, it presents a couple of nice results that I find quite interesting and I believe they provide useful insights. 
Finally it presents a simple modification to the cross-entropy loss, which the authors refer to as differential training, that alleviates the problem for the case of a linear model and linearly separable data.\", \"cons\": \"I find the paper useful and interesting mainly because of its insightful results rather than the final algorithm. The algorithm is evaluated in a very limited setting (linear model, synthetic data, binary classification); it is not clear if similar benefits would carry over to nonlinear models such as deep networks. In fact, I strongly encourage the authors to do a generalization comparison by comparing the **test accuracy** obtained by their modified cross-entropy against: 1. Vanilla cross-entropy as well as 2. A deep model large-margin loss function (e.g. as in \\\"Large Margin Deep Networks for Classification\\\" by Elsayed). Of course on a realistic architecture and non-synthetic datasets (e.g. CIFAR-10).\", \"pros\": \"Putting the algorithm aside, I find the theorems interesting. In particular, Theorem 3 shows that some earlier claims about cross-entropy's ability to attain a large margin (in the linearly separable case) are misleading (due to neglecting a bias term). This is important as it changes the faith of the community in cross-entropy and more importantly creates hope for constructing new loss functions with improved margin.\\nI also find the connection between the dimension of the subspace that contains the points and the quality of the margin obtained by cross-entropy insightful.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"interesting work, but slightly incremental\", \"review\": \"This paper studies the cross-entropy loss for binary classification problems. The authors show that if the norms of samples in two linearly separable classes are different, gradient descent based methods minimizing cross-entropy loss may give a linear classifier that has a small margin.\\n\\nPros\\n\\n1. The paper is clearly written and very easy to follow. \\n\\n2. The authors show that for two-point classification problems, if the norms of the points are very different then gradient descent will give a very small margin.\\n\\n3. Further theoretical results are given explaining the relation between cross-entropy loss and SVM.\\n\\n4. A new loss function called differential training is proposed, which is guaranteed to give the SVM solution.\\n\\nCons\\n\\n1. My biggest concern is that the paper, especially the title, may be slightly misleading in my opinion. Although the authors keep claiming that cross-entropy loss can lead to poor margins in certain circumstances (which I agree with), in fact Theorem 1 and Theorem 2 have already clearly shown the connection between the cross-entropy solution and the maximum margin direction. For example, Theorem 1 literally proves that when the two points have the same norm (normalized data?), cross-entropy loss leads to maximum margin. Theorem 2 also clearly states that cross-entropy loss and SVM are closely related. Based on these two theorems, perhaps \\u2018cross-entropy loss is closely related to maximum margin\\u2019 is a more convincing statement.\\n\\n2. The theoretical results given in this paper are slightly incremental. As the authors mentioned, Theorem 1 and Theorem 2 are essentially already proved in previous works. The other results are not very significant either.\\n\\n3. 
The authors do not clearly state the advantages of the differential training method compared to SVM. It seems that one can just use SVM if the goal is a maximum-margin classifier.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"The technical results can be obtained by a simple combination of previous work.\", \"review\": \"Summary:\\nThis paper investigates the properties of minimizing the cross-entropy of linear functions over separable data (which looks like the logistic loss). The authors show a simple example where the minimizer of the cross-entropy loss leads to the maximum-margin hyperplane when the bias term is regarded as an extra dimension, which is different from the standard max-margin solution of SVMs with the bias not regarded as an extra dimension. The authors then propose a method to obtain the latter solution by minimizing the cross-entropy loss.\", \"comments\": \"\", \"there_is_a_previously_known_result_quite_related_to_this_paper\": \"Ishibashi, Hatano and Takeda: Online Learning of Approximate Maximum p-Norm Margin Classifiers with Bias, COLT2008. \\n\\nTheorem 2 of Ishibashi et al. shows that the hard-margin optimization with a linear classifier with bias is equivalent to that without bias over pairs of positive and negative instances. \\n\\nCombined with Theorem 3 of (Soudry et al., 2018), I am afraid that the main result Theorem 5 can be readily derived. \\n\\nFor this reason, I am afraid that the main technical result is quite weak.\", \"after_rebuttal\": \"I read the authors' comments. I now better understand the technical contribution of the paper and have raised my score. But I also agree with Reviewer 3.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
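The crux of the disagreement in the record above is the pair of optimization problems in Reviewer 5's report. The block below typesets that same pair in LaTeX; it is a direct transcription of the ASCII formulas in the review (with w' read as w transpose), nothing new added:

```latex
% Max margin over the extended inputs [x_n, 1]: the bias b is penalized.
\min_{w,\,b} \; \|w\|^2 + b^2
\quad \text{s.t.} \quad y_n \,(w^\top x_n + b) \ge 1 \;\; \forall n

% Standard SVM with an unregularized bias: only \|w\|^2 is penalized.
\min_{w,\,b} \; \|w\|^2
\quad \text{s.t.} \quad y_n \,(w^\top x_n + b) \ge 1 \;\; \forall n
```

The first problem is what the implicit-bias convergence results for homogeneous separators describe after a constant '1' feature is appended; the second is the SVM-with-bias problem whose margin the paper measures. As the reviews note, the two solutions can differ substantially on datasets where the optimal bias b is large relative to ||w||.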
rkgZ3oR9FX
Learning to Refer to 3D Objects with Natural Language
[ "Panos Achlioptas", "Judy E. Fan", "Robert X.D. Hawkins", "Noah D. Goodman", "Leo Guibas" ]
Human world knowledge is both structured and flexible. When people see an object, they represent it not as a pixel array but as a meaningful arrangement of semantic parts. Moreover, when people refer to an object, they provide descriptions that are not merely true but also relevant in the current context. Here, we combine these two observations in order to learn fine-grained correspondences between language and contextually relevant geometric properties of 3D objects. To do this, we employed an interactive communication task with human participants to construct a large dataset containing natural utterances referring to 3D objects from ShapeNet in a wide variety of contexts. Using this dataset, we developed neural listener and speaker models with strong capacity for generalization. By performing targeted lesions of visual and linguistic input, we discovered that the neural listener depends heavily on part-related words and associates these words correctly with the corresponding geometric properties of objects, suggesting that it has learned task-relevant structure linking the two input modalities. We further show that a neural speaker that is `listener-aware' --- that plans its utterances according to how an imagined listener would interpret its words in context --- produces more discriminative referring expressions than a `listener-unaware' speaker, as measured by human performance in identifying the correct object.
[ "Referential Language", "3D Objects", "Part-Awareness", "Neural Speakers", "Neural Listeners" ]
https://openreview.net/pdf?id=rkgZ3oR9FX
https://openreview.net/forum?id=rkgZ3oR9FX
ICLR.cc/2019/Conference
2019
{ "note_id": [ "HketMd9EeE", "rkgk-fZ63X", "BygDtFc9nQ", "Ske3XSpdhm" ], "note_type": [ "meta_review", "official_review", "official_review", "official_review" ], "note_created": [ 1545017360789, 1541374454792, 1541216639251, 1541096740155 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper684/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper684/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper684/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper684/AnonReviewer1" ] ], "structured_content_str": [ "{\"metareview\": \"Paper develops a dataset and model for learning to refer to 3D objects. Reviewers raised concerns about lack of novelty. Fundamentally, it seems unclear what (if any) the take-away for an ML-audience would be after reading this paper. We encourage the authors to incorporate reviewer feedback and submit a stronger manuscript at a future (perhaps a more applied) venue.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"meta-review\"}", "{\"title\": \"Well executed paper, however weak contributions\", \"review\": \"Update: I have read author's response (sorry for being super late). The response better indicates and brings out the contributions made in the paper, and in my opinion is a strong application paper. But as before, and in agreement with R1 I still do not see technical novelty in the paper. For an application driven conference, I think this paper will make a great contribution and will have a large impact. I am slightly unsure as to what the impact will be at ICLR. I leave this judgement call to the AC. I won't fight on the paper in either direction.\\n\\nThe paper studies the problem of how to refer to 3D objects with natural language. It collects a dataset for the same, by setting up a reference game between two people. It then trains speaker and listener models that learn how to describe a shape, and how to identify shapes given a discriminative referring expression. The paper seems to follows state-of-the-art in the design of these models, and investigates different choices for encoding the image / 3D shape, use if attention in the listener, and context and listener aware models.\", \"strengths\": \"1. Overall, I think this is a very well executed paper. It collects a dataset for studying the problem of interest, trains state-of-the-art models for the tasks, and conducts interesting ablations and tests insightful hypothesis.\\n\\nWeaknesses\\n1. I am not sure what is the technical contribution being made in the paper? Contrastive referential expressions have been used to collect datasets for referring to objects in images. Use of listeners and speakers have been used in NLP (Andreas et al.) as well as in vision and language (Fried et al.). Thus, while I like the application, I am not sure if there is any novel contributions being made in the paper.\\n\\nOverall, this is a well executed paper, however I am not sure about the novelty of contributions made in the paper.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"an interesting and creative paper\", \"review\": \"The paper investigates how chairs are being described \\\"in the context of other similar or not-so-similar chairs\\\", by humans and neural networks. Humans perceive an object's structure, and use it to describe the differences to other objects \\\"in context\\\". 
The authors collected a corresponding \"chairs in context\" corpus, and built models that can describe the \"target\" chair, that can be used to retrieve the described object, and that can create more discriminative language if given information about the listener.\\n\\nThe paper is well written; in particular, the appendix is very informative. The work seems novel in combination with the dataset, and it is an interesting and well executed study with interesting analysis that is very relevant to both situated natural language understanding and language generation. The \"3D\" aspect is a bit weak, given that all chairs seem to have essentially been pictured in very similar positions.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Paper review\", \"review\": \"#update: I've read the authors' comments but unfortunately my main concerns about the contributions and novelty of this work were not answered. As such, I cannot increase my score.\\n\\n------------------ \\n\\nThe authors provide a study on learning to refer to 3D objects. The authors collect a dataset of referential expressions and train several models by experimenting with a number of architectural choices.\\n\\nThis is an interesting study reporting results on the effect that several architectural choices have on generating referential expressions. Overall, while I appreciate all the experiments and results, I don't really feel I've learned something from this paper. \\n\\nFirst and foremost, the paper, from the title, already starts to build up expectations about the 3D nature of the study; however, this is pretty much ignored in the rest of the paper. I would expect the paper to provide some results and insights regarding the 3D nature of the dataset and how this affects referential expressions; however, there is no experiment that has used this 3D-ness in any way. Even the representations of the objects are stripped down to essentially 2D (a single view of a 3D object used to derive VGG features is as 3D as any image dataset used for similar studies, right?).\", \"my_major_question_is_then\": \"why should all this research take place in a 3D dataset? Is it to validate that research like this is at all possible with 3D objects?\\n\\nMoreover, all interesting aspects of referential expressions are stripped out since the authors experiment only with this geometric visual property (which again has nothing to do with 3D-ness; you could totally get that out of images). An interesting study would be to have all objects in the same image and have referential expressions that involve spatial relations, something in which the depth or a different view of the object could play a role.\\n\\nGiven the fact that there are no technical innovations, I can't vouch for accepting this paper, since there has been quite a lot of research on generating referential expressions on image datasets (e.g., Kazemzadeh et al., 2014 and related papers). 
However, learning to refer to 3D objects is a very interesting topic, and of great importance given the growing interest in training agents in 3D virtual environments, and I would really encourage the authors to embrace the 3D-ness of objects and design studies that highlight the challenges and opportunities that the third dimension brings.\\n\\n\\nKazemzadeh et al.: ReferIt Game: Referring to Objects in Photographs of Natural Scenes\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
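The "listener-aware" speaker described in this record plans utterances by checking them against an imagined listener. A common way to realize that idea is to rerank candidate utterances by combining speaker fluency with imagined-listener success; the sketch below is a generic illustration of this pattern, not the paper's implementation, and `speaker_logp`, `listener_logp`, and `beta` are hypothetical names:

```python
import numpy as np

def pick_utterance(candidates, target, context, speaker_logp, listener_logp, beta=0.5):
    """Rerank candidate utterances: trade off how natural each utterance is
    under the base speaker against how reliably an imagined listener would
    pick the target object among the context distractors."""
    scores = [
        beta * speaker_logp(u, target)                       # log p_S(u | target)
        + (1.0 - beta) * listener_logp(target, u, context)   # log p_L(target | u, context)
        for u in candidates
    ]
    return candidates[int(np.argmax(scores))]
```

With `beta` near 1 the speaker ignores the listener (the "listener-unaware" case); lowering `beta` pushes it toward utterances that are discriminative in context, which is the behavior the human identification study measures.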
HyEl3o05Fm
Stochastic Adversarial Video Prediction
[ "Alex X. Lee", "Richard Zhang", "Frederik Ebert", "Pieter Abbeel", "Chelsea Finn", "Sergey Levine" ]
Being able to predict what may happen in the future requires an in-depth understanding of the physical and causal rules that govern the world. A model that is able to do so has a number of appealing applications, from robotic planning to representation learning. However, learning to predict raw future observations, such as frames in a video, is exceedingly challenging—the ambiguous nature of the problem can cause a naively designed model to average together possible futures into a single, blurry prediction. Recently, this has been addressed by two distinct approaches: (a) latent variational variable models that explicitly model underlying stochasticity and (b) adversarially-trained models that aim to produce naturalistic images. However, a standard latent variable model can struggle to produce realistic results, and a standard adversarially-trained model underutilizes latent variables and fails to produce diverse predictions. We show that these distinct methods are in fact complementary. Combining the two produces predictions that look more realistic to human raters and better cover the range of possible futures. Our method outperforms prior works in these aspects.
[ "video prediction", "GANs", "variational autoencoder" ]
https://openreview.net/pdf?id=HyEl3o05Fm
https://openreview.net/forum?id=HyEl3o05Fm
ICLR.cc/2019/Conference
2019
{ "note_id": [ "Syg-Y_e4eE", "BygkjldqC7", "SkgLa7Hc0X", "ryghv7rqAX", "Bygl7mS5CX", "Bkx8_JcuCX", "H1gm0W0X0X", "rJexhWRXCm", "rygvTCLQRX", "BkgZKhUX07", "rkxwCK8chQ", "Hyen_JS9nX", "r1gzmeTDnm" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544976505448, 1543303319249, 1543291838333, 1543291747868, 1543291672047, 1543180141630, 1542869451173, 1542869415643, 1542839998831, 1542839416573, 1541200334818, 1541193587844, 1541029914197 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper682/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper682/Authors" ], [ "ICLR.cc/2019/Conference/Paper682/Authors" ], [ "ICLR.cc/2019/Conference/Paper682/Authors" ], [ "ICLR.cc/2019/Conference/Paper682/Authors" ], [ "ICLR.cc/2019/Conference/Paper682/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper682/Authors" ], [ "ICLR.cc/2019/Conference/Paper682/Authors" ], [ "ICLR.cc/2019/Conference/Paper682/Authors" ], [ "ICLR.cc/2019/Conference/Paper682/Authors" ], [ "ICLR.cc/2019/Conference/Paper682/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper682/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper682/AnonReviewer1" ] ], "structured_content_str": [ "{\"metareview\": \"This paper shows that combining GAN and VAE for video prediction allows to trade off diversity and realism. The paper is well-written and the experimentation is careful, as noted by reviewers. However, reviewers agree that this combination is of limited novelty (having been used for images before). Reviewers also note that the empirical performance is not very much stronger than baselines. Overall, the novelty is too slight and the empirical results are not strong enough compared to baselines to justify acceptance based solely on empirical results.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Significance of applying a GAN - VAE combination to video is too limited\"}", "{\"title\": \"Revised plot (Fig. 14) for the BAIR action-free dataset shows that our method achieves similar LPIPS accuracy and better diversity than the VAE models, including prior SVG method.\", \"comment\": \"We have updated the revised plot in Figure 14 to include updated metrics from all of our methods evaluated on the BAIR action-free dataset. Aside for the first two predicted frames, our SAVP model achieves similar LPIPS distances as the VAE models, both our VAE ablation and the SVG model from Denton & Fergus (2018). In addition, not only our SAVP method substantially improve sample diversity compared to the GAN-only model, but it also produces more diverse samples than both of the VAE models. We conducted a coarse search over hyperparameters on the validation set, this time varying the weightings of the KL divergence and the GAN loss with values 1, 0.1, and 0.01. We found that a lower weighting of 0.1 (instead of 1) for the GAN loss led to better accuracy. Unfortunately, we did not have time to rerun the user study to evaluate on realism, but we plan to do it in a future revision. Qualitatively, the predictions look at least as realistic as before.\"}", "{\"title\": \"Revised plot (Fig. 
15) for the KTH dataset shows that our method substantially outperforms the prior SVG method in terms of accuracy.\", \"comment\": \"We have included a revised plot in Figure 15 at the end of the Appendix (which will be incorporated into Figure 7) that fixes the KTH dataset preprocessing. Our VAE-only model now achieves substantially higher accuracy and diversity than SVG (Denton & Fergus, 2018). As before, the GAN-only model mode-collapses and generates samples that lack diversity. Our SAVP method, which incorporates the variational loss, improves both sample diversity and similarity compared to the GAN-only model. Our SAVP model also achieves higher accuracy than SVG. The experiments from our original submission (1) cropped the videos into a square before resizing, and thus discarded information from the sides of the video, and (2) did not filter out the empty frames, and thus our models were trained on uninformative frames. We fixed those issues to match the preprocessing used by Denton & Fergus (2018). In addition, we have also included experiments where we condition on only 2 frames instead of 10 frames, in order to test on a setting with more stochasticity.\"}", "{\"title\": \"Revised plot for the KTH dataset shows that our method substantially outperforms the prior SVG method in terms of accuracy.\", \"comment\": \"We have included a revised plot in Figure 15 at the end of the Appendix (which will be incorporated into Figure 7) that fixes the KTH dataset preprocessing. Our VAE-only model now achieves substantially higher accuracy and diversity than SVG (Denton & Fergus, 2018). As before, the GAN-only model mode-collapses and generates samples that lack diversity. Our SAVP method, which incorporates the variational loss, improves both sample diversity and similarity compared to the GAN-only model. Our SAVP model also achieves higher accuracy than SVG. The experiments from our original submission (1) cropped the videos into a square before resizing, and thus discarded information from the sides of the video, and (2) did not filter out the empty frames, and thus our models were trained on uninformative frames. We fixed those issues to match the preprocessing used by Denton & Fergus (2018). 
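For reference, the LPIPS distances reported in these revised plots can be computed with Zhang et al. (2018)'s released metric; the sketch below assumes the current pip `lpips` packaging of that code, and the tensors are placeholders:

```python
import torch
import lpips  # pip install lpips (Zhang et al., 2018)

loss_fn = lpips.LPIPS(net='alex')  # AlexNet features, linearly calibrated

# Images as (N, 3, H, W) tensors scaled to [-1, 1].
img0 = torch.rand(1, 3, 64, 64) * 2 - 1
img1 = torch.rand(1, 3, 64, 64) * 2 - 1
dist = loss_fn(img0, img1)  # lower = perceptually more similar
print(dist.item())
```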
In addition, we have also included experiments where we condition on only 2 frames instead of 10 frames, in order to test on a setting with more stochasticity.\\n\\nWe thank the reviewer for clarifying the challenges of the Moving MNIST dataset. We are currently running experiments on that dataset, but unfortunately they won't be done by the rebuttal deadline. We will update the website once we have those results ready.\\n\\nRegarding the second comment, we expect the GAN and the SAVP models to produce qualitatively similar images. We uploaded the predictions of the test set to the supplementary anonymous website [3]. In our predictions, we have noticed implausible object motions in the form of object deformations (e.g. samples 4, 6) or sometimes transformations into new objects (e.g. samples 16, 35). A pure GAN and our SAVP method produce roughly equally plausible images, as evidenced by the realism results in Figure 5. The main gain of the variational loss with respect to a pure GAN is in the diversity of the samples.\\n\\nWhen these models are used in downstream tasks, it depends on the exact nature of the downstream task whether blurry but physically sound predictions are preferred over sharper predictions that occasionally produce imagined interactions. For example, a visual servoing controller might benefit from smoother but blurrier predictions, whereas a controller that uses a goal-image classifier might benefit from predictions that are in the image manifold. Our paper (a) contributes to the Pareto frontier in this space and (b) characterizes this tradeoff for future reference.\\n\\n[3] https://video-prediction.github.io/video_prediction/index_files/tables/bair_action_free_all/index.html\"}", "{\"title\": \"Requests not fully addressed\", \"comment\": \"------------------------------------------------\\n* The purpose of adding adversarial losses to a pure VAE is to improve on blurry predictions where the latent variables alone cannot capture the uncertainty of the data. However, that is typically not the case for synthetic datasets. In early experiments, we trained our pure VAE model on the stochastic shape movement dataset from Babaeizadeh et al. (2018), and our pure VAE was able to model the dataset without any blur and with perfect separation of the possible futures.\\n\\nWhile it's true that both are synthetic datasets, there are two significant differences between Stochastic Moving MNIST and the stochastic shape movement dataset from Babaeizadeh et al. (2018) - in Moving MNIST the objects have a greater variety of shapes as they are digits and not simple polygons, and in Moving MNIST the crossings between the digits are particularly hard to model because of the uncertainty in the resulting shape. Most VAE models including SVG struggle not to produce blurry frames when digits cross. Given this, I still believe that, if indeed the proposed SAVP model removes the blurriness of VAE predictions while still producing plausible interactions, showing the performance of SAVP on Stochastic Moving MNIST would be a useful experiment that would make it easier to evaluate the model - a middle ground between overly simple toy tasks and more complex datasets.\\n\\n------------------------------------------------\\n* We agree that plausibility is indeed important, and that's what our human subject studies try to capture. Since we provide predictions of the whole sequence to the human evaluator, we are not only evaluating for image realism but also for plausibility of the dynamics. 
Unlike the VAE models that implausibly erase the small objects that are being pushed in the BAIR dataset, our SAVP model moves those objects in a more plausible way.\\n\\nMy experience is that video GAN models for the BAIR Pushing Dataset don't produce blurry results but instead they imagine new objects/shapes when there are object interactions. Usually these predictions look 'more natural', especially for low resolution predictions (64x64 pixels), but upon closer inspection it is easy to spot new objects, implausible motions, etc. My original comment was meant to ask whether this still happens with SAVP, or whether the combination of VAE + GAN solves this issue. In my opinion this is an important question - imagine we want to use a video prediction model for planning. Then, it is unclear to me whether it would be better to use blurry but physically sound predictions (VAE) compared to sharper predictions with imagined objects/interactions (GAN). While I understand that the user study was meant to answer this question, I believe regular users would prefer the sharper predictions even when some of the interactions are not physically possible. My original comment was meant to ask the authors whether SAVP suffers from these imagined objects and interactions.\"}", "{\"title\": \"References for the previous post\", \"comment\": \"[1] Yochai Blau and Tomer Michaeli. The perception-distortion tradeoff. In Conference on Vision and Pattern Recognition (CVPR), 2018. https://arxiv.org/abs/1711.06077\\n\\n[2] Yochai Blau, Roey Mechrez, Radu Timofte, Tomer Michaeli, and Lihi Zelnik-Manor. 2018 PIRM Challenge on Perceptual Image Super-resolution. In Perceptual Image Restoration and Manipulation (PIRM) workshop at ECCV 2018. https://arxiv.org/abs/1809.07517\"}", "{\"title\": \"We address the questions and clarify a few details, which are also reflected in the updated draft\", \"comment\": \"We thank reviewer 1 for the detailed feedback. In this response, we clarify the accuracy-realism trade-off, revise the accuracy metrics, indicate reruns and new experiments, and address the individual questions.\\n\\nWe updated Section 4.4 to indicate that it is to be expected that, although our SAVP model improves on diversity and realism, it also performs worse in accuracy compared to pure VAE models (both our own ablation and SVG). A recent result [1] proves that there is a fundamental tradeoff between accuracy and realism, for all problems with inherent ambiguity. In fact, a recent challenge held at ECCV 2018 in such a problem [2] evaluates all algorithms on both of these axes, as neither adequately captures performance.\\n\\nAlthough the SVG generator is simpler than ours, ours is just a simple variation of Ebert et al. (2017). Since proposing a strong generator architecture is not the goal of this paper, any video generator (including the one from Denton & Fergus (2018)) could be used with our losses. We added this clarification to Section 3.4. Instead, we provide a systematic analysis of the effect of the loss function on this task (which could be applied to any generator). It's also worth noting that with a simpler feed-forward posterior and a unit Gaussian prior, our VAE ablation and SVG achieve similar performance on various metrics. 
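As a concrete reading of the feed-forward posterior mentioned here, a minimal sketch of a q(z_t | x_t, x_{t+1}) encoder with a per-step reparameterised sample; layer sizes and names are illustrative assumptions, not the authors' network:

```python
import torch
import torch.nn as nn

class FramePairPosterior(nn.Module):
    """q(z_t | x_t, x_{t+1}): encode a pair of adjacent frames into a
    diagonal Gaussian over a temporally local latent variable."""
    def __init__(self, frame_dim=3 * 64 * 64, z_dim=8, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * frame_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * z_dim),  # outputs [mu, logvar]
        )

    def forward(self, x_t, x_next):
        h = self.net(torch.cat([x_t.flatten(1), x_next.flatten(1)], dim=1))
        mu, logvar = h.chunk(2, dim=1)
        # A fresh z_t is drawn at every time step, with an
        # i.i.d. N(0, I) prior used for sampling at test time.
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        return z, mu, logvar

q = FramePairPosterior()
z, mu, logvar = q(torch.rand(4, 3, 64, 64), torch.rand(4, 3, 64, 64))
```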
We added Section 3.5 to point out the differences between the VAE component of our model and prior work.\\n\\nWe have included a revised plot in Figure 14 (note that this temporary plot will be incorporated into Figure 6), where we use the official implementation of SSIM and replace the VGG metric with the LPIPS metric (Zhang et al., 2018). LPIPS linearly calibrates AlexNet feature space to better match human perceptual similarity judgements. Aside from the first two predicted frames, our VAE ablation and the SVG model both achieve similar SSIM and LPIPS.\\n\\nAfter examining the KTH results further, we realized that our results are likely weaker than they should have been, because we did not use the same preprocessing as prior work. The experiments from our original submission cropped the videos into a square before resizing, and thus discarded information from the sides of the video. We are currently rerunning the KTH experiments and we plan to update the results in the paper. We also didn't choose particular hyperparameters to ensure diversity for our models, and we expect some improvement in diversity in the new set of experiments.\\n\\nAlthough the combination of VAEs and GANs has been explored recently for conditional image generation (Zhang et al. 2018), the video prediction task is substantially different, with unique challenges, due to spatiotemporal relationships and inherent compounding uncertainty of the future.\\n\\nFurthermore, while the individual components have indeed been known for video prediction, their combination is novel and not present in prior work, and we demonstrate that this produces state-of-the-art results in terms of diversity and realism. In addition, this work provides a detailed comparison of the effect of the losses on the various metrics. Furthermore, we are currently running experiments for various weightings of the KL loss and the adversarial loss, and we plan to include additional results that illustrate the trade-offs based on these hyperparameters.\\n\\nAlthough MoCoGAN performs well for videos with a single frame-centered actor, it struggles with multiple simultaneously moving entities. The authors of MoCoGAN also mentioned in personal correspondence that the conditional version (i.e. video prediction) was significantly harder to train. We noticed the same in earlier iterations of our model. In our case, we found that the model would degenerate to static videos or videos with a cyclic flickering artifact, issues that do not arise in conditional image generation. We added details to Section 3.4 describing the importance of a few components, such as spectral normalization and not conditioning the discriminator on the ground-truth context frames.\\n\\nThe purpose of adding adversarial losses to a pure VAE is to improve on blurry predictions where the latent variables alone cannot capture the uncertainty of the data. However, that is typically not the case for synthetic datasets. In early experiments, we trained our pure VAE model on the stochastic shape movement dataset from Babaeizadeh et al. (2018), and our pure VAE was able to model the dataset without any blur and with perfect separation of the possible futures.\\n\\nWe agree that plausibility is indeed important, and that's what our human subject studies try to capture. Since we provide predictions of the whole sequence to the human evaluator, we are not only evaluating for image realism but also for plausibility of the dynamics. 
Unlike the VAE models that implausibly erase the small objects that are being pushed in the BAIR dataset, our SAVP model moves those objects in a more plausible way.\"}", "{\"title\": \"We address the questions and add clarifications, which are also reflected in the updated draft\", \"comment\": \"We thank reviewer 2 for the detailed feedback. We are glad that the reviewer found the VAE-GAN model to be a natural extension for the problem and that our work provides a good baseline for future work. We address the individual questions below.\\n\\nWe changed Section 3.1 to explain that the posterior dependence on pairs of adjacent frames is to have temporally local latent variables that capture the ambiguity for only that transition, a sensible choice when using i.i.d. Gaussian priors. Another choice is to use temporally correlated latent variables, which would require a stronger prior (e.g. as in Denton & Fergus (2018)). For simplicity, we opted for the former.\\n\\nThe blurriness in a VAE can indeed be attributable to a weak inference model. Note that our VAE variant and both SVG variants are able to predict sharp robot arms in the BAIR dataset, but often blur out the small objects being pushed. We tried recurrent posteriors and learned priors with our models, and the results were similar. We are now running additional experiments with a deeper encoder and with more filters. Although in principle a strong inference model could produce sharper images, an alternative approach is to use better losses, which is the approach we chose in this work.\\n\\nIt is an interesting suggestion to experiment with the effect of the hyperparameters on the trade-off between realism and diversity. We are currently running experiments for various weightings of the KL loss and the adversarial loss, and we plan to include results that illustrate the trade-offs based on these hyperparameters. We also plan to include results on the trade-offs between accuracy and realism. In fact, a recent result [1] proves that this is a fundamental trade-off for all problems with inherent ambiguity.\\n\\nThe statement that \\u201cGANs prioritize matching joint distributions of pixels over per-pixel reconstruction\\u201d is a criticism of per-pixel losses, and not of VAEs in general. We clarified in the introduction that VAEs can indeed model joint distributions of pixels.\\n\\n[1] Yochai Blau and Tomer Michaeli. The perception-distortion tradeoff. In Conference on Vision and Pattern Recognition (CVPR), 2018. https://arxiv.org/abs/1711.06077\"}", "{\"title\": \"We address the questions and add clarifications and details to the paper\", \"comment\": \"We thank reviewer 3 for the detailed feedback. We are glad that the reviewer found the extensive evaluation appropriate, and that our model behaves well for the realistic and diversity measures. We now address all the individual questions.\\n\\nWe added Section 3.5 to point out the differences between the VAE component of our model and the SV2P and SVG models from prior work. In Section 3.4, we clarified what frames the discriminator takes, and in Section 4.3 we added a description of the deterministic version of our model. In Section A.1.1, we provided a better description of how frames are predicted at each time step. 
In Section 3.5 and A.1.2, we clarified that the latent variables are sampled at every time step.\\n\\nWe updated Section 4.4 to indicate that it is to be expected that although our SAVP model improves on diversity and realism, it also performs worse in accuracy compared to pure VAE models (both our own ablation and SVG from Denton & Fergus (2018)). A recent result [1] proves that there is a fundamental tradeoff between accuracy and realism, for all problems with inherent ambiguity. In fact, a recent challenge held at ECCV 2018 in such a problem [2] evaluates all algorithms on both of these axes, as neither adequately captures performance.\\n\\nNote that proposing a generator architecture is not the goal of this paper. Instead, we provide a systematic analysis of the effect of the loss function on this task (which could be applied to any generator). We use a warping-based generator, from prior work (Ebert et al. 2017), and include a comparison to SVG for completeness. In the updated draft, we clarify in Section 3.4 that the warping component assumes that videos can be described as transformations of pixels, but that any generator (including the one from Denton & Fergus (2018)) could be used with our losses. Since evaluating generator architectures is not the emphasis of this paper, we did not test the importance of the warping component nor test on videos where this hypothesis is less suitable.\\n\\nWe have included a revised plot in Figure 14 at the end of the Appendix (note that this temporary plot will be incorporated into Figure 6), where we use the official implementation of SSIM and replace the VGG metric with the Learned Perceptual Image Patch Similarity (LPIPS) metric (Zhang et al., 2018). LPIPS linearly calibrates AlexNet feature space to better match human perceptual similarity judgements. Aside from the first two predicted frames, our VAE ablation and the SVG model both achieve similar SSIM and LPIPS performance.\\n\\n[1] Yochai Blau and Tomer Michaeli. The perception-distortion tradeoff. In Conference on Vision and Pattern Recognition (CVPR), 2018. https://arxiv.org/abs/1711.06077 \\n\\n[2] Yochai Blau, Roey Mechrez, Radu Timofte, Tomer Michaeli, and Lihi Zelnik-Manor. 2018 PIRM Challenge on Perceptual Image Super-resolution. In Perceptual Image Restoration and Manipulation (PIRM) workshop at ECCV 2018. https://arxiv.org/abs/1809.07517\"}", "{\"title\": \"the paper proposes an extension of VAE-based video prediction models and produces an extensive evaluation. The model seems to perform well, but the originality and the improvement w.r.t. baselines are somewhat limited.\", \"review\": \"The paper introduces a generative model for video prediction. The originality stems from a new training criterion which combines VAE and GAN criteria. At training time, the GAN and the VAE are trained simultaneously with a shared generator; at test time, prediction conditioned on initial frames is performed by sampling from a latent distribution and generating the next frames via an enhanced ConvLSTM. Evaluations are performed on two movement video datasets classically used for benchmarking this task - several quantitative evaluation criteria are considered.\\n\\nThe paper clearly states the objective and provides a nice general description of the method. The proposed model extends previous work by adding an adversarial loss to a VAE video prediction model. The evaluation compares different variants of this model to two recent VAE baselines. 
A special emphasis is put on the quantitative evaluation: several criteria are introduced for characterizing different properties of the models with a focus on diversity. W.r.t. the baselines, the model behaves well for the \\u201crealistic\\u201d and \\u201cdiversity\\u201d measures. The results are more mixed for measures of accuracy. As for the qualitative evaluation, the model corrects the blurring effect of the reference SV2P baseline, and produces quite realistic predictions on these datasets. The difference with the other reference model (SVG) is less clear.\\n\\nWhile the general description of the model is clear, details are lacking. It would probably help to position the VAE component more precisely w.r.t. one of the two baselines, by indicating the differences. This would also help to explain the difference in performance/behavior w.r.t. these models (Fig. 5).\\n\\nIt seems that the discriminator takes a whole sequence as input, but some precision on how this is done could be added. Similarly, you did not indicate what the deterministic version of your model is.\", \"the_generator_model_with_its_warping_component_makes_a_strong_hypothesis_on_the_nature_of_the_videos\": \"it seems especially well suited for translations or for other simple geometric transformations characteristic of the benchmarking videos. Could you comment on the importance of this component? Did you test the model on other types of videos where this hypothesis is less relevant? It seems that the baseline SVG makes use of a simpler ConvLSTM for example.\\n\\nThe description of the generator in the appendix is difficult to follow. I missed the point in the following sentence: \\u201cFor each one-step prediction, the network has the freedom to choose to copy pixels from the previous frame, used transformed versions of the previous frame, or to synthesize pixels from scratch\\u201d.\\nAlso, it is not clear from the discussion on z, whether sampling is performed once for each video or for each frame.\\n\\nOverall, the paper proposes an extension of VAE-based video prediction models and produces an extensive evaluation. While the model seems to perform well, the originality and the improvement w.r.t. baselines are somewhat limited.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Straightforward paper and simple extension of VAE-GANs\", \"review\": \"This paper proposes to extend VAE-GAN from the static image generation setting to the video generation setting. It\\u2019s a well-written, simple paper that capitalizes on the trade-off between model realism and diversity, and the fact that VAEs and GANs (at least empirically) tend to lie on different sides of this spectrum.\\n\\nThe idea to extend the use of VAE-GANs to the video prediction setting is a pretty natural one and not especially novel. However, the effort to implement it successfully is commendable and will, I think, serve as a good reference for future work on video prediction. \\n\\nThere are also several interesting design choices that I think are worthy of further exposition. Why, for example, did the authors only perform variational inference with the current and previous frames? Did conditioning on additional frames offer limited further improvement? Can the blurriness instead be attributable to the weak inference model? Please provide a response to these questions. 
If the authors have any ablation studies to back up their design choices, that would also be much appreciated, and will make this a more valuable paper for readers.\\n\\nI think Figure 5 is the most interesting figure in the paper. I would imagine that playing with the hyperparameters would allow one to traverse the trade-off between realism and diversity. I think having such a curve will help sell the paper as giving the practitioner the freedom to select their own preferred trade-off. \\n\\nI don\\u2019t understand the claim that \\u201cGANs prioritize matching joint distributions of pixels over per-pixel reconstruction\\u201d and its implication that VAEs do not prioritize joint distribution matching. VAEs prioritize matching joint distributions of pixels and latent space: minimizing KL(q(z, x) || p(z, x)), which is a variational approximation of the problem of minimizing KL(q(x) || p(x)), where q(x) is the data distribution. The explanation provided by the authors is thus not sufficiently precise and I recommend the retraction of this claim.\", \"pros\": [\"Well-written\", \"Natural extension of VAE-GANs to video prediction setting\", \"Establishes a good baseline for future video prediction work\"], \"cons\": [\"Limited novelty\", \"Limited analysis of model/architecture design choices\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"VAE-GAN model for video prediction\", \"review\": \"\", \"summary\": \"The authors present a video prediction model called SAVP that combines a Variational Auto-Encoder (VAE) model with a Generative Adversarial Network (GAN) to produce more realistic and diverse future samples.\\n\\nDeterministic models and certain loss functions such as Mean Squared Error (MSE) will produce blurry results when making uncertain predictions. GAN predictions on the other hand usually are more visually appealing but often lack diversity, producing just a few modes. The authors propose to combine a VAE model with a GAN objective to combine their strengths: good quality samples (GAN) that cover multiple possible futures (VAE).\", \"strengths\": \"[+] GANs are notoriously unstable to train, especially for video. The authors formulate a VAE-GAN model and successfully implement it.\", \"weaknesses\": \"[-] The combination of VAEs and GANs, while new for videos, had already been proposed for image generation as indicated in the Related Work section and its formulation for video prediction is relatively straightforward given existing VAE (Denton & Fergus 2018) and GAN models (Tulyakov et al. 2018).\\n\\n[-] The results indicate that SAVP offers a trade-off between the properties of GANs and VAEs, but does not go beyond its individual parts. For example, the experiment of Figure 5 does not show SAVP being significantly more diverse than GANs for KTH (as compared to VAEs). Furthermore, Figure 6 and Figure 7 in general show SAVP performing worse than SVG (Denton & Fergus 2018), a VAE model with a significantly less complex generator, including for the metric (VGG cosine similarity) that the authors introduce arguing that PSNR and SSIM do not necessarily indicate prediction quality.\\n\\nWhile the use of a GAN in general will make the results less blurry and visually appealing, it does not necessarily mean that the samples it generates are going to be plausible or better. 
Since a direct application of video prediction is model-based planning, it seems that plausibility might be as important as sample quality. This work proposes to combine VAEs and GANs in a single model to get the benefits of both models. However, the experiments conducted generally show that SAVP offers only a trade-off between the visual quality of GANs and the coverage of VAEs, and does not show a clear advantage over current VAE models (Denton & Fergus, 2018) that obtain similar results with simpler architectures. While the presentation is clear and the evaluation of the model is thorough, I am unsure of the significance of the proposed method.\\n\\nIn order to better assess this model and compare it to its individual parts and other VAE models, could the authors:\\n\\n1) Compare SAVP to the SVG-LP/FP model on a controlled synthetic dataset such as Stochastic Moving MNIST (Denton & Fergus, 2018)?\\n2) Comment on the plausibility of the samples generated by SAVP? Do some samples show imagined objects \\u2013 implausible interactions for the robotic arm dataset? If so, what would be the advantage over blurry but plausible generations of a VAE?\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}" ] }
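(A side note making the KL argument in the second review above explicit; this is the standard decomposition, written here as a sketch:)

```latex
\mathrm{KL}\!\left(q(x,z)\,\|\,p(x,z)\right)
  = \mathrm{KL}\!\left(q(x)\,\|\,p(x)\right)
  + \mathbb{E}_{q(x)}\!\left[\mathrm{KL}\!\left(q(z\mid x)\,\|\,p(z\mid x)\right)\right]
  \;\ge\; \mathrm{KL}\!\left(q(x)\,\|\,p(x)\right)
```

Maximising the ELBO in expectation over the data distribution q(x) equals minimising the joint KL up to the constant entropy of q(x), so it upper-bounds the marginal mismatch KL(q(x) || p(x)) that the reviewer refers to.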
SkVe3iA9Ym
Beyond Winning and Losing: Modeling Human Motivations and Behaviors with Vector-valued Inverse Reinforcement Learning
[ "Baoxiang Wang", "Tongfang Sun", "Xianjun Sam Zheng" ]
In recent years, reinforcement learning methods have been applied to model gameplay with great success, achieving super-human performance in various environments, such as Atari, Go and Poker. However, those studies mostly focus on winning the game and have largely ignored the rich and complex human motivations, which are essential for understanding the agents' diverse behavior. In this paper, we present a multi-motivation behavior modeling approach which investigates the multifaceted human motivations and models the underlying value structure of the agents. Our approach extends inverse RL to the vector-valued setting, which imposes a much weaker assumption than previous studies. The vectorized rewards incorporate Pareto optimality, which is a powerful tool to explain a wide range of behavior by its optimality. For practical assessment, our algorithm is tested on the World of Warcraft Avatar History dataset spanning three years of gameplay. Our experiments demonstrate improvements over scalarization-based methods on real-world problem settings.
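(For reference, the objective structure this abstract implies, as summarised in the reviews below: each agent with a preference vector \phi on the simplex is modelled as maximising a scalarised vector-valued return; a sketch:)

```latex
\max_{\pi}\; V^{\pi}_{\phi}(s)
  = \mathbb{E}_{\pi}\!\left[\,\sum_{t=0}^{\infty} \gamma^{t}\,\phi^{\top} r_t \;\middle|\; s_0 = s\right],
\qquad \phi_i \ge 0,\quad \sum_i \phi_i = 1
```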
[ "losing", "modeling human motivations", "behaviors", "methods", "gameplay", "agents", "behavior", "inverse reinforcement", "inverse reinforcement learning", "recent years" ]
https://openreview.net/pdf?id=SkVe3iA9Ym
https://openreview.net/forum?id=SkVe3iA9Ym
ICLR.cc/2019/Conference
2019
{ "note_id": [ "SJxGAy3bl4", "BkxdGEkwaX", "Byl9CJ1Da7", "rJeG4mqH6Q", "SkeDO_x1am", "r1xcwnpq3Q", "SkgHFfw52X" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544826826183, 1542022160102, 1542021073573, 1541935914323, 1541503087223, 1541229665699, 1541202557168 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper681/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper681/Authors" ], [ "ICLR.cc/2019/Conference/Paper681/Authors" ], [ "ICLR.cc/2019/Conference/Paper681/Authors" ], [ "ICLR.cc/2019/Conference/Paper681/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper681/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper681/AnonReviewer1" ] ], "structured_content_str": [ "{\"metareview\": [\"Pros:\", \"new multi-objective approach to IRL\", \"new algorithm\", \"strong results\", \"real-world dataset\"], \"cons\": [\"straightforward theoretical extensions\", \"unclear motivation\", \"inappropriate empirical assessment metrics\", \"weak rebuttal\", \"All the reviewers feel that the paper needs further improvements, and while the authors comment on some of these concerns, their rebuttal and revised paper does not address them sufficiently. So at this stage it is a (borderline) reject.\"], \"confidence\": \"3: The area chair is somewhat confident\", \"recommendation\": \"Reject\", \"title\": \"Meta-review\"}", "{\"title\": \"Response to the review\", \"comment\": \"Thank you for the review! The most important clarification that we would like to make, is that \\\"Pareto dominance is a rather weak relation\\\" makes the model rather strong. That is because the dominance relation is the assumption of the IRL models, and weak assumptions are desired. We believe that justifies our motivation of studying the Pareto dominance in the IRL regime.\\n\\nOn the empirical study, we agree that presenting the algorithm on only the real-world environments may depend on the rational assumption. In fact, we are aware of that the diversity in action originates from both the diversity of the agents' objective and their optimality (or even rationality). There is not too much one can resolve in the real-world dataset, but we can test the algorithm on the well-known RL environment and show the performance. We are also working on improving the writing quality and thanks for pointing those out.\"}", "{\"title\": \"Response to the review\", \"comment\": \"Thanks a lot for your detailed review! A quick note is that pg. 5 \\\"If otherwise all elements in \\\\phi are generative\\\" the \\\\phi is the existence specification by the separation theorem. It is not what mentioned in Section 3.2 and is not restricted to the simplex. We are updating Section 3.2 to avoid the possible ambiguity. We also note that the model is trained and tested on disjoint subsets of WoWAH. The spider map was calculated by the entire population though.\\n\\nFor the contribution of the paper, we believe it is indeed worth investigating combining IRL with the diversity of preferences among agents. In fact, the problem of IRL with scalar-valued reward has been long open. The assumption (that is used by almost every IRL algorithm) is too strong to complex agents (such as humans). We developed Theorem 3.1 which significantly weaker the assumption. We agree that armed with the theorem, Section 3.2 and Section 4 was not aiming at a clear objective as you may expect. 
To make the objective clearer, it is more reasonable to reproduce the policy (or the set of policies) that was used to generate the trajectory dataset. That is because we already have the reward vector and also the Pareto-optimality relationship assumption, and the policy is the only unknown element. Some updates to the algorithm will be necessary then, which are currently under way.\\n\\nFor experiments, we agree that demonstrating only the real-world problem can be confusing. There are several constraints on running the algorithm on the real-world dataset, such as querying the state transition function. That makes the experiments themselves more dependent on their context. As a solution, we find it better to add some well-known experiments such as OpenAI Gym or a gridworld with vector rewards, which provide a more intuitive understanding of the empirical performance of the algorithm. We are currently working on that.\\n\\nWe have updated the writing in the revised version of the paper. Thanks a lot for pointing these issues out!\"}", "{\"title\": \"Response to the review\", \"comment\": \"Thanks very much for the review! For the algorithm, we agree that it is indeed straightforward. We would like to note, though, that dealing with the scalarized reward function has long been an open problem in inverse reinforcement learning. We have tried other (more complex) approaches but finally found out that the lower bound introduced in Theorem 3.1 is the most appropriate one. We believe that the estimation in Theorem 3.1 is a reasonable solution to the problem. On the other hand, there is room for other, subtler methods related to distance measurement in Section 3.2. We are working on incorporating these into the algorithm for better performance.\\n\\nWe thank the reviewer for the comments on the strong experiments. In fact, involving a real-world problem gives implications beyond typical simulator-based environments. It is important that the reviewer mentions \\\"No performance analysis for the proposed algorithm\\\". Keeping that in mind, we are working on adding some results on OpenAI Gym, which includes benchmarked tasks and quantitative evaluations.\\n\\nWe have corrected the notation typos in the updated draft and updated some other writing for clarity.\"}", "{\"title\": \"interesting paper with some issues\", \"review\": [\"This paper studies inverse reinforcement learning in a vector-valued setting. A key motivation of the paper, as suggested by its title, is to incorporate and analyze complex human motivations.\", \"The proposed setting seems new to me, although vector-valued rewards and Pareto optimality have been studied in the context of RL. The biggest issue of this paper, in my opinion, is that it doesn't properly support its claim that it improves the understanding of the agents' motivations and the reward functions. Detailed comments / questions are listed below.\", \"Pareto dominance is a rather weak relation. When the number of criteria increases, it is less likely one alternative dominates another. In this case, the binary comparisons defined in Sec. 2.1 become less discriminative. Is this a problem for the proposed method?\", \"Pareto dominance and vector-valued rewards have been studied in preference-based reinforcement learning, such as F\\u00fcrnkranz et al. 2012 @ MLJ and Cheng et al. 2011 @ ECML.\", \"Please fix the citation style in the paper and use \\\\citep and \\\\citet properly.\", \"The empirical study in this paper doesn't properly support the authors' claim. 
(1) It's questionable to assume the actions of a player in an online game are optimal or even rational. (2) The results presented in Figure 2 are hard to read and the differences look minor. (3) Maybe I missed it, but has Table 2 been referenced and explained in the paper?\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Interesting work, but needs further improvement\", \"review\": \"This paper presents NMBM, a general inverse reinforcement learning (IRL) model that considers multifaceted human motivations. The authors have motivated and proposed the algorithm (Section 2 and 3), and demonstrated some experimental results based on a real-world dataset (WoWAH, Section 4).\\n\\n-- Originality and Quality --\\n\\nTo the best of my knowledge, the proposed NMBM algorithm is new. However, I feel that the derivation of this algorithm is relatively straightforward based on existing literature. Specifically, this algorithm is based on (1) Theorem 3 and (2) the linear program defined in equation 9. My understanding is that both Theorem 3 and the derivation of the linear program in equation 9 are relatively straightforward based on existing literature.\\n\\nOn the other hand, the experimental results in Section 4 are very strong and interesting. They are the main strength of this paper.\\n\\n-- Clarity --\\n\\nMy understanding is that the writing of Section 3 and 4 can be (and should be) further polished.\", \"some_key_notations_in_the_paper_seem_to_be_wrong\": \"(1) In Theorem 3, how can the value function v^\\\\pi(s) be in the convex hull of policies? Also, e_i is not a set.\\n\\n(2) In the linear program in equation 9, \\\\eta should be another decision variable. \\n\\n-- Pros and Cons --\", \"pros\": \"1) Strong experiments.\", \"cons\": \"1) Insufficient novelty for algorithm design.\\n\\n2) No performance analysis for the proposed algorithm.\\n\\n3) Clarity needs to be further improved.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Review of \\\"Beyond Winning and Losing...\\\"\", \"review\": \"======== Summary ============\\n\\nThe authors consider a setup where there is a set of trajectories (s_t, a_t, r_t) where r_t is a *vector* of rewards. They assume that each agent is trying to maximize \\\\sum_t \\\\gamma^t (\\\\phi . r_t) where \\\\phi is a preference vector that lives on the simplex. Their goal is to calculate \\\\phi (and maybe also an optimal policy under \\\\phi?).\\n\\nThe authors first prove that this problem can be decomposed into finding Q functions for optimal policies for each component of r_t individually, and then solving for \\\\phi that rationalizes the trajectory of actions in terms of these Q functions. Given the entire collection of trajectories, they perform off-policy Q-learning on each component of r_t in order to learn the Q function for that component, and then use linear programming to solve for \\\\phi based on these Q functions.\\n\\n========== Comments =============\\n\\nI think it's a worthwhile direction to combine IRL with modeling a diversity of preferences among agents. I can imagine several reasons you might want to do this, but the authors are not clear about what their goal is besides \\\"to propose methods that can help to understand the intricacy and complexity of human motivations and their behaviors\\\". 
Is the goal to do better policy prediction? To do better policy prediction conditional on \\phi? To infer \\phi to understand people's preferences from a social science perspective? These all seem reasonable but are not sufficiently teased out in the work. (For comparison, IRL is typically - although not always - interested in learning the reward function in order to construct robust policies that maximize it). The authors also don't seem to solve a particular task of importance on the WoW dataset.\\n\\nThe theoretical approach seems sound, and I liked the way their algorithm was motivated and the way the problem was decomposed into off-policy Q-learning and then solving for \\\\phi.\\n\\nHowever, I found myself quite confused in the experimental section (4.3). The authors evaluate their approach by action prediction. Given the trajectories, is \\\\phi computed for each player, and are actions then computed based on that value of \\\\phi? Is \\\\phi computed on the same trajectory data used for evaluation or a different subset? Or is action prediction performed in aggregate across the entire population? The experimental setup was never clarified for this (main) experiment.\\n\\nI was also confused about the motivation for Figure 2 and Appendix D. The authors are showing that their predictions about which reward is motivating the players are consistent with external factors. But wouldn't you see the same thing if you just plotted the observed *rewards* themselves? E.g. players in a guild will achieve more Relationship reward. \\nThe proposed approach takes the vector of rewards, learns which actions are consistent with achieving each reward, then infers from the actions which reward the player is trying to achieve. What advantages does this have vs. just looking at the empirical trajectory of rewards for each player/group?\\nI can certainly imagine that the IRL approach has certain advantages over looking at the empirical reward stream, but the authors have not talked about this nor compared against it experimentally.\\n\\nThe writing could also use some improvement for a future iteration, I've listed a few points below:\\n\\npg.1, Neither Brown & Sandholm nor Moravcik et al use \\\"RL algorithms\\\"\\npg.1, Finn et al unmatched )\\npg.1, \\\"a scalar reward despite observed or not\\\" -> \\\"a scalar reward whether observed or not\\\"\\npg.2, \\\"Either the range of\\\" -> \\\"Both the range of\\\" (and this sentence needs further cleanup)\\npg.2, \\\"which records the pathing of players\\\" ??\", \"theorem_3\": \"\\\"each of the set e_i has an unique element...\\\" This isn't clear. I think you mean \\\"For each e_i there is a unique vector v^\\\\pi(s) for all \\\\pi \\\\in \\\\Pi_{e_i} . The equality holds if these vectors are distinct for each e_i\\\".\\npg. 5 \\\"If otherwise all elements in \\\\phi are generative\\\" how can they be negative if they are on the simplex?\\npg.5 \\\"we do not perform any scalarization on the reward...the model assumption is easier to be satisfied\\\" I think this is a strange comparison to IRL because in IRL you're trying to find a (possibly parametric) function (s,a) -> R, whereas here you're *given* the vector R and are trying to find \\\\phi. So while you have more degrees of freedom by adding \\\\phi, you lose the original degrees of freedom in the reward function.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
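(To make the recovery step summarised in the last review above concrete, a sketch of the Pareto-dominance relation and of a max-margin linear program for \phi given per-objective Q-values; the margin formulation is an assumption based on the reviews' description of equation 9, not the authors' exact program:)

```python
import numpy as np
from scipy.optimize import linprog

def pareto_dominates(u, v):
    """u Pareto-dominates v: at least as good in every objective and
    strictly better in at least one. With many objectives most pairs are
    incomparable, which is why this is a weak (hard-to-violate) assumption."""
    u, v = np.asarray(u), np.asarray(v)
    return bool(np.all(u >= v) and np.any(u > v))

def fit_preference_vector(Q, taken):
    """Recover phi on the simplex that rationalises observed actions.
    Q: (T, A, K) per-objective Q-values for T visited states, A actions,
       K reward components; taken: (T,) indices of the chosen actions."""
    T, A, K = Q.shape
    c = np.zeros(K + 1)
    c[-1] = -1.0  # variables x = [phi_1..phi_K, eta]; maximise eta
    A_ub, b_ub = [], []
    for t in range(T):
        for a in range(A):
            if a == taken[t]:
                continue
            diff = Q[t, taken[t]] - Q[t, a]              # length-K advantage
            A_ub.append(np.concatenate([-diff, [1.0]]))  # phi.diff >= eta
            b_ub.append(0.0)
    A_eq = np.concatenate([np.ones(K), [0.0]])[None, :]  # sum(phi) = 1
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * K + [(None, None)])
    return res.x[:K], res.x[-1]  # phi and the achieved margin
```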
HJMghjA9YX
Model Comparison for Semantic Grouping
[ "Francisco Vargas", "Kamen Brestnichki", "Nils Hammerla" ]
We introduce a probabilistic framework for quantifying the semantic similarity between two groups of embeddings. We formulate the task of semantic similarity as a model comparison task in which we contrast a generative model that jointly models two sentences with one that does not. We illustrate how this framework can be used for the Semantic Textual Similarity tasks using clear assumptions about how the embeddings of words are generated. We apply information-criteria-based model comparison to overcome the shortcomings of Bayesian model comparison, whilst still penalising model complexity. We achieve competitive results by applying the proposed framework with an appropriate choice of likelihood on the STS datasets.
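(A minimal sketch of the model comparison this abstract describes, using a diagonal-Gaussian likelihood and an AIC-style penalty; all modelling details here are illustrative assumptions, not the paper's exact criterion:)

```python
import numpy as np

def gauss_loglik(X, mu, var):
    # Sum of per-dimension diagonal-Gaussian log densities.
    return float(np.sum(-0.5 * (np.log(2 * np.pi * var) + (X - mu) ** 2 / var)))

def similarity(S1, S2, eps=1e-6):
    """S1, S2: (n_words, dim) arrays of word embeddings.
    Score = penalised log-likelihood ratio of a shared-parameter model
    ("one semantic group") vs. two independent models; the penalty of
    one unit per parameter matches AIC differences up to a factor of 2."""
    def fit(X):
        mu, var = X.mean(0), X.var(0) + eps
        return gauss_loglik(X, mu, var), 2 * X.shape[1]  # loglik, #params
    ll_joint, k_joint = fit(np.vstack([S1, S2]))
    ll1, k1 = fit(S1)
    ll2, k2 = fit(S2)
    # Higher = the joint model is preferred, i.e. more similar sentences.
    return (ll_joint - k_joint) - ((ll1 - k1) + (ll2 - k2))
```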
[ "model comparison", "semantic similarity", "STS", "von Mises-Fisher", "Information Theoretic Criteria" ]
https://openreview.net/pdf?id=HJMghjA9YX
https://openreview.net/forum?id=HJMghjA9YX
ICLR.cc/2019/Conference
2019
{ "note_id": [ "H1eppfFgeE", "HJev9Da014", "rye5KHf6J4", "ryx0pDkayE", "rkx405TKAX", "S1e1AGj_pm", "Bkx7YziO67", "SJeTGbYup7", "r1eamKOupm", "HyI7safM6X", "rJlFVJbo3X", "S1gnfMKc3Q" ], "note_type": [ "meta_review", "official_comment", "official_comment", "comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544749765236, 1544636302998, 1544525185984, 1544513477723, 1543260875637, 1542136519205, 1542136442964, 1542127893349, 1542125861206, 1541709211082, 1541242673469, 1541210644099 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper680/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper680/Authors" ], [ "ICLR.cc/2019/Conference/Paper680/Authors" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper680/Authors" ], [ "ICLR.cc/2019/Conference/Paper680/Authors" ], [ "ICLR.cc/2019/Conference/Paper680/Authors" ], [ "ICLR.cc/2019/Conference/Paper680/Authors" ], [ "ICLR.cc/2019/Conference/Paper680/Authors" ], [ "ICLR.cc/2019/Conference/Paper680/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper680/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper680/AnonReviewer1" ] ], "structured_content_str": [ "{\"metareview\": \"This paper presents a novel family of probabilistic approaches to computing the similarities between two sentences using bag-of-embeddings representations, and presents evaluations on a standard benchmark to demonstrate the effectiveness of the approach. While there seem to be no substantial disputes about the soundness of the paper in its current form, the reviewers were not convinced by the broad motivation for the approach, and did not find the empirical results compelling enough to serve as a motivation on its own. Given that, no reviewer was willing to argue that this paper makes an important enough contribution to be accepted.\\n\\nIt is unfortunate that one of the assigned reviewers\\u2014by their own admission\\u2014was not well qualified to review it and that a second reviewer did not submit a review at all, necessitating a late fill-in review (thank you, anonymous emergency reviewer!). However, the paper was considered seriously: I can attest that both of the two higher-confidence reviewers are well qualified to review work on problems and methods like these.\", \"confidence\": \"3: The area chair is somewhat confident\", \"recommendation\": \"Reject\", \"title\": \"Motivation and major contribution not clear.\"}", "{\"title\": \"Ethayarajh, 2018 does not perform a fair comparison to Arora et al., 2016. (uSIF is not better than SIF)\", \"comment\": \"We thank the reviewer again for their suggestion to look into the uSIF method presented in Ethayarajh, 2018. We have carefully reviewed the paper and reproduced results using the source code that the reviewer kindly provided. However, upon inspection of the source code, we can observe that preprocessing (which is not part of the random walk model proposed) on the sentences and word vectors has been carried out selectively on the uSIF method and not on the baseline compared against - Arora et al., 2016\\u2019s SIF.\", \"the_preprocessing_includes\": \"- Removal of all non-alphanumeric characters (https://github.com/kawine/usif/blob/d5bbaab750da644815ce8d84404b67b5a6c710c9/usif.py#L116 )\\n- Custom tokenization on top of NLTK tokenize (the sentEval framework uses a different Tokeniser, likewise with Arora et al. 
which made it impossible to reproduce these results under sentEval) (https://github.com/kawine/usif/blob/d5bbaab750da644815ce8d84404b67b5a6c710c9/usif.py#L119-L125 )\\n- L_2 normalization along word vector dimensions within a sentence (https://github.com/kawine/usif/blob/d5bbaab750da644815ce8d84404b67b5a6c710c9/usif.py#L133 ). Please note that this heuristic is not part of the modelling used to derive uSIF and does not directly address the issues that uSIF claims to solve; it is introduced as a way of standardising the variance along each dimension (without clear theoretical motivations as to why). This preprocessing can and should be applied to SIF and other methods in order to guarantee a fair comparison.\\n\\nIn order to fairly compare to SIF in Arora et al., 2016, Ethayarajh, 2018 should have run the SIF baseline under their customised STS_Test framework (https://github.com/kawine/usif/blob/master/usif.py ) and should have applied the same sentence processing techniques to SIF. We show in Table 1 that the huge gain in performance demonstrated in Ethayarajh, 2018 is completely due to the combination of the text preprocessing and the L_2 normalisation along dimensions. Normalisation alone is responsible for up to an absolute increase of 0.07 (7%) in Pearson correlation in some STS years. The punctuation and symbol filtering is on average responsible for an absolute increase of 0.025-0.03 (2.5% - 3%) in Pearson correlation.\\n\\nOverall, from the table below, we can see that the SIF weighting introduced in Arora et al., 2016 is on par with uSIF when applying the same text normalisation techniques and the L_2 norm on word vector dimensions. Thus we can conclude that our method would be competitive with uSIF under a fair comparison. \\n\\n+----------------------------+-----------+----------+----------+-----------+\\n| Method | STS12 | STS13 | STS14 | STS15 |\\n+----------------------------+-----------+----------+----------+-----------+\\n| uSIF - PCA5 - norm | 0.6043 | 0.6063 | 0.6700 | 0.6361 |\\n| SIF - PCA - norm | 0.6014 | 0.6016 | 0.6646 | 0.6286 |\\n+----------------------------+-----------+----------+----------+-----------+\\n| uSIF - PCA5 + norm | 0.6272 | 0.6765 | 0.7240 | 0.7260 |\\n| SIF - PCA + norm | 0.6275 | 0.6732 | 0.7226 | 0.7215 |\\n+----------------------------+-----------+----------+----------+-----------+\\n| uSIF + PCA5 + norm | 0.6493 | 0.7174 | 0.7439 | 0.7612 |\\n| SIF + PCA + norm | 0.6459 | 0.7089 | 0.7366 | 0.7517 |\\n| SIF + PCA5 + norm | 0.6492 | 0.7183 | 0.7440 | 0.7631 |\\n+----------------------------+-----------+----------+----------+-----------+\\nFair comparison of SIF and uSIF, using glove.840B.300d. The values are Pearson correlations. PCA is the principal component removal as per Arora et al. 2016 and PCA5 is the weighted principal component removal from Ethayarajh, 2018.\"}", "{\"title\": \"uSIF uses PCA on the test corpus, thus it would replace SIF+PCA, not SIF itself. (Please read Algorithm 1 in Ethayarajh, 2018.)\", \"comment\": \"In Ethayarajh, 2018 (lines 12-19 of Algorithm 1) you will notice that the uSIF sentence embedding uses PCA (for removing the common discourse vector) on the STS sentence corpus, making it an offline algorithm; thus it should be compared in the offline methods section. These methods require having the sentence corpus a priori in order to function at their best; you can observe a notable performance difference between SIF + PCA and just SIF. 
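For clarity on what "SIF" and "SIF + PCA" denote throughout this thread, a compact sketch (the a parameter and the principal-component step follow Arora et al., 2017; everything else, including names, is illustrative):

```python
import numpy as np

def sif_embed(sentences, vec, p_w, a=1e-3, remove_pc=True):
    """sentences: list of token lists; vec: dict word -> embedding;
    p_w: dict word -> unigram probability. SIF re-weights by a/(a+p(w)),
    averages, and optionally removes the first principal component."""
    dim = len(next(iter(vec.values())))
    E = np.zeros((len(sentences), dim))
    for i, sent in enumerate(sentences):
        words = [w for w in sent if w in vec]
        if words:
            E[i] = np.mean([a / (a + p_w.get(w, 1e-5)) * vec[w]
                            for w in words], axis=0)
    if remove_pc:
        # "SIF + PCA": subtract the projection on the corpus's first
        # singular vector; this needs the whole corpus, hence offline.
        _, _, Vt = np.linalg.svd(E, full_matrices=False)
        u = Vt[0]
        E = E - np.outer(E @ u, u)
    return E
```

The final step is exactly why the "+ PCA" variants cannot be applied when sentences arrive one at a time.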
In short, uSIF is related to SIF + PCA (as opposed to just SIF), and is not applicable in online information retrieval cases. Ethayarajh, 2018 does not provide comparisons with an alternate method that works in online settings (not carrying out the discourse vector removal). Thus, in order to run a valid comparison with online methods we would have to modify the implementation provided in Ethayarajh, 2018 and remove the principal component subtraction element from it, as done in Arora et al. 2017 (this amounts to setting m=0 in https://github.com/kawine/usif). This test would be required to show that the online version of uSIF significantly outperforms our method in online scenarios.\\n\\nIt is worth noting that in Ethayarajh, 2018 a different correlation metric (Pearson instead of Spearman) is used, and thus the results are not directly comparable (although conclusions can be drawn from its performance relative to SIF+PCA). Moreover, the intention of this work is not to outperform all existing baselines or to achieve state of the art, but to propose a probabilistic framework for deriving similarity measures. We never claim state of the art in the paper; we just state that our method remains strongly competitive with existing online baselines, which still holds true. \\n\\nFinally, Ethayarajh, 2018 was published as a workshop paper in July 2018, which was close to the submission deadline. It is difficult to keep up with every new paper, especially if it was published within 2 months before the ICLR deadline. We were unable to find another STS-related submission in ICLR 2018 that cites this paper, so we hope we have convinced the reviewer that the omission is justified.\"}", "{\"comment\": \"I think using uSIF (Ethayarajh, 2018) would be a better comparison for your method than SIF. uSIF fixes some of the problems with SIF and as a result does much better on the STS tasks, achieving state-of-the-art.\\n\\nThe GloVe version of uSIF does better than your approach for STS'12, STS'13, STS'14, and STS'15 (STS'16 is not given in the uSIF paper).\\n\\n[1] https://github.com/kawine/usif\", \"title\": \"uSIF instead of SIF?\"}", "{\"title\": \"Paper draft updated; Modifications summary\", \"comment\": [\"We would like to again thank the reviewers for their time. Taking their feedback into account, we have implemented the following changes:\", \"Provided a more thorough motivation for our work in the Introduction and Background sections of the paper.\", \"Clarified the difference between online and offline scenarios, as well as giving examples for each of them.\", \"Moved most of the mathematical work into the Appendix, in order to improve ease of reading.\", \"Added an additional section introducing the Gaussian likelihood, to illustrate a second example likelihood under our framework.\", \"Presented only results with stopwords, to address the concerns of unfair comparison with other methods.\", \"Conducted more experiments using Word2Vec and FastText embeddings, in addition to the original GloVe embeddings.\", \"Matched the results in Arora et al. 
(with principal component removal) more closely using the newly introduced Gaussian likelihood, using purely word embedding information, and no external inverse frequency weights as in Arora et al.\", \"Showed how our framework can be used to test assumptions about properties of word embeddings.\", \"Provided further experiments illustrating the poor performance of the Bayes Factor and BIC in Appendix E.\", \"With thanks,\", \"The authors of Paper680\"]}", "{\"title\": \"Clarifications and updated results (Gaussian likelihood function) [Part 1]\", \"comment\": \"We want to thank the reviewer for the suggested directions on motivating this work. We will try to address the main concerns below and will defer the details to the next version of the manuscript.\\n\\n\\\"The paper proposes a Bayesian model comparison based approach for quantifying the semantic similarity between two groups of embeddings (e.g., two sentences).\\\"\\n\\nWe would like to clarify a point here that didn't come across clearly enough in our submission. Unlike prior work [1, 2], we are not carrying out Bayesian model comparison - we cover this approach in depth since it is the most relevant prior work to our framework. We carefully review why Bayesian model comparison may not be well suited to this application due to the Bayes Factor's sensitivity to the prior [3, 4]. In order to overcome this we propose model comparison criteria that minimise KL divergence across a candidate set of models. This results in a penalized likelihood ratio test which gives competitive results. We will further clarify this difference in the next version of the manuscript.\\n\\nWe have carried out additional experiments using the Bayes Factor and the Bayesian Information Criterion (BIC). The results given by these approaches underperform significantly compared to the information-theoretic criteria. We have provided empirical [5] and theoretical [3, 4] justifications as to why these two techniques are not well suited for the STS task. We will also add the experimental evidence that both the Bayes Factor under a vague prior and BIC perform poorly on the STS task.\\n\\n[1] P. Marshall et al. Bayesian evidence as a tool for comparing datasets. Physical Review D, 2006.\\n[2] Z. Ghahramani and K. Heller. Bayesian sets. In Advances in neural information processing systems, 2006.\\n[3] M. Bartlett. A comment on D. V. Lindley\\u2019s statistical paradox. Biometrika, 1957\\n[4] H. Akaike et al. Likelihood of a model and information criteria. Journal of econometrics, 1981\\n[5] J. Dziak et al. Sensitivity and specificity of information criteria. The Methodology Center and Department of Statistics, The Pennsylvania State University, 2012.\\n\\n\\\"What are the advantages compared to widely used learned models (say, a learned CNN that takes as input two sentences and outputs the similarity score)?\\\"\\n\\nA supervised approach such as the one suggested by the reviewer would definitely be an interesting research direction. It is often argued that generative models, such as the one proposed, are less susceptible to over-fitting on small training sets than discriminative models. Discriminative models are likely to fit noise in small training sets, such as the STS data set, which has on the order of thousands of labeled pairs. 
For this reason, common competitive approaches in the domain mainly rely on either semi-supervised or unsupervised learning procedures.\\n\\nSemi-supervised approaches (used in STS) do not use human-labelled similarity pairs to train on, but instead train a supervised objective on a different task with plenty of data (such as aligned paraphrases) and then use the learned representations from these as sentence embeddings. The general focus of the STS task is on the unsupervised or low-resource setting.\\n\\nIt may be very costly to obtain a large enough labelled dataset for some of the supervised methods to be able to generalize in domain-specific applications. This gives a practical motivation for the unsupervised approaches. We will discuss these comparisons and motivations in more detail in the updated version of the manuscript.\"}", "{\"title\": \"Clarifications and updated results (Gaussian likelihood function) [Part 2]\", \"comment\": \"\\\"The latter can fit the ground-truth labels given by humans, while it's unclear the model comparison leads to good correlation with human judgments. Some discussion should be provided.\\\"\\n\\nSTS provides a test set in order to evaluate how the methods correlate with human scores, which we have used to benchmark our proposed models. That is, performing well on the test set suggests there is a correlation between human judgment and the model's prediction of the similarity score. We will clarify this in the new manuscript.\\n\\nWe will discuss the relation of our approach to the one presented in Equation (9) (Tversky's contrast model) of [6], a work that analyses what a good similarity is from a cognitive science perspective.\\n\\n[6] J. B. Tenenbaum and T. L. Griffiths. Generalization, similarity, and Bayesian inference. Behavioral and Brain Sciences, 2001.\\n\\n\\\"Have you considered using other models? In particular, more sophisticated ones may lead to better performance.\\\"\\n\\nThe motivation for this paper is to introduce a framework in which different probabilistic models can be assessed on the STS task. This is done such that a practitioner, through specifying the likelihood function, can encode suitable assumptions and constraints that may be favourable to the application of interest. The primary goal of this work is not to find the most accurate model; however, we hope that this framework could be a stepping stone towards using more complex and accurate generative models of text to assess semantic similarity. In the next draft of the paper we will add two different likelihoods which allow for non-unit-normed vectors, unlike the vMF distribution.\\n\\n\\\"The experiments are just too simple and incomplete to make reasonable conclusions.\\\"\\n\\nWe are unsure if the reviewer has concerns about the STS task in particular, or the variety of experiments run.\\n\\nTo address the former, we provide our argument for why we think STS is an adequate task to assess performance on. Our focus is on the setting where one has word-level embeddings that contain semantic information about individual words, but no labelled corpus with examples of similar and dissimilar pairs (an unsupervised setting). Furthermore, we assume sentences arrive in an 'online' fashion, meaning that we don't have access to the whole sentence corpus a priori. An example use-case like this is a chat-bot application.\\n\\nTo address the latter, we will extend our experiments by considering other word vectors usually used to assess performance on STS, such as fasttext and word2vec.
We will also include experimental results using other likelihoods and information criteria within our framework. Below we provide a preliminary set of results using a Gaussian likelihood with a diagonal covariance matrix.\\n\\n+--------------+-------------+-----------+----------+----------+----------+----------+----------+\\n| | Method | STS12 | STS13 | STS14 | STS15 | STS16| W. A.* |\\n+--------------+-------------+-----------+----------+----------+----------+----------+----------+\\n| fasttext | Ours | 0.6193 | 0.6335 | 0.6721 | 0.7328 | 0.7518| 0.6765|\\n| | SIF+PCA | 0.5893 | 0.7121 | 0.6790 | 0.7498 | 0.7142| 0.6810|\\n+--------------+-------------+-----------+----------+----------+----------+----------+----------+\\n| glove | Ours | 0.6031 | 0.6132 | 0.6445 | 0.7171 | 0.7346| 0.6564|\\n| | SIF+PCA | 0.5681 | 0.6844 | 0.6546 | 0.7166 | 0.6931| 0.6552|\\n+--------------+-------------+-----------+----------+----------+----------+----------+----------+\\n| word2vec| Ours | 0.5630 | 0.5799 | 0.6291 | 0.6951 | 0.6701| 0.6265|\\n| | SIF+PCA | 0.5324 | 0.6486 | 0.6510 | 0.7031| 0.6609| 0.6347|\\n+--------------+-------------+-----------+----------+----------+----------+----------+----------+\\n* W.A. stands for weighted average\\n\\nWould experiments along these lines address the simplicity concern of the reviewer?\"}", "{\"title\": \"Clarifications and updated results (Gaussian likelihood function)\", \"comment\": \"We would like to thank the reviewer for their in-depth feedback. Below, we present preliminary results, as well as clarifications on the conceptual questions that were posed.\\n\\n\\\"My concern with this paper however, is that I feel the paper lacks a motivation...\\\"\\n\\nThe main focus of this paper is the introduction of a framework that allows for clear assumptions to be made about the distribution of word vectors in a sentence via a choice of likelihood, and for these assumptions to be tested on the STS benchmark: any likelihood will fit into this general framework and produce a similarity measure. This allows practitioners to design likelihoods that encode suitable properties for their application.\\n\\nThe more practical motivation for this paper is that the online* setting is key for many real-world use-cases such as information retrieval for dialogue systems (i.e. chatbots), where new queries will arrive in an online fashion and methods like SIF+PCA will not be as applicable as they are in the STS task. Whilst the method is derived as an online method, it can be used in applications that have offline components, and we have shown it remains competitive with offline methods such as SIF+PCA (see the next section of this response).\\n\\n*We thank the reviewer for helpfully clarifying the definitions of an \\\"online\\\" and \\\"offline\\\" setting; we will include these definitions in the next version of the manuscript.\\n\\n\\\"...namely because embeddings must have unit norm in their model.\\\"\\n\\nThe reviewer has helpfully pointed out an implicit assumption that we made - namely, we assumed that the magnitude of a word embedding is noise rather than useful information. To test this assumption, we are running experiments with a multivariate Gaussian likelihood with diagonal covariance. This does not require unit-norming the vectors, and a set of preliminary results is presented below.\\n\\n+--------------+-------------+-----------+----------+----------+----------+----------+----------+\\n| | Method | STS12 | STS13 | STS14 | STS15 | STS16| W. A.* |\\n+--------------+-------------+-----------+----------+----------+----------+----------+----------+\\n| fasttext | Ours | 0.6193 | 0.6335 | 0.6721 | 0.7328 | 0.7518| 0.6765|\\n| | SIF+PCA | 0.5893 | 0.7121 | 0.6790 | 0.7498 | 0.7142| 0.6810|\\n+--------------+-------------+-----------+----------+----------+----------+----------+----------+\\n| glove | Ours | 0.6031 | 0.6132 | 0.6445 | 0.7171 | 0.7346| 0.6564|\\n| | SIF+PCA | 0.5681 | 0.6844 | 0.6546 | 0.7166 | 0.6931| 0.6552|\\n+--------------+-------------+-----------+----------+----------+----------+----------+----------+\\n| word2vec| Ours | 0.5630 | 0.5799 | 0.6291 | 0.6951 | 0.6701| 0.6265|\\n| | SIF+PCA | 0.5324 | 0.6486 | 0.6510 | 0.7031| 0.6609| 0.6347|\\n+--------------+-------------+-----------+----------+----------+----------+----------+----------+\\n* W.A. stands for weighted average\\n\\n\\\"...I do find the results to be lackluster.\\\"\\n\\nAs we can see, the Gaussian distribution seems to be a better fit than the vMF one, matching SIF+PCA on the three word embeddings we tested on. We hope this addresses the concern of the reviewer that methods which depend on embedding magnitude won't be applicable with this framework. We will include a more thorough set of results in the next version of the manuscript.\\n\\n\\\"What mechanism was used to identify the stop words and does removing these help the other methods...\\\"\\n\\\"What happens to all methods when stop words are not removed?\\\"\\n\\n[EDIT] We have decided to report only results of experiments without any stopword removal. As the reviewers suggested, stopword removal heavily benefited the vMF likelihood, while the newly introduced Gaussian likelihood proves to be more robust. This decision also heavily reduces the clutter of the paper and the amount of care that needs to be taken to reproduce our results.\"}", "{\"title\": \"Clarifications and updated results (Gaussian likelihood function)\", \"comment\": \"We would like to thank the reviewer for their comments. We have tried to address their concerns below.\\n\\n\\\"I could not understand what were the differences between the online and offline settings...\\\"\\n\\nWe apologise for the lack of definition of these terms in the paper. This will be remedied in the next version. The difference between an online and an offline setting is whether one has access, all at once, to the entire dataset on which the methods will be evaluated. Information retrieval is an example setting in which one cannot perform the PCA on the query dataset as seen in (Arora et al. 2017), since new queries will arrive in an online fashion.\\n\\n\\\"...the paper could do a much better job at explaining the motivation behind the work...\\\" \\n\\nSimilarity measures are often not theoretically justified - for example, cosine similarity is preferred to dot product similarity based purely on empirical results. It is difficult for a practitioner to utilize word vectors efficiently if the underlying assumptions in the similarity measure are not well understood. Our framework addresses these issues by explicitly deriving the similarity through the likelihood of the chosen generative process, instead of empirically motivating the similarity measure. By designing the likelihood, the practitioner can encode suitable assumptions and constraints that may be favourable to the application of interest.
Furthermore, this framework proposes a new research direction that could help understand semantic similarity, in which practitioners can study suitable distributions and see how these perform. We will elaborate on this further in the updated version of the manuscript.\\n\\nThe second motivation of this work is to derive a simple but competitive similarity measure that would perform well in online settings (as defined above). Online settings are both practical and key to use-cases that involve information retrieval in dialogue systems. For example, in a chat-bot application new queries will arrive in an online fashion, and methods such as SIF+PCA will not perform as strongly as they do on STS. This is because the method itself (in this case the PCA part) was fitted on the reported test set, which will not be available a priori in online settings.\\n\\n\\\"I am not convinced of the practicality of the algorithm...\\\" \\n\\nWe were unsure what the reviewer meant by the practicality of the algorithm. The presented algorithm requires no more than 30 lines of code to implement once the derivatives for the chosen likelihood have been calculated. Furthermore, the derivatives can be computed automatically with frameworks such as autograd, TensorFlow, PyTorch, and others. (An illustrative sketch in this spirit is appended after this record.)\\n\\nBelow we provide the results when using a multivariate Gaussian distribution with diagonal covariance as a likelihood.\\n\\n+--------------+-------------+-----------+----------+----------+----------+----------+----------+\\n| | Method | STS12 | STS13 | STS14 | STS15 | STS16| W. A.* |\\n+--------------+-------------+-----------+----------+----------+----------+----------+----------+\\n| fasttext | Ours | 0.6193 | 0.6335 | 0.6721 | 0.7328 | 0.7518| 0.6765|\\n| | SIF+PCA | 0.5893 | 0.7121 | 0.6790 | 0.7498 | 0.7142| 0.6810|\\n+--------------+-------------+-----------+----------+----------+----------+----------+----------+\\n| glove | Ours | 0.6031 | 0.6132 | 0.6445 | 0.7171 | 0.7346| 0.6564|\\n| | SIF+PCA | 0.5681 | 0.6844 | 0.6546 | 0.7166 | 0.6931| 0.6552|\\n+--------------+-------------+-----------+----------+----------+----------+----------+----------+\\n| word2vec| Ours | 0.5630 | 0.5799 | 0.6291 | 0.6951 | 0.6701| 0.6265|\\n| | SIF+PCA | 0.5324 | 0.6486 | 0.6510 | 0.7031| 0.6609| 0.6347|\\n+--------------+-------------+-----------+----------+----------+----------+----------+----------+\\n* W. A. stands for weighted average\\n\\nAs can be seen from the table, the results for this likelihood are on par with the ones presented in (Arora et al. 2017).\\n\\n\\\"The approach needs to remove stop-words, which is reminiscent of good old feature engineering.\\\"\\n\\n[EDIT] We have decided to report only results of experiments without any stopword removal. As the reviewers suggested, stopword removal benefits some approaches more than it does others. This decision also heavily reduces the clutter of the paper and the amount of care that needs to be taken to reproduce our results.\\n\\n\\\"...discussing whether the algorithm is faster for common ranges of d (the word embedding dimension)...\\\"\\n\\nWe appreciate the suggestion of grounding the complexity analysis with values for N and D in the ranges experienced in the STS dataset.
We will provide an analysis with the next version of the manuscript.\"}", "{\"title\": \"Interesting idea but somewhat incomplete study\", \"review\": \"The paper proposes a Bayesian model comparison based approach for quantifying the semantic similarity between two groups of embeddings (e.g., two sentences). In particular, it proposes to use the difference between the probability that the two groups are from the same model and the probability that they are from different models.\\n\\nWhile the approach looks interesting, I have a few concerns: \\n-- Using the Bayesian model comparison framework seems to be an interesting idea. However, what are the advantages compared to widely used learned models (say, a learned CNN that takes as input two sentences and outputs the similarity score)? The latter can fit the ground-truth labels given by humans, while it's unclear whether the model comparison leads to good correlation with human judgments. Some discussion should be provided.\\n-- The von Mises-Fisher likelihood is a very simplified model of actual text data. Have you considered using other models? In particular, more sophisticated ones may lead to better performance. \\n-- Different information criteria can be plugged in. Are there comparisons? \\n-- The experiments are just too simple and incomplete to make reasonable conclusions. For example, it seems that compared to SIF there is not much advantage even in the online setting.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Interesting model, but would like to see some more motivation\", \"review\": \"The authors propose a probabilistic model for computing the sentence similarity between two sets of representations in an online fashion (that is, they do not need to see the entire dataset at once as SIF does when using PCA). They evaluate on the STS tasks and outperform competitive baselines like WMD, averaging embeddings, and SIF (without PCA), but they have worse performance than SIF + PCA.\\n\\nThe paper is clearly written and their model is carefully laid out along with their derivation. My concern with this paper, however, is that I feel the paper lacks a motivation: was it to derive an online similarity metric that outperforms SIF (without PCA)?\\n\\nA few experimental questions/comments:\\n\\nWhat happens to all methods when stop words are not removed? How far does performance fall? I think one reason it might fall (in addition to the reasons given in the paper) is that all vectors are set to have the same norm. For STS tasks, often the norms of these vectors are reduced during training, which lessens their influence. What mechanism was used to identify the stop words, and does removing these help the other methods (I know in the paper stop words were removed in the baselines; did this unilaterally improve performance for these methods)?\\n\\nOverall I do like the paper; however, I do find the results to be lackluster. There are many papers on combining word embeddings trained in various ways that have much stronger numbers on STS, but these methods won't be effective with this type of similarity (namely because embeddings must have unit norm in their model).
Therefore, I think the paper needs some more motivation and experimental evidence of its superiority over related methods like SIF+PCA in order for it to be accepted.\\n\\nPROS\\n- Probabilistic model with clear design assumptions from which a similarity metric can be derived.\\n- The derived similarity metric doesn't require knowledge of the entire dataset (in comparison to SIF + PCA).\\n\\nCONS\\n- Performance seems to be slightly better than SIF, WMD, and averaging word embeddings, but below that of SIF + PCA.\\n- Unclear motivation for the model: was it to derive an online similarity metric that outperforms SIF (without PCA)?\\n- Requires the removal of stop words, but doesn't state how these were defined. Minor point, but tuning this could be enough to cause the improvement over related methods.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Interesting paper but lacking both context and comprehensive analyses\", \"review\": \"Main contribution: devising and evaluating a theoretically-sound algorithm for quantifying the semantic similarity between two pieces of text (e.g., two sentences), given pre-trained word embeddings (GloVe).\\n\\nClarity: The paper is generally well-written, but I would have liked to see more details regarding the motivation for the work, the description of the prior work and the discussion of the results. As an example, I could not understand what the differences between the online and offline settings were, with only a reference to the (Arora et al. 2016) paper, which contains neither \\\"online\\\" nor \\\"offline\\\". The mathematical derivations are detailed, which is nice.\\n\\nOriginality: The work looks original. It proposes a method for quantifying semantic similarity that does not rely on cosine similarity.\\n\\nSignificance: I should start by saying I am not a great reviewer for this paper. I am not familiar with the STS dataset and don't have the mathematical background to fully understand the authors' algorithm.\\nI like to see theoretical work in a field that desperately needs some, but overall I feel the paper could do a much better job at explaining the motivation behind the work, which is limited to \\\"cosine similarity [...] is not backed by a solid theoretical foundation\\\".\\n\\nI am not convinced of the practicality of the algorithm either: the algorithm seems to improve slightly over the compared approaches (and it is unclear if the differences are significant), and only in some settings. The approach needs to remove stop-words, which is reminiscent of good old feature engineering. Finally, the paper claims better average time complexity than some other methods, but discussing whether the algorithm is faster for common ranges of d (the word embedding dimension) would also have been interesting.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"1: The reviewer's evaluation is an educated guess\"}" ] }
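The similarity described throughout this record — compare the penalized likelihood of "both sentences drawn from one model" against "each sentence drawn from its own model" — can be made concrete in a few lines. The sketch below is an illustrative reconstruction, not the authors' code: it uses the diagonal-covariance Gaussian likelihood from the rebuttals, and an AIC-style parameter-count penalty stands in for the paper's actual information criterion, which may differ.

```python
import numpy as np

def diag_gaussian_loglik(X):
    # Maximized log-likelihood of the rows of X under a diagonal-covariance
    # Gaussian fit by maximum likelihood; returns (log-likelihood, #parameters).
    n, d = X.shape
    mu = X.mean(axis=0)
    var = X.var(axis=0) + 1e-6  # floor the MLE variance for numerical stability
    ll = -0.5 * n * np.sum(np.log(2.0 * np.pi * var)) \
         - 0.5 * np.sum((X - mu) ** 2 / var)
    return ll, 2 * d  # d means + d variances

def similarity(X, Y):
    # Penalized likelihood-ratio similarity between two bags of word vectors
    # (rows of X and Y). Higher means "more plausibly from the same source".
    ll_joint, k_joint = diag_gaussian_loglik(np.vstack([X, Y]))
    ll_x, k_x = diag_gaussian_loglik(X)
    ll_y, k_y = diag_gaussian_loglik(Y)
    # AIC-style penalty: subtract the parameter count from each log-likelihood.
    return (ll_joint - k_joint) - ((ll_x - k_x) + (ll_y - k_y))
```

With, e.g., 300-d GloVe vectors, X and Y would be the stacked embeddings of the two sentences' words; the joint model pays for only one set of parameters instead of two, so dissimilar sentences — which fit two separate Gaussians much better — score lower.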
Hkxx3o0qFX
High Resolution and Fast Face Completion via Progressively Attentive GANs
[ "Zeyuan Chen", "Shaoliang Nie", "Tianfu Wu", "Christopher G. Healey" ]
Face completion is a challenging task with the difficulty level increasing significantly with respect to high resolution, the complexity of "holes" and the controllable attributes of filled-in fragments. Our system addresses the challenges by learning a fully end-to-end framework that trains generative adversarial networks (GANs) progressively from low resolution to high resolution with conditional vectors encoding controllable attributes. We design a novel coarse-to-fine attentive module network architecture. Our model is encouraged to attend on finer details while the network is growing to a higher resolution, thus being capable of showing progressive attention to different frequency components in a coarse-to-fine way. We term the module Frequency-oriented Attentive Module (FAM). Our system can complete faces with large structural and appearance variations using a single feed-forward pass of computation with mean inference time of 0.54 seconds for images at 1024x1024 resolution. A pilot human study shows our approach outperforms state-of-the-art face completion methods. The code will be released upon publication.
[ "Face Completion", "progressive GANs", "Attribute Control", "Frequency-oriented Attention" ]
https://openreview.net/pdf?id=Hkxx3o0qFX
https://openreview.net/forum?id=Hkxx3o0qFX
ICLR.cc/2019/Conference
2019
{ "note_id": [ "HJxdcNSleN", "H1lHPv8iAm", "Skex8Gg067", "BkxC-fgA6X", "ByeFPbgATX", "rylzbX3ThX", "ryl5fByTjQ", "Syl6qnpoj7" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544733840111, 1543362397188, 1542484551547, 1542484486318, 1542484320739, 1541419769676, 1540318482010, 1540246677155 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper678/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper678/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper678/Authors" ], [ "ICLR.cc/2019/Conference/Paper678/Authors" ], [ "ICLR.cc/2019/Conference/Paper678/Authors" ], [ "ICLR.cc/2019/Conference/Paper678/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper678/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper678/AnonReviewer2" ] ], "structured_content_str": [ "{\"metareview\": \"All reviewers gave a 5 rating.\\nThe author rebuttal was not able to alter the consensus view of reviewers.\\nSee below for details.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"All reviewers assess the paper as being marginally below acceptance threshold\"}", "{\"title\": \"The rebuttal is not convincing\", \"comment\": \"The authors claimed that the results in the paper are some \\\"typical cases\\\", neither completely random, nor cherry-picked bad results for CTX. However, \\\"typical cases\\\" are still very vague. The author still did some kind of selection. Since there are user studies, why not showing some top selected results and least selected results by the users when comparing with CTX. It is weird to say that one method is much better than the other, while the images look very similar.\\n\\nThe response is also self-contradictory. The author stated that \\\"it is not fair to compare the performance of two GAN approaches by looking at only a few samples\\\", but only a few examples are selected to demonstrate the benefit of FAM in Fig. 8. Since FAM is one of the claimed novel parts of the paper, why not including the results without FAM in the user study.\"}", "{\"title\": \"Response to Reviewer Three\", \"comment\": \"\\u201cThe results did not significantly outperform previous methods such as CTX in terms of visual quality\\u2026In figure 6, the results compared to CTX look similar. And the figure is too small to see the details.\\u201d\\n\\nWhen we chose the sample images (Figure 6), we did not intentionally choose bad samples of CTX and good samples of our approach. Instead, we want to demonstrate some typical cases when each of these approaches failed or succeeded. Since it is not fair to compare the performance of two GAN approaches by looking at only a few samples, we used the results of a user study, which is known as the \\u201cgolden standard\\u201d to evaluate GANs, to show the overall performance of different approaches, which we think should be more convincing. \\n\\n\\u201cThe visualization of the attention features look like normal feature in a neural network\\u201d\\n\\nThe filters (Figure 1) showed clear and regular patterns as we expected. For instance, while the resolution increased from 8x8 to 1024x1024, the model attended on higher frequency information. Regions with rich details (e.g. eyes) got more attention, especially at high resolutions. 
It is unlikely that they are simply some normal features in a neural network.\\n\\n\\u201cin Figure 8, the quality of results with and without FAM look very similar. These 4 images were selected from 3000 test images, but the difference is too small to show the benefit of FAM.\\u201d\\n\\nThe FAM is designed to enhance details. If we look closely at the third and fourth rows of Figure 8, the results without FAM are blurrier, especially in regions with rich details (e.g. eye regions). Also, results with FAM usually have fewer distortions. \\n\\nAgain, we demonstrated the typical performance of models with and without FAM, instead of intentionally choosing images that showed the worst cases of images without FAM. \\n\\n\\u201cIn figure 8, it is unclear how the performance changes with each loss. Probably the results without L_bdy, L_rec, L_feat should be analyzed separately.\\u201d\\n\\nThis is a good suggestion; it would be better to do a more thorough ablation study. However, the effects of many losses (e.g. L_rec, L_feat) have been well studied in the previous literature (e.g. Li et al., 2017) and thus they are not the focus of our work.\\n\\n\\u201cHow many images were used in the user study? Did each subject evaluate the entire test set of 3009 images?\\u201d\\n\\nFor session 1, where the experiment directly compares our method with the context encoder, each subject evaluates 100 randomly chosen images. For sessions 2, 3 and 4, where each method is compared with the ground truth, each subject evaluates another random 100 images. The total coverage rate over the entire test set is about 86%.\"}", "{\"title\": \"Response to Reviewer Two\", \"comment\": \"Thanks for the professional reviews. We would like to make some clarifications to better explain our work.\\n\\n\\u201cAlso the experimental results did not demonstrate better performance of the proposed approach. Why is that?\\u201d\\n\\nCould you please explain which part of the results you are referring to? \\nBoth the results of the quantitative evaluation and the user study showed our model performed better. In Table 1, for L1 and L2, the smaller value is better. For PSNR, the larger value is better. In Figure 6, the larger value is better. \\n\\n\\u201cWhat is the major novelty compared with these existing works? (Progressive GAN and Wang et al.)\\u201d\\n\\nThe Progressive GAN is an image GENERATION network and the work of Wang et al. is an image TRANSLATION network. They are both different from the image COMPLETION networks (e.g. Pathak et al., 2016, Li et al., 2017, Iizuka et al., 2017, Yu et al., 2018, Liu et al., 2018, etc.) in terms of goals, network structures, training methods and loss functions, and are not directly comparable with our model. Neither the Progressive GAN nor the work of Wang et al. can be applied to the image completion task directly, though some of their ideas can be adopted to design completion models (e.g. the progressive training methodology in Progressive GAN). \\n\\nThe input of an image generation model (e.g. Progressive GAN) is noise, and the output is a random realistic image. The image completion task is more challenging because it not only requires generating plausible content, but also expects the generated content to match the contextual information perfectly. \\n\\nThe input of an image translation model is a complete image from one domain (e.g. segmentation labels), and the output is a transformed image in another domain, such as a realistic photo or a painting of another style (e.g. Zhu et al., 2017).
The key difference is that some information is missing in the input of an image completion network, and the completion model needs to infer plausible content conditioned on contextual information.\\n\\nTherefore, it is more reasonable to compare our work with other completion models, rather than a generation or translation model. As we discussed in the response to R1, we have adopted many ideas from networks outside the image completion area and successfully integrated them to obtain an effective completion model. We have also designed novel structures, pipelines and loss functions so that our model can work appropriately as a whole. To our knowledge, our method is quite unique compared to other image completion networks. \\n\\n\\u201cwhy the frequency attention module will yield better results?\\u201d\\n\\nTraditionally, researchers use the attention mechanism in the spatial domain. Instead of learning to generate/complete the whole image at once, the model is encouraged to focus on a small region in one step. For instance, the DRAW model (Gregor et al., 2015) learns to read and write a small region of the image at each timestep, and the whole image can be produced after many iterations. CTX (Yu et al., 2018) uses a contextual attention layer to help the model borrow contextual information from distant locations while filling in missing \\u201choles\\u201d. \\n\\nLike these spatial-attention-based methods, we design an attention mechanism in the frequency domain. Instead of generating image features at different levels of detail in a single step, our model is encouraged to learn the structures in a coarse-to-fine manner. The detailed design of FAM is described in Section 3.2.1. The results (Figure 1) show that our model performed as we expected: it focused on coarse structures when the resolution was low and switched its attention to finer details (e.g. hair or eye regions) as the resolution increased. This attention mechanism works because the complex problem of completing high-resolution images is divided into many sub-problems (a hypothetical sketch of this coarse-to-fine gating idea is appended after this record). \\n\\n\\u201cImprove the experiment, compared with stronger baselines: consider at least one or two of these state-of-the-art approaches\\u201d\\n\\nCTX (Yu et al., CVPR18) is considered state-of-the-art. When we ran the user study, it was the only approach that worked for 256x256 images. We also included the comparison with another state-of-the-art approach, GL (Iizuka et al., SIGGRAPH17), in the quantitative comparison (Table 1).\"}", "{\"title\": \"Response to Reviewer One\", \"comment\": \"Thanks for the professional reviews. We would like to make some clarifications to better explain our work.\\n\\n\\u201cit is not clear to me what aspect of their GAN is particularly new\\u201d\\n\\nWe agree that some building blocks of our model, such as the Context Encoder structure (Pathak et al., 2016), the progressive training methodology (Progressive GAN, Karras et al., 2017), the conditional GAN (Mirza et al., 2014), etc., are based on existing approaches. But it is a challenging task to integrate these methods to obtain an effective completion model. On top of these existing approaches, we have also designed new structures (e.g. our novel Frequency-Oriented Attentive Module), a novel pipeline (Figure 2) and loss functions (e.g. the boundary loss) to significantly improve the performance.\\n\\nPlease note that most of these building blocks are not originally designed for image completion and are seldom used in completion models.
For instance, the Progressive GAN is an image GENERATION model whose input is noise and whose output is a random realistic image. However, image COMPLETION is a more challenging task. Conditioned on corrupted images (i.e. the input), we not only need to generate plausible content, but also need to make sure that the content matches the contextual information perfectly. In sum, our network structure is significantly different from any of the existing approaches we built on. Additionally, to our knowledge, our method is also unique in the image completion area. \\n\\nBecause of the novel architecture/method, our model achieves significantly better performance than state-of-the-art approaches. First, our model is the first one that can complete face images at 1024x1024, while the state-of-the-art (CTX, Yu et al., 2018) can only handle 256x256 images. By running a user study, which is currently the \\u201cgold standard\\u201d to evaluate GANs, we found our model outperformed CTX in terms of visual quality at 256x256 resolution. Second, our model can control multiple attributes of the synthesized content (including subtle facial expressions) while other completion models can only produce random content. Third, our model does not need post-processing and can generate completed images directly, while other approaches often have to post-process images (e.g. Iizuka et al., 2017) or paste synthesized content into the original context (e.g. Yu et al., 2018).\\n\\n\\u201cDetailed experimental comparisons with more state-of-the-arts (e.g., RLA, Zhao et al., TIP 2018, 3D-PIM, Zhao et al., IJCAI 2018) are needed to justify the superiority of the proposed method\\u201d\\n\\nThanks. We will include these works in our references and compare with them in future experiments.\\n\\n\\u201cMore in-the-wild qualitative and quantitative experiments on recent benchmarks with large occlusion variations are needed to verify the efficacy of the proposed method.\\u201d\\n\\nAgreed, this is a good suggestion. But our current experiments followed the standard of experiments in state-of-the-art works (e.g. Pathak et al., 2016, Iizuka et al., 2017, Yu et al., 2018, etc.) and tested the performance of our model for various challenging mask types including center square, random rectangular, and arbitrary hand-drawn masks.\\n\\n\\u201cHow did authors update each component and ensure stable yet fast convergence while optimizing the whole GAN-based framework?\\u201d\\n\\nWe started with the empirical parameters of existing approaches and updated them by trial and error. \\n\\n\\u201cCan the proposed method solve other challenging in-the-wild facial variations except occlusion? e.g., pose, expression, lighting, noise, etc.\\u201d\\n\\nThis is an interesting idea. We focused on solving the face completion (or \\u201cinpainting\\u201d) problem in this paper. But it would be great if we could apply our model to other tasks. This is left for our future work.\"}", "{\"title\": \"This work uses GANs to recover clean faces from occluded counterparts. The effectiveness of the proposed method is verified qualitatively and quantitatively on CelebA-HQ. The proposed framework can be generalized to several face-related tasks, such as unconstrained face recognition. Although the novelty of the method is not really impressive, the proposed method seems to be useful for face-related applications and the experimental results are convincing to me.\", \"review\": \"This work uses GANs to recover clean faces from occluded counterparts.
The effectiveness of the proposed method is verified qualitatively and quantitatively on CelebA-HQ. The proposed framework can be generalized to several face-related tasks, such as unconstrained face recognition. Although the novelty of the method is not really impressive, the proposed method seems to be useful for face-related applications and the experimental results are convincing to me.\", \"pros\": [\"This method is simple, apparently effective and is a nice use of GANs for a practical task. The paper is written clearly and the English is fine.\"], \"cons\": [\"My main concern with this paper is regarding the novelty. The authors seem to claim a novel GAN architecture by using an adversarial auto-encoder-based architecture. However, it is not clear to me what aspect of their GAN is particularly new.\", \"Missing experimental comparisons with state-of-the-arts. Detailed experimental comparisons with more state-of-the-arts (e.g., RLA, Zhao et al., TIP 2018, 3D-PIM, Zhao et al., IJCAI 2018) are needed to justify the superiority of the proposed method.\", \"Missing more in-the-wild comparisons in the Experiment section. This paper mainly performed experiments on CelebA-HQ. More in-the-wild qualitative and quantitative experiments on recent benchmarks with large occlusion variations are needed to verify the efficacy of the proposed method.\"], \"additional_comments\": [\"How did authors update each component and ensure stable yet fast convergence while optimising the whole GAN-based framework?\", \"Can the proposed method solve other challenging in-the-wild facial variations except occlusion? e.g., pose, expression, lighting, noise, etc.\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Review\", \"review\": \"The paper proposes a complex generative framework for image completion (particularly human face completion). It aims at solving the following challenges: 1) completing the human face at both low and high resolution; 2) controlling the attributes of the synthetic content; 3) avoiding the need for complex post-processing. To achieve this, the paper proposes a progressively attentive GAN to complete face images at high resolution with multiple controllable attributes in a single forward pass without post-processing. In particular, it introduces a frequency-oriented attentive module (FAM) to attend to finer details.\\n\\nThe method seems interesting; however, it seems to make only a slight change relative to ProGAN (ICLR' 18 https://arxiv.org/abs/1710.10196). A similar idea can also be found in many other papers, e.g., Wang et al. High-Resolution Image Synthesis and Semantic Manipulation with Conditional GANs, CVPR' 18. \\n\\nThe authors should \\n1) clarify why this paper makes a non-incremental contribution: what is the major novelty compared with these existing works? \\n2) explain why the frequency attention module will yield better results;\\n3) improve the experiments by comparing with stronger baselines: consider at least one or two of these state-of-the-art approaches. Also, in my opinion, model size and training time need to be compared as well.\\n\\nAlso the experimental results did not demonstrate better performance of the proposed approach.
Why is that?\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}", "{\"title\": \"High quality results but limited novelty. Need more evidence of improvements over previous methods\", \"review\": \"This paper proposes a new method for face completion using progressive GANs. The novelty seems very limited compared with previous methods. The results did not significantly outperform previous methods such as CTX in terms of visual quality. In addition, some of the features of the proposed method were not evaluated properly.\\n\\n1. The frequency attention module is not convincing. The visualization of the attention features looks like normal features in a neural network. Also, in Figure 8, the quality of results with and without FAM look very similar. These 4 images were selected from 3000 test images, but the difference is too small to show the benefit of FAM. \\n\\n2. In figure 8, it is unclear how the performance changes with each loss. Probably the results without L_bdy, L_rec, L_feat should be analyzed separately. \\n\\n3. In figure 6, the results compared to CTX look similar. And the figure is too small to see the details. For example, from row 1, the result by CTX seems even better. \\n\\n4. How many images were used in the user study? Did each subject evaluate the entire test set of 3009 images?\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}" ] }
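Several exchanges in this record turn on what a "frequency-oriented attentive module" does. The sketch below is one plausible reading only — emphatically NOT the paper's FAM, whose actual design lives in the paper's Section 3.2.1 and is not reproduced here: at each growth step, the upsampled coarse output carries the low-frequency structure, and a learned attention map gates how much newly synthesized high-frequency detail is added on top.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoarseToFineAttention(nn.Module):
    """Hypothetical stand-in for a frequency-oriented attentive module."""

    def __init__(self, channels):
        super().__init__()
        self.detail = nn.Conv2d(channels, channels, 3, padding=1)  # proposes a high-frequency residual
        self.attend = nn.Conv2d(channels, 1, 1)                    # per-pixel attention over that residual

    def forward(self, coarse):
        # Low-frequency structure carried over from the previous (lower-resolution) stage.
        up = F.interpolate(coarse, scale_factor=2, mode='bilinear', align_corners=False)
        residual = self.detail(up)               # candidate fine detail at the new resolution
        mask = torch.sigmoid(self.attend(up))    # ~1 where detail should be added (e.g. eyes, hair)
        return up + mask * residual              # coarse content plus attended detail
```

Stacked across progressively grown resolutions, modules of this shape let later stages spend their capacity on detail-rich regions, which is consistent with the behavior the authors describe for their Figure 1.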
rkle3i09K7
Robust Determinantal Generative Classifier for Noisy Labels and Adversarial Attacks
[ "Kimin Lee", "Sukmin Yun", "Kibok Lee", "Honglak Lee", "Bo Li", "Jinwoo Shin" ]
Large-scale datasets may contain significant proportions of noisy (incorrect) class labels, and it is well-known that modern deep neural networks poorly generalize from such noisy training datasets. In this paper, we propose a novel inference method, Deep Determinantal Generative Classifier (DDGC), which can obtain a more robust decision boundary under any softmax neural classifier pre-trained on noisy datasets. Our main idea is inducing a generative classifier on top of hidden feature spaces of the discriminative deep model. By estimating the parameters of the generative classifier using the minimum covariance determinant estimator, we significantly improve the classification accuracy, with neither re-training of the deep model nor changing its architecture. In particular, we show that DDGC not only generalizes well from noisy labels, but also is robust against adversarial perturbations due to its large margin property. Finally, we propose the ensemble version of DDGC to improve its performance, by investigating the layer-wise characteristics of the generative classifier. Our extensive experimental results demonstrate the superiority of DDGC given different learning models optimized by various training techniques to handle noisy labels or adversarial samples. For instance, on the CIFAR-10 dataset containing 45% noisy training labels, we improve the test accuracy of a deep model optimized by the state-of-the-art noise-handling training method from 33.34% to 43.02%.
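The inference idea in this abstract — fit a robust, LDA-style generative classifier on features extracted by the pre-trained network — can be sketched in a few lines. This is an illustrative approximation only, not the paper's DDGC (which additionally ensembles over layers and has other components); `features` is assumed to be an (N, d) array of penultimate-layer activations extracted in one forward pass.

```python
import numpy as np
from sklearn.covariance import MinCovDet

def fit_generative_classifier(features, labels):
    # Per-class robust mean and covariance via the minimum covariance
    # determinant (MCD) estimator, so mislabeled outliers are down-weighted;
    # covariances are tied (averaged) across classes, as in LDA.
    classes = np.unique(labels)
    means, covs = {}, []
    for c in classes:
        mcd = MinCovDet().fit(features[labels == c])
        means[c] = mcd.location_
        covs.append(mcd.covariance_)
    precision = np.linalg.pinv(np.mean(covs, axis=0))
    return classes, means, precision

def predict(x, classes, means, precision):
    # LDA decision rule under uniform priors: pick the class whose robust
    # mean is closest in Mahalanobis distance under the tied covariance.
    dists = [(x - means[c]) @ precision @ (x - means[c]) for c in classes]
    return classes[int(np.argmin(dists))]
```

Because only feature extraction touches the deep network, this is consistent with the abstract's claim that no re-training or architecture change is needed; the rebuttals below note the cost is essentially one forward pass over the training set.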
[ "Noisy Labels", "Adversarial Attacks", "Generative Models" ]
https://openreview.net/pdf?id=rkle3i09K7
https://openreview.net/forum?id=rkle3i09K7
ICLR.cc/2019/Conference
2019
{ "note_id": [ "HJg8srvHgV", "H1gwRCWxJ4", "H1gpKAWxJV", "S1eqi1XYRm", "rkgNYrWt07", "rklKNDqrAQ", "Skx-bI5HCX", "H1xd_45SCm", "B1elymcSCX", "HJe1O6BchX", "B1gcyfVOn7", "SyxhCjde3Q" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1545069982473, 1543671502554, 1543671429505, 1543217058151, 1543210363815, 1542985520518, 1542985209499, 1542984815656, 1542984408338, 1541197158538, 1541059041748, 1540553684505 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper677/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper677/Authors" ], [ "ICLR.cc/2019/Conference/Paper677/Authors" ], [ "ICLR.cc/2019/Conference/Paper677/Authors" ], [ "ICLR.cc/2019/Conference/Paper677/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper677/Authors" ], [ "ICLR.cc/2019/Conference/Paper677/Authors" ], [ "ICLR.cc/2019/Conference/Paper677/Authors" ], [ "ICLR.cc/2019/Conference/Paper677/Authors" ], [ "ICLR.cc/2019/Conference/Paper677/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper677/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper677/AnonReviewer2" ] ], "structured_content_str": [ "{\"metareview\": \"While the paper contains interesting ideas, the reviewers agree the experimental study can be improved.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"reject\"}", "{\"title\": \"After First Revision\", \"comment\": \"Dear AnonReviewer2,\\n\\nWe hope that you found our rebuttal/revision for you and other reviewers in common. \\n\\nIf you have any remaining questions/concerns, please do not hesitate to let us know and we would be happy to answer.\\n\\nThank you very much,\\nAuthors\"}", "{\"title\": \"After First Revision\", \"comment\": \"Dear AnonReviewer1,\\n\\nWe hope that you found our rebuttal/revision for you and other reviewers in common. \\n\\nIf you have any remaining questions/concerns, please do not hesitate to let us know and we would be happy to answer.\\n\\nThank you very much,\\nAuthors\"}", "{\"title\": \"Response for VAT\", \"comment\": \"Dear AnnoReviewer3,\\n\\nThank you very much again for your clarification and suggestion.\\n\\nTo address multiple reviewers\\u2019 concerns in common, we follow the same experimental setups of Co-teaching [1] (the most recent related work), where the authors did not consider VAT [2]. However, your suggested experiments with VAT should be very interesting, and we will add them to the final draft. As evidenced in our heavy experimental results, we strongly believe that our training-agnostic method can also improve the performance of the deep models trained with VAT, e.g., Co-teaching + VAT.\\n\\nSincerely,\\nAuthors\\n\\n[1] Han, B., Yao, Q., Yu, X., Niu, G., Xu, M., Hu, W., Tsang, I. and Sugiyama, M., Co-teaching: robust training deep neural networks with extremely noisy labels. In NIPS. 2018.\\n\\n[2] T. Miyato, S. Maeda, M. Koyama, and S. Ishii. Virtual adversarial training: A regularization method for supervised and semi-supervised learning. ICLR, 2016.\"}", "{\"title\": \"\\\"Comparison with VAT\\\" belongs one of the main directions in deep learning with noisy labels.\", \"comment\": \"Hi Authors,\\n\\nI appreciated your heavy revision. Please keep in mind that \\\"VAT\\\" is previously proposed for semi-supervised learning. 
However, it can be empirically used for deep learning with noisy labels.\\n\\nThere are three ways to handle noisy labels: first, the data perspective (Backward Correction and so on); second, the training perspective (MentorNet, Co-teaching and so on); and lastly, the regularization perspective (VAT, Mean Teacher and so on).\\n \\nWe have already verified that MentorNet [1] + VAT and Co-teaching [2] + VAT significantly boost the performance of MentorNet and Co-teaching. That is why I mention this. Due to time limits, I can understand that you may not compare with this baseline.\\n\\nReferences:\\n\\n[1] L. Jiang, Z. Zhou, T. Leung, L. Li, and L. Fei-Fei. Mentornet: Learning data-driven curriculum for very deep neural networks on corrupted labels. In ICML, 2018.\\n\\n[2] B. Han, Q. Yao, X. Yu, G. Niu, M. Xu, W. Hu, I. Tsang, M. Sugiyama. Co-teaching: Robust training of deep neural networks with extremely noisy labels. In NeurIPS, 2018.\\n\\nRegards,\\nAnonReviewer3\"}", "{\"title\": \"Common response for all reviewers\", \"comment\": \"We very much appreciate the reviewers' valuable comments, effort, and time. We first address common concerns of the reviewers and then other issues for each individual one separately. Revised parts in the new draft are colored red (in particular, we updated or newly added the abstract, Sections 1, 2.1 and 3, Appendices B, E, F, and Tables 2, 3, 4, 5, 6, 7, 8, 10, 11, 12 and 13).\\n\\nQ1. New results for comparison with more state-of-the-art training methods.\\n\\nFollowing AnonReviewer1/3\\u2019s suggestions, we added more experimental results on other training methods including D2L [1], Co-teaching [2] and MentorNet [5], which have achieved state-of-the-art performance on noisy-labeled datasets (see Table 3 and Table 4 of our revised draft). As expected, the new results also confirm that our inference method is training-agnostic, i.e., it can improve the performance of any prior training method. Here, we remark that Table 3 only considers methods training a single network (e.g., D2L [1] and Forward/Backward [3]), while those in Table 4 train multiple networks, i.e., an ensemble of classifiers (Decoupling [4] and Co-teaching [2]) or a meta-learning model (MentorNet [5]). We consider these two different setups to follow the same experimental setups as prior works [1] and [2], respectively.\\n\\nQ2. New results for class-conditional (or flip) noise.\\n\\nFollowing AnonReviewer 2/3\\u2019s suggestions, we reported the experimental results on the class-conditional (called flip) noise setups of [2] (see Table 2 of our revised draft). Our method still outperforms all baseline methods by far even under such asymmetric noise setups. This confirms that our noise-agnostic method should be useful in practice.\\n\\n[1] Ma, X., Wang, Y., Houle, M.E., Zhou, S., Erfani, S.M., Xia, S.T., Wijewickrema, S. and Bailey, J., Dimensionality Driven Learning with Noisy Labels. In ICML, 2018.\\n\\n[2] Han, B., Yao, Q., Yu, X., Niu, G., Xu, M., Hu, W., Tsang, I. and Sugiyama, M., Co-teaching: Robust training of deep neural networks with extremely noisy labels. In NIPS, 2018.\\n\\n[3] G. Patrini, A. Rozza, A. Menon, R. Nock, and L. Qu. Making deep neural networks robust to label noise: A loss correction approach. In CVPR, 2017.\\n\\n[4] Eran Malach and Shai Shalev-Shwartz. Decoupling \\u201cwhen to update\\u201d from \\u201chow to update\\u201d. In NIPS, 2017.\\n\\n[5] Jiang, L., Zhou, Z., Leung, T., Li, L.J. and Fei-Fei, L., MentorNet: Regularizing very deep neural networks on corrupted labels.
In ICML, 2018.\\n\\nThanks a lot,\\nAuthors\"}", "{\"title\": \"Responses for AnonReviewer2\", \"comment\": \"We very much appreciate your valuable comments, effort and time on our paper. Our responses to all your questions are provided below. Our major revisions in the new draft are colored red.\\n\\nQ1. Comparison with [1, 2, 3, 4].\\n\\nThe main difference between our method and [1, 2] is that we do not directly train the Gaussian mixture model, i.e., the generative classifier, but instead post-process it on hidden feature spaces of pre-trained deep models. In addition, we study a robust inference method to handle noisy labels in training samples, while they did not. Next, [3, 4] also assume clean training labels, and aim for detecting abnormal test samples after \\u2019clean\\u2019 training. Therefore, a comparison with [1, 2, 3, 4] is not straightforward as our goal is different. We clarified this in Section 2.1 of the revised draft.\\n\\nQ2. Computational cost.\\n\\nAs you suspected, estimating the parameters of LDA is very cheap compared to training the original deep models like ResNet and DenseNet, since it requires only one forward pass to extract the hidden features.\\n\\nQ3. Version of backward/forward losses.\\n\\nAs mentioned in Appendix B of the previous draft, we use the estimated noise transition matrices for the backward/forward losses. We clarified more details of the experimental setups in Appendix B of the revised draft.\\n\\nQ4. Updated abstract and performance evaluation.\\n\\nAs AnonReviewer 3 mentioned, our main contribution is developing a new inference method which can be used under any pre-trained deep model. In other words, our goal is not to outperform prior training methods but to complement them, i.e., our inference method can improve the performance of any prior training method. Nevertheless, we agree with your comments that it is more meaningful to emphasize our improvement over the state-of-the-art training methods. In the abstract of the revised draft, we report our improvement over Co-teaching [5], which is the most recent state-of-the-art training method.\\n\\nQ5. Evaluation on adversarial attacks.\\n\\nIn the revised draft, we also consider optimization-based adaptive attacks against our method under the black-box setup (see Table 5) and the white-box setup (see Table 10). In both setups, our inference method is shown to be more robust compared to the softmax inference. We further show that our method also improves the robustness of deep models optimized by adversarial training (see Tables 6 and 11). Such experimental results support our claim that the proposed generative classifier can improve the robustness against adversarial attacks as it utilizes multiple hidden features (i.e., it is harder to attack all of them). We very much appreciate your valuable comments again.\\n\\n[1] Wen, Y., Zhang, K., Li, Z. and Qiao, Y., A discriminative feature learning approach for deep face recognition. In ECCV, 2016.\\n\\n[2] Wan, W., Zhong, Y., Li, T. and Chen, J., Rethinking feature distribution for loss functions in image classification. In CVPR, 2018.\\n\\n[3] Lee, K., Lee, K., Lee, H. and Shin, J., A Simple Unified Framework for Detecting Out-of-Distribution Samples and Adversarial Attacks. In NIPS, 2018.\\n\\n[4] Ma, X., Li, B., Wang, Y., Erfani, S.M., Wijewickrema, S., Houle, M.E., Schoenebeck, G., Song, D. and Bailey, J. Characterizing adversarial subspaces using local intrinsic dimensionality.
In ICLR, 2018.\\n\\n[5] Bo Han, Quanming Yao, Xingrui Yu, Gang Niu, Miao Xu, Weihua Hu, Ivor Tsang, and Masashi Sugiyama. Co-teaching: Robust training of deep neural networks with extremely noisy labels. In NIPS, 2018.\\n\\nThanks a lot,\\nAuthors\"}", "{\"title\": \"Responses for AnonReviewer3\", \"comment\": \"We very much appreciate your valuable comments, effort and time on our paper. Our responses to all your questions are provided below. Our major revisions in the new draft are colored red.\\n\\nQ1. More related works\\n\\nWe updated the introduction by including more recent works [1, 2, 3, 4, 5] related to deep learning with noisy labels. In the previous draft, we only included the relevant literature which involves a single network/classifier. The updated related works utilize multiple networks, e.g., an ensemble of classifiers or a meta-learning model. We also added new experimental results for them in Table 4 of the revised draft, as we mentioned in our common response to all reviewers. Thank you very much for the suggestions.\\n\\nQ2. Comparison with VAT [6].\\n\\nWe remark that the targeted setting of VAT [6] is different from ours in that it is designed for improving the performance on semi-supervised learning, while our main goal is handling noisy labels in the training dataset. Due to this, we skip the comparison with VAT. Instead, as we mentioned in our common response to all reviewers, we consider more training baselines (such as MentorNet [2] and Co-teaching [3]) focusing on handling noisy labels, and show that our inference method can improve all of them.\\n\\nQ3. L-BFGS adversarial attacks [8].\\n\\nWe remark that L-BFGS [8] is known to fail easily due to the near-zero gradient of the loss function [7]. Instead, we consider the CW attack [7], which is known to be much stronger.\\n\\n[1] Jacob Goldberger and Ehud Ben-Reuven. Training deep neural-networks using a noise adaptation layer. In ICLR, 2017.\\n\\n[2] Lu Jiang, Zhengyuan Zhou, Thomas Leung, Li-Jia Li, and Li Fei-Fei. Mentornet: Regularizing very deep neural networks on corrupted labels. In ICML, 2018.\\n\\n[3] Bo Han, Quanming Yao, Xingrui Yu, Gang Niu, Miao Xu, Weihua Hu, Ivor Tsang, and Masashi Sugiyama. Co-teaching: Robust training of deep neural networks with extremely noisy labels. In NIPS, 2018.\\n\\n[4] Mengye Ren, Wenyuan Zeng, Bin Yang, and Raquel Urtasun. Learning to reweight examples for robust deep learning. In ICML, 2018.\\n\\n[5] Eran Malach and Shai Shalev-Shwartz. Decoupling \\u201cwhen to update\\u201d from \\u201chow to update\\u201d. In NIPS, 2017.\\n\\n[6] T. Miyato, S. Maeda, M. Koyama, and S. Ishii. Virtual adversarial training: A regularization method for supervised and semi-supervised learning. ICLR, 2016.\\n\\n[7] N. Carlini and D. Wagner. Towards evaluating the robustness of neural networks. In IEEE Symposium on SP, 2017.\\n\\n[8] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus. Intriguing properties of neural networks. In ICLR, 2013.\\n\\nThanks a lot,\\nAuthors\"}", "{\"title\": \"Responses for AnonReviewer1\", \"comment\": \"We very much appreciate your valuable comments, effort and time on our paper. Our responses to all your questions are provided below. Our major revisions in the new draft are colored red.\\n\\nQ1. Updated proof.\\n\\nTo address your concerns, we provided more detailed explanations of our proof arguments in the revised draft (see Appendix F). We also re-organized our proof completely for better understanding.\\n\\nQ2.
Relation to the Tandem approach.\\n\\nAs you pointed out, our method is somewhat related to the Tandem approach [1] in that both post-process a generative model on top of hidden features extracted by DNNs. However, the main purpose of Tandem is not handling noisy labels. In particular, the Tandem approaches utilize the EM algorithm, which can be highly influenced by outliers, while our method is specialized to be robust against them. We clarified this in Section 2 of the revised draft.\\n\\n[1] Hermansky, H., Ellis, D.P. and Sharma, S., Tandem connectionist feature extraction for conventional HMM systems. In IEEE ICASSP, 2000.\\n\\nThanks a lot,\\nAuthors\"}", "{\"title\": \"Robust Determinantal Generative Classifier for Noisy Labels and Adversarial Attacks\", \"review\": \"Quality: A simple approach accompanied by a theoretical justification and a large number of experimental results. The theoretical justification is spread out in the main body and appendices. The proof given in the appendix is overly short and not detailed enough. The large number of experiments, although welcome, needs to be properly discussed and related to state-of-the-art numbers, including any work that the authors themselves refer to in this submission. The approach is not linked to the so-called Tandem approach that was/is popular in speech recognition, where a generative model (GMM) is trained on top of features extracted by a neural network model.\\n\\nClarity: The simple approach is clearly described. However, the theoretical justification and experimental results are not.\\n\\nOriginality: The work is moderately original.\\n\\nSignificance: It is hard to assess given the current submission.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Good paper, but lacking related works in deep learning with noisy labels and lacking important baselines.\", \"review\": \"This paper formulates a new inference method called DDGC for noisy labels and adversarial attacks. Their main idea is to induce a generative classifier on top of hidden feature spaces of the discriminative deep model. To improve the robustness, their DDGC model leverages the minimum covariance determinant (MCD) estimator. Besides, the authors propose Theorem 1 to justify their MCD-based generative classifier.\", \"pros\": \"1. The authors find a new angle for learning with noisy labels. Motivated by the fact that an LDA-like generative classifier assuming a class-wise unimodal distribution might be robust, they introduce a generative classifier on top of hidden feature spaces of the discriminative deep model.\\n\\n2. The authors perform numerical experiments to demonstrate the effectiveness of their framework on benchmark datasets, and their experimental results support their previous claims.\", \"cons\": \"We have two questions in the following.\\n\\n1. Related works: In deep learning with noisy labels, there are three main directions, including the small-loss trick [1-3], estimating the noise transition matrix [4-6], and explicit and implicit regularization [7-9]. I would appreciate it if the authors could survey and compare more baselines in their paper instead of listing some basic ones.\\n\\n2. Experiment: \\n2.1 Baselines: For noisy labels, the authors should add MentorNet [1] as a baseline (https://github.com/google/mentornet). From my own experience, this baseline is very strong. At the same time, they should compare with VAT [7].
For adversarial attacks, the authors should compare with the attack from [10], and list L-BFGS [11] as a basic baseline.\\n2.2 Datasets: For datasets, I think the authors should first compare their methods on symmetric and asymmetric noisy data. Besides, the current paper only evaluates on vision datasets. The authors are encouraged to evaluate on at least one NLP dataset.\", \"references\": \"[1] L. Jiang, Z. Zhou, T. Leung, L. Li, and L. Fei-Fei. Mentornet: Learning data-driven curriculum for very deep neural networks on corrupted labels. In ICML, 2018.\\n\\n[2] M. Ren, W. Zeng, B. Yang, and R. Urtasun. Learning to reweight examples for robust deep learning. In ICML, 2018.\\n\\n[3] B. Han, Q. Yao, X. Yu, G. Niu, M. Xu, W. Hu, I. Tsang, M. Sugiyama. Co-teaching: Robust training of deep neural networks with extremely noisy labels. In NIPS, 2018.\\n\\n[4] G. Patrini, A. Rozza, A. Menon, R. Nock, and L. Qu. Making deep neural networks robust to label noise: A loss correction approach. In CVPR, 2017.\\n\\n[5] J. Goldberger and E. Ben-Reuven. Training deep neural-networks using a noise adaptation layer. In ICLR, 2017.\\n\\n[6] S. Sukhbaatar, J. Bruna, M. Paluri, L. Bourdev, and R. Fergus. Training convolutional networks with noisy labels. In ICLR workshop, 2015.\\n\\n[7] T. Miyato, S. Maeda, M. Koyama, and S. Ishii. Virtual adversarial training: A regularization method for supervised and semi-supervised learning. In ICLR, 2016.\\n\\n[8] A. Tarvainen and H. Valpola. Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. In NIPS, 2017.\\n\\n[9] S. Laine and T. Aila. Temporal ensembling for semi-supervised learning. In ICLR, 2017.\\n\\n[10] N. Carlini and D. Wagner. Towards evaluating the robustness of neural networks. In IEEE Symposium on SP, 2017.\\n\\n[11] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus. Intriguing properties of neural networks. In ICLR, 2013.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}
Without a proper comparison (formal and experimental) with these lines of work, the paper is incomplete.\\n\\nTheorem 1 supports the proposed method well and it is well explained. I did not check the proofs in the appendix.\\n\\nRegarding the presentation, I found it odd to have some experimental results (page 5) before the experiments section has even started.\\n\\n== Experiments\\n\\nThe authors did not comment on the computational overhead of training their LDA. But I assume it is very cheap compared to training e.g. the ResNet, correct?\\n\\nI also did not find an explanation of which version of the backward/forward losses [Patrini et al. 17] is used in the experiments: are the noise transition matrices estimated on the data or assumed to be known (for fair comparison, I would do the former)?\\n\\nI disagree on the importance of the numbers reported in the abstract: DenseNet on Cifar10 with 60% noise goes from 53.34 to 74.72. This is the improvement over the weakest possible baseline, i.e. no method to defend against noise! Looking at Table 3, which is on ResNets, I will make this point clear. With 60% noise on CIFAR10, DDGC improves 60.05 -> 71.38, while (hard) bootstrap and forward do better. Even more, it seems that forward always does better than DDGC with noise 60% on every dataset. Therefore, I don\\u2019t find it interesting to report how DDGC improves upon \\u201cno baseline\\u201d, because known methods do even better. Yet, it is interesting --- and I find this to be a contribution of the paper --- that DDGC can be used in combination with prior work to boost performance even further.\\n\\nA missing empirical analysis is on class-conditional noise (see for example Patrini et al. 17 for a definition). An additional column on the table showing that the algorithm can also work in this case would improve the confidence that the proposed method is useful in practice. Uniform noise is the least realistic assumption for label noise.\\n\\nRegarding the experiments on adversarial examples, I am not convinced of their relevance at all. There are now dozens of defence methods that work (partially) for improving robustness. I don\\u2019t think it is of any practical use to show that a new algorithm (such as DDGC) provides some defence compared to no defence. A proper baseline should have been compared.\\n\\nOne more unclear but important point: is Table 3 obtained by white-box attacks on the ResNet/DenseNet but oblivious of the MCD? If so, I don\\u2019t think such an experiment tells the whole story: as the MCD would arguably also be deployed for classification, the attacker would also target it.\\n\\nAdditionally, the authors state \\u201cwe remark that accessing the parameters of the generative classifiers [\\u2026] is not a mild assumption since the information about training data is required to compute them\\u201d. I don\\u2019t follow this argument: this is just part of the classifier. White-box attacks are by definition performed with the knowledge of the model, so what is the difference here?\\n\\nTable 8 raises some concerns. I appreciate the idea of testing full white-box adversarial attacks here. But I don\\u2019t understand how it is possible that DDGC is more robust, with higher adversarial test accuracy, than in Table 3.\\n\\n[A] Wen, Yandong, et al. \\\"A discriminative feature learning approach for deep face recognition.\\\" European Conference on Computer Vision. Springer, Cham, 2016.\\n[B] Wan, Weitao, et al. 
\\\"Rethinking feature distribution for loss functions in image classification.\\\" Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
SyxknjC9KQ
Dense Morphological Network: An Universal Function Approximator
[ "Ranjan Mondal", "Sanchayan Santra", "Bhabatosh Chanda" ]
Artificial neural networks are built on the basic operations of linear combination and non-linear activation function. Theoretically, this structure can approximate any continuous function with a three-layer architecture, but in practice learning the parameters of such a network can be hard. Also, the choice of activation function can greatly impact the performance of the network. In this paper we propose to replace the basic linear combination operation with non-linear operations that do away with the need for an additional non-linear activation function. To this end we propose the use of elementary morphological operations (dilation and erosion) as the basic operations in neurons. We show that networks with these morphological operations (denoted Morph-Net) can approximate any smooth function while requiring fewer parameters than normal neural networks. The results show that our network performs favorably when compared with similarly structured networks. We have carried out our experiments on MNIST, Fashion-MNIST, CIFAR10 and CIFAR100.
[ "Mathematical Morphology", "Neural Network", "Activation Function", "Universal Aproximatimation." ]
https://openreview.net/pdf?id=SyxknjC9KQ
https://openreview.net/forum?id=SyxknjC9KQ
ICLR.cc/2019/Conference
2019
{ "note_id": [ "BkeaD_iBxV", "ryl9LrpkJV", "H1x7832kJ4", "rkeG3Oj1y4", "Byl-KT0pRQ", "H1eSNCd6p7", "BkeaejOa6Q", "H1gh48_TTX", "r1gIG8_6pQ", "B1gHuIGwpm", "Syxd-gzDTQ", "S1l6Toan3Q", "HJeCtix9hQ", "r1lgUd5v9X", "BygipUQr5Q" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_review", "official_comment", "comment" ], "note_created": [ 1545087076956, 1543652689595, 1543650379438, 1543645354155, 1543527800571, 1542454828627, 1542454005146, 1542452788067, 1542452750434, 1542035052792, 1542033408451, 1541360581322, 1541176197930, 1538922567837, 1538762435234 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper676/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper676/Authors" ], [ "ICLR.cc/2019/Conference/Paper676/Authors" ], [ "ICLR.cc/2019/Conference/Paper676/Authors" ], [ "ICLR.cc/2019/Conference/Paper676/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper676/Authors" ], [ "ICLR.cc/2019/Conference/Paper676/Authors" ], [ "ICLR.cc/2019/Conference/Paper676/Authors" ], [ "ICLR.cc/2019/Conference/Paper676/Authors" ], [ "ICLR.cc/2019/Conference/Paper676/AnonReviewer4" ], [ "ICLR.cc/2019/Conference/Paper676/Authors" ], [ "ICLR.cc/2019/Conference/Paper676/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper676/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper676/Authors" ], [ "~Elad_Eban1" ] ], "structured_content_str": [ "{\"metareview\": \"This work presents an interesting take on how to combine basic functions to lead to better activation functions. While the experiments in the paper show that the approach works well compared to the baselines that are used as reference, reviewers note that a more adequate assessment of the contribution would require comparing to stronger baselines or switching to tasks where the chosen baselines are indeed performing well. Authors are encouraged to follow the many suggestions of reviewers to strengthen their work.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Interesting ideas that requires to more adequate baselines\"}", "{\"title\": \"Thank you for the update\", \"comment\": \"Thank you for the update.\\n\\nModifying the network to accept 2D input is straightforward. But the theorem we have proved for the dense single layer case will not hold there. On the other hand, if we use 2D morphological operations in the network a single hidden layer will not be sufficient. So, we have to extend the network to the multi layer case. But then the same problems(case 1. over fitting; case2: slow learning) of the dense multi layer case arises in this situation also. \\n\\nHowever, as I asked reviewer3, It would be nice If we get some dataset names, where dense networks achieve state-of-art performances.\"}", "{\"title\": \"reproducibility challenge\", \"comment\": \"Thank you for implementing DenMo(Morph-Net).\\n\\nBy a quick look at your code I found the following.\\n1. For the initialization of weights(or structuring elements) we use glorot_uniform(https://keras.io/initializers/) \\n2. All the bias are initialized with zeros\\n3. For each dilation or erosion there is a bias, which means of there are n number of dilation/erosion node then there are n number of bias parameter. \\n4. 
For the optimization, use the 'adam' optimizer with the following settings: Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=None, decay=0.0, amsgrad=False)\\n5. The only pre-processing we apply is normalizing the input to [0,1] (see the configuration sketch after this record)\\n\\nLet me know if there is any further confusion after changing the code. However, the code link will be provided once the review process is over. \\n\\nThank you.\"}", "{\"title\": \"Thank you for the update\", \"comment\": \"Thank you for the update.\\nIt would be nice if we could get some dataset names where dense networks achieve state-of-the-art performance.\"}", "{\"title\": \"response to answers from authors\", \"comment\": \"Thanks for clarifying this reviewer's questions.\\nI think the linear combination of erosion/dilation units' outputs, which can approximate any continuous distribution, is novel but not substantially novel enough by itself. Ideally, it would be interesting to see support from strong empirical results.\\n\\nWhile it is completely valid that the proposed method is not necessarily better than CNNs designed for image recognition tasks, would it be possible to compare on tasks where dense fully connected nets achieve state-of-the-art and show the effects (performance) of swapping in the proposed dilation/erosion network?\"}", "{\"title\": \"Revision\", \"comment\": [\"Ablation study on other data sets\", \"Discussion on multi-layer networks and gradient propagation is added.\", \"[1] Ref added\", \"Title has been changed from \\\"Morph-Net\\\" to \\\"DENSE MORPHOLOGICAL NETWORK\\\"\", \"Our contribution is highlighted in the introduction\"]}", "{\"title\": \"Weak but with respect to deep CNNs.\", \"comment\": \"Thank you for reviewing our paper with valuable and detailed comments.\", \"q1\": \"\\\"The paper is a nice read, although some specific points could be clarified. For instance, it is not clear how the structuring elements of the dilation/erosion are learned. Are they learned simply through backpropagation? \\\"\\n>> You are right: the structuring elements in the dilation-erosion layer and the weights of the linear combination layer are learned through back-propagation. We have added a subsection highlighting the gradient calculation and back-propagation in our network.\", \"q2\": \"\\\"Also, it is not clear to me how Morph-Net differs from the previously proposed morphological neurons?\\\"\\n>> Morphological neurons have been defined in the literature in different ways. Although all of them use dilation and erosion operations, these are usually followed by an additional operation (e.g. an activation function [Ritter and Sussner 1996]). For our network we have defined dilation and erosion neurons that perform only the dilation and erosion operation, respectively. Apart from that, our network also employs an additional linear combination layer. As shown in Theorem 1, these two layers together can approximate any smooth continuous function without requiring additional activation functions. This claim cannot be made if only morphological neurons are used in the network.\", \"q3\": \"\\\"Empirical evaluation of Morph-Net could be improved as well. In particular, the authors focus on the image classification task. While they show that Morph-Net can outperform other fully connected architectures, the results on CIFAR10/100 seem low compared to convolutional networks. It raises the question of the advantages of Morph-Net over convolutional neural networks?\\\"\\n>> It is true that the convolution networks perform well for images as they are able to extract features based on spatial information. 
However, in this work we have defined our network for flattened input data and densely connected layers. For this reason our network does not have the advantage of conv type of operation. The main aim is to show that this type of network have capabilities similar to artificial neural networks while using less number of parameters. The advantage of this network over CNNs can possibly be shown after defining the dilation and erosion as 2D operations.\", \"q4\": \"\\\"Authors also limit their exploration to 3-layer networks. Why don\\u2019t you explore deeper network for both baseline and Morph-Net?\\\"\\n>> We have proved that using only 3-layer (considering input, dilation-erosion and linear combination layers) network any continuous function can be approximated. That is why we have shown the results using 3-layer networks only. As for going to the multi-layer case, the layers can be stacked in two ways. \\n[Type-I] Multiple dilation-erosion layer, followed by a single linear combination layer at the end.\\n[Type-II] A layer-unit may be defined as Dilation-Erosion layer followed by a linear combination layer. Then this layer-unit may be repeated desired number of times to realize the multi-layer dense morphological network. \\nFor the network of Type-I, it can be argued that the network is performing some combination of opening and closing operation, and their linear combination. As there are dilation-erosion (DE) layers back to back, the problem of gradient propagation is amplified. As a result it takes much more time to train than single layer architecture (Table 6). \\nSimilar explanation doesn't work for Type-II networks. From Figure 7 we see that the network has tendency to overfit.\", \"q5\": \"\\\"Finally, if I am not mistaken, authors use the same set of hyperparameters for the baselines/Morph-Net? It is not clear to me if the hyperparameters are optimal for all the approach? They might give an unfair advantage to one of the baseline or Morph-Net?\\\"\\n>> Yes, we have used same hyperparamenters for the baseline and Morph-Net, because we want to show that our network is more expressive when using similar hyper-parameters. The hyperparameters may not be optimal for any of the network. This is done for comparison purpose only.\", \"q6\": \"\\\"Overall, this paper present a nice idea. Showing the Morph-Net is an universal approximator is a nice result. However, the empirical evaluation could be improved. It is not clear to me at this point if Morph-Net brings a benefit compare to convolutional net for image classification task\\\"\\n>> Since we have not defined 2D Dilation/Erosion in this paper so we refrain ourselves from commenting on this issue, i.e., whether Morph-Net brings a benefit compare to Convolutional or not. However we believe, this is one of the forerunner work and it opens a many directions of future research. \\n\\nThank you again, please let us know if there are any queries or confusion.\"}", "{\"title\": \"Our work is one of the forerunner work and it opens a many directions of future research[Part 2/2]\", \"comment\": \"Q5: \\\" The main result of the paper is that the structuring element can be learned, but there is no discussion on what it is learned. Also, there is no comparison on related approaches that try to learn the structuring element in an end-to-end fashion such as [1].\\n>> Thank you for the reference. we are learning the parameters of the structuring elements in dilation-erosion layer and the parameters of the weighted combination layer. 
However, our main contribution is not the learning of structuring element. The main contribution is to design a dense morphological networks with dilation-erosion and their linear combination having similar expressive power as the artificial neural networks. As we are using morphological operations, learning of structuring elements comes into picture and for our case the size of the structuring element is same as that of the input. This is not the case for [1]. In our paper we are flattening the image (cifar-10, cifar-100) and producing the class label.\", \"q6\": \"\\\"Experiments lack a more thorough comparison with state-of-the-art and at least an ablation study to show that the proposed approach is effective and has merit. For example, what is the relative contribution of using dilation and erosion jointly versus either one of them. What is the comparison with a winner-take-all unit over groups of neurons such as max-pooling?\\\"\\n>> Thank you for the suggestion. We have shown the contribution of dilation and erosion neurons for the toy data only. We have updated the manuscript to show this relative comparison in other data sets.\", \"q7\": \"\\\"It seems that extending the work to multiple layers should be trivial but it is not reported and is left to future investigations. This hints at issues with the optimization and should be discussed, is it related to the binarization mentioned above? \\\"\\n>> You are right, it is not that trivial to extend the work for multiple layers. Since our theoretical justification is on single layer, we have not shown the results with multiple layers. \\nHowever, based on the reviewer's suggestion, we have added some results with multiple layers in Section 5 and \\nTable 6.\\nExtension of this network to multiple layers can be done in two ways. \\n[Type-I] Multiple dilation-erosion layer, followed by a single linear combination layer at the end.\\n[Type-II] A layer-unit may be defined as Dilation-Erosion layer followed by a linear combination layer. Then this layer-unit may be repeated desired number of times to realize the multi-layer dense morphological network. \\nFor the network of Type-I, it can be argued that the network is performing some combination of opening and closing operation, and their linear combination. As there are dilation-erosion (DE) layers back to back, the problem of gradient propagation is amplified. As a result it takes much more time to train than single layer architecture (Table 6). \\nSimilar explanation doesn't work for Type-II networks. From Figure 7 we see that the network has tendency to over-fit.\", \"q8\": \"Overall the idea is interesting but the way the structuring element is learned should be discussed in more details and exemplified visually. Experiments need to be improved and overall applicability is uncertain at this stage.\\n>> Some more experimental results and explanation are incorporated in the revised version. 
\\n\\nThank you again, please let us know if there are any queries or confusion.\"}", "{\"title\": \"Our work is one of the forerunner works and it opens many directions of future research [Part 1/2]\", \"comment\": \"Thank you for reviewing our paper with valuable and detailed comments.\", \"q1\": \"\\\"The authors introduce Morph-Net, a single layer neural network where the mapping is performed using morphological dilation and erosion. I was expecting something applied to convolutional networks as such operators are very popular in image processing, so the naming is a bit misleading.\\\"\\n>> We will update the name of the paper to \\\"Dense Morphological Network: A Universal Function Approximator\\\" to reduce confusion.\", \"q2\": \"\\\"It is shown that the proposed network can approximate any smooth function, assuming a sufficiently large number of hidden neurons, that is a nice result. Clarity should be improved, for example it is mentioned that the structuring element is learned but never clearly explained how and what difficulties it poses.\\\"\\n>> In our network we learn the structuring element and the weights of the linear combination layer using a gradient descent method that minimizes the loss. While learning structuring elements, the following problem may arise.\\nThe dilation (erosion) operation involves a max (min) operation. This implies that, during back-propagation, the gradient of the loss function with respect to the components of a structuring element is zero except for the component attaining the max (min). That means for each data point only one component of the structuring element may be updated. As a result, the learning process may be slow. \\nWe did not notice any other difficulties these dilation-erosion operations may give rise to. Note that, in this work, we have focused only on how the network works in general settings.\", \"q3\": \"\\\"In the main text it is written that alpha is {-1, 1}, which would result in a combinatorial search, but never explained how it is learned in practice. This is shown only in the appendix but it is not clear to me that using a binarization with the weights is not prone to degenerate solutions and/or to learn at all if proper initialization is not used. Did the authors experiment with smooth versions or other form of binarization with straight-through estimator or sampling?\\\"\\n>> The parameter alpha is not learned in our method; it is not even used. It is introduced only for proving the theorems. We only learn the structuring element and the weights of the linear combination layer during training.\", \"q4\": \"\\\"In the proof for theorem 1 it is not clear if the convergence of the proposed\\nnetwork is faster or slower than that of a classic single layer network.\\\"\\n>> Theorem 1 only shows that our network can approximate any continuous function provided there are enough nodes in the dilation-erosion layer. We are not claiming anything regarding the convergence rate. However, it is already mentioned that training our network may be slower at times depending on the dimension of the data.\"}", "{\"title\": \"A nice idea but weak empirical results\", \"review\": \"* Update:\\nThanks for your answer and clarification. While Morph-Net appears novel, the authors only report results for the image classification task and don't achieve as good performance as standard convolutional baselines. Given the current empirical evaluation, I find it hard to assess how significant the contribution is. 
I would encourage the authors to either compare on a task where dense networks achieve state-of-the-art performance or extend their approach to 2D inputs.\\n\\n\\n* Review\\n\\nThis paper introduces Morph-Net, a new architecture that intertwines morphological operators such as dilation/erosion with linear layers. The authors first show that Morph-Nets are universal approximators: a Morph-Net can be expressed as a sum of multi-order hinge functions, which can approximate any continuous function. They then validate Morph-Net empirically on the MNIST, FashionMNIST, CIFAR10 and CIFAR100 datasets. In particular, the authors investigate a 3-layer fully-connected Morph-Net and show that it can outperform its Tanh/Relu/Maxout counterparts.\\n\\nThe paper is a nice read, although some specific points could be clarified. For instance, it is not clear how the structuring elements of the dilation/erosion are learned. Are they learned simply through backpropagation? Also, it is not clear to me how Morph-Net differs from the previously proposed morphological neurons? \\n\\nEmpirical evaluation of Morph-Net could be improved as well. In particular, the authors focus on the image classification task. While they show that Morph-Net can outperform other fully connected architectures, the results on CIFAR10/100 seem low compared to convolutional networks. It raises the question of the advantages of Morph-Net over convolutional neural networks? The authors also limit their exploration to 3-layer networks. Why don\\u2019t you explore deeper networks for both the baseline and Morph-Net? Finally, if I am not mistaken, the authors use the same set of hyperparameters for the baselines/Morph-Net? It is not clear to me if the hyperparameters are optimal for all the approaches; they might give an unfair advantage to one of the baselines or Morph-Net.\\n\\nOverall, this paper presents a nice idea. Showing that Morph-Net is a universal approximator is a nice result. However, the empirical evaluation could be improved. It is not clear to me at this point if Morph-Net brings a benefit compared to convolutional nets for image classification tasks.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"The main idea is to replace the normal artificial neural networks by using basic morphological operations. Non-requirement of activation functions is a by-product.\", \"comment\": \"Thank you for reviewing our paper with valuable and detailed comments.\", \"q1\": \"\\\"This paper proposes to replace the standard RELU/tanh units with a combination of dilation and erosion operations, arguing for the observation that the new operator creates more hyper-planes and therefore has more expressive power\\\"\\n>> In this paper we propose to build networks with basic morphological operations. This gives us the power to build networks with expressive power similar to that of normal artificial neural networks without the need for activation functions, while {\\\\em requiring fewer parameters}. Replacing the standard nonlinear activation function is not the main goal; it may be a by-product of the dilation-erosion operation. However, thanks for pointing this out.\", \"q2\": \"\\\"The paper is interesting and there are encouraging results which show a couple of percentage improvements over relu/tanh units. This paper is also clearly written and easy to understand. However there are two issues:\\n1. 
It is somewhat unclear from the paper what is the main novelty here (compared to existing morpho neurons), is it the learning of the structuring elements? is it the combination of the dilation+erosion operations?\\\"\\n>> The main contributions are as follows.\\n1. The use of a linear combination operation after the dilation-erosion operation. This structure, as shown in Section 3.3, can approximate any continuous function given enough dilation/erosion neurons.\\n2. We have shown that networks built with such layers do not need activation functions.\\n3. The use of a dilation-erosion layer followed by a linear combination layer greatly increases the number of possible decision boundaries. As a result, complex decision boundaries can be learned using a small number of parameters. This is visually shown using a toy dataset in Section 4.1.\\n\\nNote that, in the dilation and erosion layers, we have considered structuring elements only of the same size. However, in the training process we learn the values of the structuring element pixels as well as the weights of the linear combination layer. \\nWe will also add a paragraph highlighting our contribution in the revised version.\", \"q3\": \"\\\"The second issue is that presumably due to the fact that Conv layers are not used, the accuracy on cifar-10 and cifar-100 are significantly lower than state-of-the-art. It would make the paper extremely strong if the improvement translated to CNNs which are performing near the state-of-the-art. What happens if relu units in CNNs were swapped out for the proposed dilation/erosion operators?\\\"\\n>> It is true that the convolution layers perform well for images as they are able to extract features based on spatial information. \\nHowever, in this work we have defined our network for flattened input data. Our network structure and operations are totally different from those of a classical neural network. For instance, in the first layer we take addition (subtraction) with the weights (i.e., the values of the structuring element) instead of multiplication and then take a max (min) instead of a sum to implement the 1-D dilation (erosion) operation. In the next layer we take a weighted combination of the output from this layer. \\nSo, we do not and cannot directly use convolution layers in our network, and just swapping the activation function with dilation/erosion layers will not work. For this reason, we have compared our work with neural networks containing dense layers. For harnessing the spatial information, a 2D dilation-erosion layer may be defined where the structuring element is much smaller than the input (image). \\n\\nThank you again, please let us know if there are any queries or confusion.\"}", "{\"title\": \"The proposed idea is to replace standard nonlinear activation function with an erosion/dilation operation. The authors report encouraging results but the baseline networks are not state-of-the-art.\", \"review\": \"This paper proposes to replace the standard RELU/tanh units with a combination of dilation and erosion operations, arguing for the observation that the new operator creates more hyper-planes and therefore has more expressive power.\\n\\nThe paper is interesting and there are encouraging results which show a couple of percentage improvements over relu/tanh units. This paper is also clearly written and easy to understand. However there are two issues:\\n1. It is somewhat unclear from the paper what is the main novelty here (compared to existing morpho neurons), is it the learning of the structuring elements? 
is it the combination of the dilation+erosion operations?\\n2. The second issue is that presumably due to the fact that Conv layers are not used, the accuracy on cifar-10 and cifar-100 are significantly lower than state-of-the-art. It would make the paper extremely strong if the improvement translated to CNNs which are performing near the state-of-the-art. What happens if relu units in CNNs were swapped out for the proposed dilation/erosion operators?\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Interesting idea for using morphological operators but too preliminary\", \"review\": \"The authors introduce Morph-Net, a single layer neural network where\\nthe mapping is performed using morphological dilation and erosion.\\nI was expecting something applied to convolutional networks as such operators\\nare very popular in image processing, so the naming is a bit misleading.\\n\\nIt is shown that the proposed network can approximate any smooth function, \\nassuming a sufficiently large number of hidden neurons, that is a nice result.\\n\\nClarity should be improved, for example it is mentioned that the structuring\\nelement is learned but never clearly explained how and what difficulties it poses.\\nIn the main text it is written that alpha is {-1, 1}, which would result in a\\ncombinatorial search, but never explained how it is learned in practice.\\nThis is shown only in the appendix but it is not clear to me that using a binarization\\nwith the weights is not prone to degenerate solutions and/or to learn at all\\nif proper initialization is not used.\\nDid the authors experiment with smooth versions or other form of binarization with\\nstraight-through estimator or sampling?\\n\\nIn the proof for theorem 1 it is not clear if the convergence of the proposed\\nnetwork is faster or slower than that of a classic single layer network.\\n\\nThe main result of the paper is that the structuring element can be learned,\\nbut there is no discussion on what it is learned. Also, there is no comparison\\non related approaches that try to learn the structuring element in an end-to-end\\nfashion such as [1].\\n\\nExperiments lack a more thorough comparison with state-of-the-art and at least\\nan ablation study to show that the proposed approach is effective and has merit.\\nFor example, what is the relative contribution of using dilation and erosion\\njointly versus either one of them.\\nWhat is the comparison with a winner-take-all unit over groups of neurons\\nsuch as max-pooling?\\n\\nIt seems that extending the work to multiple layers should be trivial but it is\\nnot reported and is left to future investigations. This hints at issues with\\nthe optimization and should be discussed, is it related to the binarization\\nmentioned above?\\n\\nOverall the idea is interesting but the way the structuring element is learned\\nshould be discussed in more details and exemplified visually. Experiments need\\nto be improved and overall applicability is uncertain at this stage.\\n\\n=======\\n[1] Masci et al., A Learning Framework for Morphological Operators Using Counter--Harmonic Mean.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Thank you for pointing out\", \"comment\": \"Thank you for pointing out. 
We will change the title to something else.\\n\\nThank you,\"}", "{\"comment\": \"Hi,\\n\\nIt would be nice and very useful if you considered renaming your paper, as a paper named \\\"MorphNet: Fast & Simple Resource-Constrained Structure Learning of Deep Networks\\\" was published in CVPR 2018. I believe this is a bad name conflict, as the papers' topics are related enough to cause confusion. \\n\\nRespectfully,\\n\\nElad Eban\\n\\nSee: https://arxiv.org/abs/1711.06798\", \"title\": \"Name conflict\"}" ] }
H1x1noAqKX
Discriminative out-of-distribution detection for semantic segmentation
[ "Petra Bevandić", "Siniša Šegvić", "Ivan Krešo", "Marin Oršić" ]
Most classification and segmentation datasets assume a closed-world scenario in which predictions are expressed as a distribution over a predetermined set of visual classes. However, such an assumption implies unavoidable and often unnoticed failures in the presence of out-of-distribution (OOD) input. These failures are bound to happen in most real-life applications since current visual ontologies are far from being comprehensive. We propose to address this issue by discriminative detection of OOD pixels in input data. Different from recent approaches, we avoid making any decisions based solely on the training dataset of the primary model trained to solve the desired computer vision task. Instead, we train a dedicated OOD model which discriminates the primary training set from a much larger "background" dataset which approximates the variety of the visual world. We perform our experiments on high-resolution natural images in a dense prediction setup. We use several road driving datasets as our training distribution, while we approximate the background distribution with the ILSVRC dataset. We evaluate our approach on WildDash test, which is currently the only public test dataset with out-of-distribution images. The obtained results show that the proposed approach succeeds in identifying out-of-distribution pixels while outperforming previous work by a wide margin.
[ "out-of-distribution detection", "semantic segmentation" ]
https://openreview.net/pdf?id=H1x1noAqKX
https://openreview.net/forum?id=H1x1noAqKX
ICLR.cc/2019/Conference
2019
{ "note_id": [ "HJxQ9_hCyV", "H1eDFLhW0X", "HygePKfMTX", "rJg8XtMMa7", "S1g1JuzM6m", "SyeibDGGpQ", "S1lH3uCypm", "SJewfWvLhm", "rygvK39V2m" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544632459292, 1542731390612, 1541708119890, 1541708061983, 1541707734936, 1541707523173, 1541560492594, 1540940047231, 1540824190958 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper675/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper675/Authors" ], [ "ICLR.cc/2019/Conference/Paper675/Authors" ], [ "ICLR.cc/2019/Conference/Paper675/Authors" ], [ "ICLR.cc/2019/Conference/Paper675/Authors" ], [ "ICLR.cc/2019/Conference/Paper675/Authors" ], [ "ICLR.cc/2019/Conference/Paper675/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper675/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper675/AnonReviewer1" ] ], "structured_content_str": [ "{\"metareview\": \"The paper addresses the problem of out-of-distribution detection for helping the segmentation process.\\n\\nThe reviewers and AC note the critical limitation of novelty of this paper to meet the high standard of ICLR. AC also thinks the authors should avoid using explicit OOD datasets (e.g., ILVRC) due to the nature of this problem. Otherwise, this is a toy binary classification problem.\\n\\nAC thinks the proposed method has potential and is interesting, but decided that the authors need more works to publish.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Limited novelty\"}", "{\"title\": \"Revision 2 of the paper\", \"comment\": \"We have revised the paper according to the reviewers' comments.\", \"here_is_the_summary_of_changes\": \"1. Introduction\\nparagraph 3 - shorten uncertainty definitions (Reviewer 1: \\\"text needs a bit more improvement\\\")\\nparagraph 4 - better explain the need to differentiate between the two types of uncertainties (Reviewer 2: \\\"brings into play epistemic and aleatoric uncertainity concepts to justify the simplicity of the approach\\\")\\nparagraph 5 - introduce GAN-based OOD detection approaches (Reviewer 2: \\\"it overlooks a large body of machine learning...\\\")\\nparagraph 6 - clarify advantages of the proposed approach (Reviewer 2: \\\"I do not see any technical novelty\\\")\\n\\n2. Related work\\nParagraph 1 - mention anomaly and novelty detection as related fields (Reviewer 2)\\nParagraph 2 - distilled and moved to introduction paragraph 4 (Reviewer 1: \\\"related work section overlaps a lot with the intro\\\")\\nParagraph 5 - discuss generative models for OOD detection in more depth (Reviewer 2: \\\"It would have strengthened the paper if the approach was compared also to novelty detector\\\")\\nParagraph 6 - rephrased and improved old paragraph 5 (Reviewer 1: \\\"related work section overlaps a lot with the intro\\\")\\nParagraph 7 - removed (Reviewer 1: \\\"related work section overlaps a lot with the intro\\\")\\n\\n3. 
The proposed discriminative OOD detection approach\\nParagraph 1 - replaced with a short method description (Reviewer 1: \\\"the first two paragraphs of the method seem like they should be in the intro\\\")\\nParagraph 2 - shortened (Reviewer 1: \\\"the method of the paper can be explained more straightforwardly\\\")\\nAdd Figure 1 - (Reviewer 1: \\\"I miss a figure explaining the architecture of the model\\\", Reviewer 2: \\\"it is not clear if the OOD detector is working on a patch or on the entire image\\\")\\nParagraph 3 - rephrase 1st sentence, define ID (Reviewer 2: \\\"define the ID acronym\\\")\\nAdd paragraph 5 - justify model architecture and use of ILSVRC (Reviewer 2: \\\"if representing the set of unknown classes with ILSVRC is reasonable...\\\", Reviewer 3: \\\"...if using pretrained model, then ILSVRC is not actually pure OOD pixels\\\")\\n\\n4.2. Datasets\\nParagraph 1 - combine Vistas and Cityscapes into the same paragraph (Reviewer 1: \\\"...can be explained in fewer pages\\\")\\n\\nFigure 2 (old Figure 1) - enlarge the font (Reviewer 1: \\\"Figure 1 is impossible to read as the captions are too small\\\")\\n\\n4.6. Results on WildDash test\\nParagraph 5 - add max-traffic-softmax to Table 3 and compare max-traffic-softmax with sum-traffic-softmax on the ROB method which achieves 77% MIoU on Cityscapes (Reviewer 1: \\\"...method used for semantic segmentation is 10 points lower than the SOTA...\\\")\\n\\n4.7. Results on other datasets\\nClearer interpretation of experimental results (Reviewer 3: \\\"How to interpret the results in Table 5\\\")\\n\\n5. Conclusion\\nParagraphs 2,3 - clarified, shortened (Reviewer 1: \\\"explain more straightforwardly\\\", Reviewer 2: \\\"no technical novelty\\\", Reviewer 2: \\\"if ILSVRC is reasonable\\\")\\n\\nAdded appendix B\\nWe present results on UCSD - to compare with the image-wide approach mentioned by Reviewer 2 (Adversarially Learned One-Class Classifier for Novelty Detection).\"}", "{\"title\": \"Reply to AnonReviewer2, Part 2\", \"comment\": \"> Adversarially Learned One-Class Classifier for Novelty Detection, CVPR 2018\\n\\nWe have located and inspected the code at https://github.com/khalooei/ALOCC-CVPR2018, however it appears to be out of sync with the paper: the refinement loss in the code (grep g_r_loss) does not seem to match equation (4) in the paper. The code does not include the information needed to reproduce the numbers from the tables. Straightforward evaluation of the provided trained model on a few UCSD images does not appear to separate inliers from outliers.\\n\\n> It would have strengthened the paper if the approach was compared also to a novelty detector.\\n\\nThank you for your suggestion! We agree; we shall present such a discussion in the revised paper.\\n\\n> It is not clear if the fully convolutional OOD detector is working \\n> on a patch or on the entire image. If it is a patch, of what size?\\n\\nOur fully convolutional OOD detector operates on entire images. It outputs a dense prediction in the form of an H/32 x W/32 matrix, where H x W are the dimensions of the original image (cf. [long15cvpr]). One could say that our detector operates as if it were applied to RxR patches situated 32 pixels apart, where R is the effective receptive field of the discriminator (for DenseNet 121 finetuned on full Cityscapes images, R is around 600 pixels).\\n\\n> Page 4, define the \\u201cID\\u201d acronym. \\n\\nID stands for \\\"in-distribution\\\". 
We shall clarify that, thanks!\"}", "{\"title\": \"Reply to AnonReviewer2, Part 1\", \"comment\": \"Thank you for your review! We answer your questions as follows.\\n\\n> Unfortunately, I do not see any relevant technical novelty, and this is a major issue. \\n\\nIn our opinion, a simple solution to a difficult problem is preferred to a complex one.\\n\\n> Perhaps the only significant conclusion about this paper is that \\n> before designing a new OOD detector, if representing \\n> the set of \\u201cunknown\\u201d classes with ILSVRC is reasonable, \\n> then it makes sense to simply train a binary classifier and see how it works.\\n\\nOur experiments suggest that representing outliers with ILSVRC might be reasonable more often than not, since our method correctly classified negative WildDash images such as a white wall, two kinds of noise, an anthill closeup, an aquarium, etc.\\n\\nA concurrent submission to ICLR 2019 shows that representing outliers with ImageNet and 80 million images is a reasonable choice for a wide selection of datasets. We feel that our submission nicely complements their work by presenting a similar idea in the dense prediction context.\\n\\nhttps://openreview.net/forum?id=H1xwNhCcYm\"}", "{\"title\": \"Reply to AnonReviewer1\", \"comment\": \"Thank you for your review. We answer your concerns as follows.\\n\\n> The main problem I found with this article is that I couldn't fully understand it. \\n> Maybe because the text needs a bit more review and improvement \\n> or maybe because I'm not very familiar with the topic.\\n\\nWe are sorry the paper was not clear to you. Please provide more specific information regarding which parts of the paper could be clarified.\\n\\n> Moreover the article is 10 pages while it is encouraged to be 8. \\n> I find that the method of the paper is quite simple and can be explained \\n> more straightforwardly and in fewer pages.\\n\\nThe current method section (Section 3) is less than one page long. It appears to us that shortening that section would not decrease the number of pages. We could move Figures 2-5 to the appendix, although we feel that leaving them as they are results in a better flow.\\n\\n> The related work section overlaps a lot with the intro, I suggest combining both.\\n\\nWe shall resolve some redundancies which we introduced for the convenience of the reader:\\n\\n- remove the second-to-last paragraph and shorten the last paragraph in the introduction\\n- remove the last paragraph in the related work\\n\\n> The first two paragraphs of the method seem like they should be in the intro.\\n> Model details from the experiments should, I think, be explained in the method.\\n> I miss a figure explaining the architecture of the model.\\n\\nWe shall refactor and shorten the first two paragraphs of Section 3 and add the figure.\\n\\n> Why use the proposed semantic segmentation model and not something standard?\\n> For instance Tiramisu (that is also based on dense layers).\\n> Note that the method used for semantic segmentation is 10 points lower than the SOTA in Cityscapes.\\n\\nAs you noted in your review, OOD detection on the pixel level has not been previously investigated. We prefer to focus on baseline models at this stage in order to simplify conclusions as well as speed up the training.\\n\\nSOTA methods on Cityscapes achieve high benchmark results by recovering fine spatial detail lost due to downsampling. Our segmentation models are not as accurate on object borders or small objects like poles, but work reasonably well. 
When it comes to OOD detection, we are more interested in existence of OOD regions, rather than their exact outlines. Furthermore, as table 5 shows, cityscapes is in many ways a very specific dataset (single camera, nice weather conditions, German cities). Consequently, chasing SOTA on cityscapes is likely to poorly affect max-softmax OOD detection due to overfitting. Using simpler models is also a way of regularization.\\n\\nTable 3 in our original submission includes OOD-detection results obtained with the model which achieves 77.1 mIoU on Cityscapes. The table shows only results of OOD-detection by classification into foreign classes since this approach worked much better than max-softmax. In the revised paper we show the max-softmax results as well.\\n\\nWe agree that densely connected layers are a very good choice for semantic segmentation. However, tiramisu would not be a suitable choice for our experiments due to following reasons:\\n\\n it has a thick up-sampling path which complicates training on large images due to large memory requirements\\n the Tiramisu paper proposes exotic downsampling paths for which there are no ImageNet-pretrained parameters available; consequently this would require training from scratch and lead to at least a 10-fold increase in training time and loss of accuracy due to overfitting.\\n\\n> Figure 1 is impossible to read as the captions are too small.\\n\\nWe shall improve the captions in the revised paper.\\n\\n> The representations of figures 2-5 are difficult to interpret.\\n\\nCould you please be more specific about what could be done to clarify these figures?\\n\\n> There is no comparison to SOTA\\n\\nTo the best of our knowledge, there is almost no previous work in OOD detection on pixel level. Previous work in OOD detection focuses on classification tasks on entire images. We adapt these approaches for dense OOD-detection and show that our approach performs better. Kendall and Gal (2017) model epistemic uncertainty on pixel level, although they do not use it for OOD detection. Our early experiments with this approach resulted in poor OOD detection performance. WildDash is the first semantic segmentation benchmark that introduces OOD images. Most of existing submissions on WildDash come without an accompanying paper, so it is not clear what, if anything, was used for OOD detection.\"}", "{\"title\": \"Reply to AnonReviewer3\", \"comment\": \"Thank you for your review! We answer your questions as follows.\\n\\n> When you perform training, do you train from scratch or from a pre-trained model?\\n\\nParameters of the feature extractor were initialized from ImageNet pre-trained models. All heads are trained from scratch.\\n\\n> If using pre-trained model, then ILSVRC is not actually pure OOD pixels.\\n\\nWe do not perceive that as a problem neither in discriminative OOD detection (where we train on road driving vs ILSVRC) nor in single-class OOD detection (where we train on road driving images and rely on max-softmax). In discriminative OOD, we cast the problem as binary classification where pre-training can only help. In single-class OOD (max-softmax), the classifier is fine-tuned through 40 epochs on a road driving dataset. 
Previous work on catastrophic forgetting suggests this likely results in a complete oblivion of features for ILSVRC classes which are not seen in road driving datasets.\\n\\nPlease note that we also successfully detect OOD pixels in WildDash negative images that (at least nominally) do not have anything to do with ILSVRC (white wall, two kinds of noise, anthill closeup, aquarium, etc).\\n\\nMaybe we do not understand your concerns. Could you please clarify?\\n\\n> How to interpret the results in Table 5?\\n\\nTable 5 shows how well the proposed OOD-detection models generalize to datasets which were not seen during training.\\n\\nRows 1 and 3 show the difference between using Vistas and Cityscapes as ID dataset. When using Vistas as ID, almost no OOD pixels are detected in Cityscapes. On the other hand, when using Cityscapes as ID, most Vistas pixels are classified as OOD. This suggests that Cityscapes poorly represents the variety of traffic scenes.\\n\\nRow 2 shows that almost all Pascal VOC 2007 pixels are classified as OOD. This finding complements the results from figure 4, and suggests that training OOD detection on ILSVRC is able to generalize to other datasets.\"}", "{\"title\": \"Discriminative out-of-distribution detection for semantic segmentation\", \"review\": \"Summary:\\nThis paper addresses the problem of out-of-distribution detection for helping the segmentation process. Therefore, the detection is performed on a pixel basis. The application of the approach is to datasets used for autonomous driving, where semantic segmentation of the view of the road is a typical application. Since in a road view there will be pixels that are projections of objects that are likely not in the set of classes known by the semantic segmentation algorithm, it makes sense to flag them as being out of distribution (OOD), or not known, or to assign to them a low confidence level. The proposed approach is trivial: train a binary classifier that distinguishes image patches from a known set of classes from image patches coming from an unknown (background class). The classifier output applied at every pixel will give the confidence value. While there are different dataset options to represent the known classes, the background class is represented by images from ILSVRC. The results show that for the segmentation application the approach works better than using an adaptation of more elaborate out-of-distribution methods.\", \"quality_and_clarity\": \"The paper is well organized and is described very clearly and provides an ok set of results, despite the simplicity of the approach.\", \"originality_and_significance\": \"Unfortunately, I do not see any relevant technical novelty, and this is a major issue. Perhaps the only significant conclusion about this paper is that before designing a new OOD detector, if representing the set of \\u201cunknown\\u201d classes with ILVRC is reasonable, then it makes sense to simply train a binary classifier and see how it works.\\n\\nBesides the novelty, I disagree with the way the paper has been positioned and motivated. It brings into play epistemic and aleatoric uncertainty concepts to justify (the simplicity of) the approach, and it overlooks a large body of machine learning (novelty detection, one-class classification, \\u2026). This is also a major issue.\", \"additional_comments\": \"One of the biggest motivations for this work is that other approaches do not distinguish between epistemic and aleatoric uncertainty and this is why they do not work. 
This is regarded as a distinctive advantage of the proposed approach. It is claimed that the proposed formulation is insensitive to any aleatoric uncertainty. On the other hand, the paper is written in a way that ignores a large body of literature that goes under the name of \\u201cnovelty detection\\u201d, \\u201canomaly detection\\u201d, \\u201cone-class classification\\u201d, and related names. So, I am wondering how the approaches just mentioned compare with the proposed method, when epistemic and aleatoric uncertainty become part of the discussion. Isn\\u2019t every novelty detector insensitive to aleatoric uncertainty as well? Could the Authors clarify what they claim with that statement, while considering a broader view? \\n\\nThe paper should relate to the literature mentioned above. In particular, I would point the Authors to a couple of recent works that seem to precisely contradict the premises of the proposed approach, which are given at the beginning of section 3:\\n\\n- Adversarially Learned One-Class Classifier for Novelty Detection, CVPR 2018\\n- Generative Probabilistic Novelty Detection with Adversarial Autoencoders, arXiv, July 2018.\\n\\n\\nAgain, related to novelty detection, it looks like the proposed approach still requires tuning one or more thresholds. Therefore, it would not be that different from tuning the threshold of a novelty detector, or a one-class classifier. It would have strengthened the paper if the approach was compared also to a novelty detector.\\n\\nIt is not clear if the fully convolutional OOD detector is working on a patch or on the entire image. If it is a patch, of what size?\\n\\nPage 4, define the \\u201cID\\u201d acronym.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Interesting results, good direction to follow\", \"review\": \"This paper aims to detect out-of-distribution pixels for semantic segmentation, which is a good direction for researchers in this field to explore. As the authors point out, recent semantic segmentation systems surpass 80% mIoU on Pascal VOC 2012 and Cityscapes, which is a good achievement. Unfortunately, most existing semantic segmentation datasets assume closed-world evaluation which means that they require predictions over a predetermined set of visual classes. This work utilize data from other domain to detect undetermined classes, thus can model uncertainty better in an explicit way. I just have minor comments.\\n\\n1. When you perform training, do you train from scratch or from a pre-trained model? If using pre-trained model, then ILSVRC is not actually pure OOD pixels. \\n\\n2. How to interpret the results in Table 5?\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Difficult to understand\", \"review\": \"ML models are trained on a predefined dataset formed by a set of classes. Those classes use to be the same ones for training and testing. However, what happen when during testing time images with classes unseen during training are shown to the model? 
This article focuses on this problem, which is not currently receiving much attention from the mainstream research community and is of great importance for real-world applications.\n\nThis article tries to detect areas of the image where those out-of-distribution situations appear in semantic segmentation applications. The approach consists of training a classifier that detects which pixels are out of distribution. For training, two datasets are used: the dataset of interest and another, different one. The classifier learns to detect whether a pixel is from the dataset of interest or from another distribution.\n\nThe main problem I found with this article is that I couldn't fully understand it. Maybe because the text needs a bit more review and improvement, or maybe because I'm not very familiar with the topic. Moreover, the article is 10 pages while it is encouraged to be 8. I find that the method of the paper is quite simple and can be explained more directly and in fewer pages. The related work section overlaps a lot with the intro; I suggest combining both. The first two paragraphs of the method seem like they should be in the intro. Model details from the experiments should, I think, be explained in the method. I miss a figure explaining the architecture of the model. Why use the proposed semantic segmentation model and not something standard? For instance Tiramisu (which is also based on dense layers). Note that the method used for semantic segmentation is 10 points lower than the SOTA on Cityscapes. Figure 1 is impossible to read as the captions are too small. The representations in figures 2-5 are difficult to interpret. There is no comparison to SOTA.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}" ] }
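The discriminative OOD formulation debated in the exchanges above boils down to a two-way pixel classifier trained on an in-distribution road-driving dataset against ILSVRC imagery. The sketch below is one hedged reading of that setup, not the authors' code: the backbone, the random tensors standing in for real batches, and all names are illustrative assumptions.

```python
# Hedged sketch of a discriminative per-pixel OOD detector: a small fully
# convolutional head classifies every pixel as in-distribution (road scenes)
# vs. out-of-distribution (ILSVRC crops). Random tensors stand in for the
# real Vistas/Cityscapes and ILSVRC batches; nothing here is the paper's code.
import torch
import torch.nn as nn

backbone = nn.Sequential(                      # stand-in feature extractor
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
)
ood_head = nn.Conv2d(64, 2, kernel_size=1)     # per-pixel logits: {ID, OOD}
params = list(backbone.parameters()) + list(ood_head.parameters())
opt = torch.optim.Adam(params, lr=1e-4)
ce = nn.CrossEntropyLoss()

for step in range(10):                          # toy training loop
    id_imgs = torch.rand(4, 3, 64, 64)          # would be road-driving crops
    ood_imgs = torch.rand(4, 3, 64, 64)         # would be ILSVRC crops
    x = torch.cat([id_imgs, ood_imgs])
    y = torch.cat([torch.zeros(4, 64, 64, dtype=torch.long),   # 0 = ID pixel
                   torch.ones(4, 64, 64, dtype=torch.long)])   # 1 = OOD pixel
    loss = ce(ood_head(backbone(x)), y)
    opt.zero_grad(); loss.backward(); opt.step()
```

At test time, the softmax over the two logits gives the per-pixel OOD probability that the reviews refer to as the confidence map.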
S1ey2sRcYQ
Direct Optimization through $\arg \max$ for Discrete Variational Auto-Encoder
[ "Guy Lorberbom", "Tamir Hazan" ]
Reparameterization of variational auto-encoders is an effective method for reducing the variance of their gradient estimates. However, when the latent variables are discrete, a reparameterization is problematic due to discontinuities in the discrete space. In this work, we extend the direct loss minimization technique to discrete variational auto-encoders. We first reparameterize a discrete random variable using the $\arg \max$ function of the Gumbel-Max perturbation model. We then use direct optimization to propagate gradients through the non-differentiable $\arg \max$ using two perturbed $\arg \max$ operations.
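A minimal numerical reading of the two-argmax construction described in this abstract is sketched below. The decoder score vector `theta`, the scale `eps`, and the toy logits are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch of direct optimization through argmax: sample Gumbel noise,
# take one argmax of the plain perturbed logits and one argmax of the logits
# additionally pushed by eps * theta, then difference the one-hot indicators.
# This is our illustrative reading of the abstract, with toy values.
import numpy as np

rng = np.random.default_rng(0)

def onehot(i, k):
    v = np.zeros(k)
    v[i] = 1.0
    return v

def direct_grad(phi, theta, eps=0.5, n_samples=2000):
    """Estimate d/dphi E_gamma[ theta[argmax(phi + gamma)] ] (biased for eps > 0)."""
    k = len(phi)
    g = np.zeros(k)
    for _ in range(n_samples):
        gamma = rng.gumbel(size=k)
        z_pert = np.argmax(phi + eps * theta + gamma)  # argmax pushed by theta
        z = np.argmax(phi + gamma)                     # plain Gumbel-max sample
        g += (onehot(z_pert, k) - onehot(z, k)) / eps
    return g / n_samples

phi = np.log(np.array([0.5, 0.3, 0.2]))   # toy encoder logits
theta = np.array([1.0, 0.0, -1.0])        # toy decoder scores per state
print(direct_grad(phi, theta))
```

Smaller `eps` reduces the bias of this estimate but inflates its variance, which is the trade-off discussed in the reviews that follow.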
[ "discrete variational auto encoders", "generative models", "perturbation models" ]
https://openreview.net/pdf?id=S1ey2sRcYQ
https://openreview.net/forum?id=S1ey2sRcYQ
ICLR.cc/2019/Conference
2019
{ "note_id": [ "Hygbflpce4", "rJg3iGZLgN", "HygdSa5rJE", "SJeixZ5NJ4", "BJedFgq41V", "BJxSXov1RQ", "rylSgPw367", "S1gDwY7Na7", "BkewQsl62Q", "rJefZTGLnQ" ], "note_type": [ "comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1545420809222, 1545110179769, 1544035647559, 1543966962777, 1543966848386, 1542581020618, 1542383340718, 1541843294881, 1541372702859, 1540922618019 ], "note_signatures": [ [ "~Mingzhang_Yin1" ], [ "ICLR.cc/2019/Conference/Paper674/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper674/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper674/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper674/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper674/Authors" ], [ "ICLR.cc/2019/Conference/Paper674/Authors" ], [ "ICLR.cc/2019/Conference/Paper674/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper674/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper674/AnonReviewer1" ] ], "structured_content_str": [ "{\"comment\": \"I enjoy reading the gradient estimation methods with multiple evaluations in the paper!\\nI would like to point our concurrent work which also provides low variance, unbiased gradient estimator for discrete latent variables which may serve as a proper comparison. https://openreview.net/pdf?id=S1lg0jAcYm\", \"title\": \"Related reference\"}", "{\"metareview\": \"The paper presents a novel gradient estimator for optimizing VAEs with discrete latents, that is based on using a Direct Loss Minimization approach (as initially developed for structured prediction) on top of the Gumble-max trick. This is an interesting and original alternative to the use of REINFORCE or Gumble Softmax. The approach is mathematically well detailed, but exposition could be easier to follow if it used a more standard notation. After clarifications by the authors, reviewers agreed that the main theorerm is correct. The proposed method is shown empirically to converge faster than Gumbel-softmax, REBAR, and RELAX baselines in number of epochs. However, as questioned by one reviewer, the proposed method appears to require many more forward passes (evaluations) of the decoder for each example.\\u00a0Authors replied by highlighting that an argmax can be more computationally efficient than softmax (in cases when the discrete latent space is structured), and also clarified in the paper their use of an essential computational approximation they make for discrete product spaces. These are important aspects that affect computational complexity. But they do not address the question raised about using significantly more decoder evaluations for each example. A fair comparison for sampling based gradient estimation methods should rest on actual number of decoder evaluations and on resulting timing. 
The paper currently does not sufficiently discuss the computational complexity of the proposed estimator against alternatives, nor does it take this essential aspect into account in the empirical comparisons it reports.\nWe encourage the authors to refocus the paper and fully develop and showcase a use case where the approach could yield a clear computational advantage, like the structured encoder setting they mentioned in the rebuttal.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Original approach, but with unclear computational benefit\"}", "{\"title\": \"Reply\", \"comment\": \"I slightly increased my score because of some of the clarifications. However, I believe some points still haven't been addressed in the paper:\n\n1. I think the discussion about computational complexity should be included in the paper.\n\n2. As mentioned by reviewer 3, the evaluation of the bias of gumbel-softmax is really noisy, so I believe that either you're not using enough samples or there is something wrong. Also, you compute the bias with respect to the REINFORCE estimator; I think the methods should actually be compared to the \\\"true\\\" gradient computed by $\\sum_k \\nabla_c p(z=k) \\theta(x,z=k)$. Also, as a note, I believe there is a closed-form expression for the bias of your method. \n\n3. This is just a note, but if you consider the gradient of $\\sum_k p(z=k) \\theta(x,z=k)$ then the method you propose is equivalent to taking the directional derivative of p(z=k) along the vector \\theta(x), which is equal to the gradient when epsilon goes to zero. The \\\"mean-field\\\" approximation of the decoder you propose is then equivalent to choosing a different direction for the directional derivative.\"}", "{\"title\": \"Reviewer response (continued)\", \"comment\": \"For the annealing rate, I actually meant: what is the annealing rate, in the sense of what does the phrase mean? Even if defined in Jang et al., it would be useful to briefly restate it here.\n\nAlso, I still didn't see any explanation as to whether epsilon changes as training progresses according to some schedule.\n\nThanks for your clarification regarding the log loss. \\\"The main advantage of our framework is that it seamlessly integrates semi-supervised learning\\\" is still an overstatement, since many forms of VAE can incorporate semi-supervised learning easily. Also, some of the clarification should go in the paper, not just these review notes.\n\nThe bibliography still contains a ton of miscapitalizations, e.g. \\\"john wiley & sons\\\". Please fix.\n\nIn figure 1, I suggest using more than 500 samples. 
The GSM plot is so noisy it is hard to compare the two biases, and the trade-off is hard to see at the moment since the bias appears essentially flat for \\\"direct\\\" due to the noisiness and the scale of the y axis.\n\nFor the law of total expectation point, I actually meant that the entire first paragraph of section 4.1 and (4) and (5) are just a simple application of the law of total expectation, and could be replaced by simply stating that and the left side of (4) and the right side of (5).\n\nThere were a number of other comments that did not appear to be addressed:\n\nThe re-use of theta and psi as both model parameters and the log probability density / distribution is unnecessarily confusing.\n\nA small point, but in \\\"the challenge in generative learning is to reparameterize and optimize (2)\\\", the authors assume that q has an analytic expression for the second KL term in (1). That's often the case but definitely not always. Also, even if this KL term has an analytic expression, it is not always better to use it (see Duvenaud \\\"Sticking the landing...\\\").\n\nIn (11), I wasn't sure whether S included the supervised examples or not (i.e. whether S_1 was disjoint from or a subset of S). If disjoint, shouldn't the KL term be included, or the expectation-over-gamma term be changed to use ground truth z? I guess I was unclear on the form of loss used for the supervised data, and unclear on the motivation for this choice.\n\nThe additional sentence after \\\"for which we can approximate z^{...} efficiently\\\" is very opaque and does not really make explicit what the authors are referring to. Explicit equations would be helpful, please!\"}", "{\"title\": \"Reviewer response\", \"comment\": \"I appreciate the authors' feedback, and much of it was very helpful. However, I don't feel like the authors did a very good job of addressing my comments.\n\nRegarding Theorem 1, after carefully reviewing the authors' comments and going through some working of my own, I now believe each of the claims in Theorem 1 is correct, and some of my confusion was indeed solely due to non-standard notation rather than incorrect reasoning. It is quite a nice way to derive (10)! However, the presentation still leaves a huge amount to be desired even after the authors' improvements. Firstly, the differentiating-under-the-integral argument justifies differentiating under the integral for the expression on the right side of the first equation in the proof, but not for the expression on the left side, which is what is actually used. Please clarify the reasoning here. Secondly, for Folland (1999), Theorem 2.27, we require df/dt to exist (in their notation), but it does not exist everywhere (only almost everywhere) for the max function according to a strict definition of derivative. Please clarify why this theorem actually applies here. Also, what precisely is the integrable function which bounds the derivative? The Danskin argument is used to justify the derivation of equation (8), but the derivation of equation (9) also requires a similar derivative-of-max argument, and Danskin is not completely straightforward to apply in this case since the part inside the max is not convex.\n\nI would suggest that it might be easier to first define the integral of $g(\\gamma) \\max_z \\{v(z) + \\gamma(z)\\} d\\gamma$, derive the derivatives of this with respect to $v$, then use this to derive both (8) and (9).\n\nA few other small points in the proof of Theorem 1. 
It might be worth reiterating that $g$ is the *multivariate* Gumbel pdf to stress that the integral is not 1D. Using the limits -infty and infty also seems strange, and suggests a 1D integral. I would suggest omitting those limits. The notation $\theta$ makes sense now; I had not realized it was intended to be a vector with components indexed by $\hat{z}$. I would suggest using the more standard subscript notation $\theta_{\hat{z}}$ or just $\theta_i$ for vectors. The notation $\theta$ also doesn't work very well as it drops the dependence on $x$ (though I understand this is not relevant for the proof). The change of variables in the first equation in the proof also now makes sense to me. It would be helpful to state the result being used that the convolution of a smooth function and a non-smooth function is smooth, and also state this result precisely; it is not true in general for non-compactly-supported functions if I understand correctly. Incidentally it would also be helpful to precisely define \\\"smooth\\\". The meaning I presume is intended (infinitely continuously differentiable) is fairly standard but other definitions are also used. For \\\"differentiating under the integral, now with respect to v\\\" in the paragraph starting \\\"We turn to prove Equation (8)\\\", there is actually no need to differentiate under the integral to obtain the desired result, which is good because if I understand correctly differentiating under the integral here would not be valid. Also, \\\"taking the derivative with respect to $\epsilon = 0$\\\" is awkward shorthand for \\\"taking the derivative with respect to $\epsilon$ and setting $\epsilon$ to 0\\\"; I would use the more explicit version. Also, in \\\"Applying a change of variable...\\\", there is a missing hat on $z$.\n\nI still think it's worth mentioning that the key result (10) can also be derived using Theorem 1 in the Song paper (briefly alluding to any regularity conditions needed). This would help give additional confidence that the result is correct.\n\nFinally on Theorem 1, I think it would be helpful for the reader's intuition to state why Folland (1999) Theorem 2.27 does not apply to differentiating under the integral of expressions like the right side of (8). Intuitively to me the difference is that the gradient of the integrand in (8) is a delta function, whereas in (6) the gradient of the integrand was merely discontinuous, but bounded. Some clarifying statement would be helpful for the reader to understand when the sort of reasoning the authors are using is valid or not.\"}", "{\"title\": \"Thank you for your insightful suggestions\", \"comment\": \"We thank you for your time and effort. Many of your comments helped us to improve the submission. We uploaded a new manuscript with all suggestions.\n\nWe agree that citing Song 2016 would solve these issues, but we think that these issues are a result of poor notation rather than a mistake: we tried to condense Danskin's theorem to one line, and to implicitly work with product spaces that decouple the dependencies of $\hat z$; it was a mistake as it made our derivation unclear.\n\nThe first equation in Theorem 1: This is a convolution between a smooth function and a non-smooth function, and therefore it is smooth. Take for example a two-dimensional Gaussian random variable with mean $\mu$. The expectation of the max of its coordinates is a smooth function. 
Analytically, it equals $C\int_{-\infty}^\infty \int_{-\infty}^\infty e^{-\|\gamma - \mu\|^2/2} \max\{\gamma_1,\gamma_2\} d \gamma_1 d \gamma_2$. While $\max \{\gamma_1,\gamma_2\}$ certainly depends on $\gamma$ through $\mu$, the integral is a smooth function of $\mu$. We defined $g(\gamma)$ more precisely above Equation 3 to show it is a product space, so it decouples the dependencies of $\theta$ within $\hat z$: the notation $g(\hat \gamma - \epsilon \theta - \phi_v )$ implicitly uses the independent product space $\prod_{\hat z} g(\hat \gamma(x,\hat z) - \epsilon \theta(x,\hat z) - \phi_v(x,\hat z) )$. We borrowed this notation from Gaussian random variables, where in this case it is $e^{-\|\gamma - \mu\|^2/2} = \prod_{\hat z} e^{-(\gamma(\hat z) - \mu(\hat z))^2/2}$. \n\n\\\"We turn to prove Equation 8\\\" paragraph:\nWe agree we chose poor notation. The function $\max_{\hat z} \{\epsilon \theta(x,\hat z) + \phi_v(x,\hat z) + \gamma(\hat z) \}$ is the maximum of linear functions of $\epsilon$. Therefore Danskin's Theorem (Proposition 4.5.1 in Convex Analysis and Optimization by Bertsekas) states that $\partial_\epsilon(\max_{\hat z} \{\epsilon \theta(x,\hat z) + \phi_v(x,\hat z) + \gamma(\hat z) \}) = \theta(x,z^{\epsilon \theta + \phi_v + \gamma})$ whenever the $\arg \max$ is unique. Since the $\arg \max$ is unique with probability one, we can continue without problems, thus overcoming the general position condition in McAllester et al. 2010 and the regularity conditions in Song et al. 2016.\n\nClarifications:\n- Annealing: We set the annealing rate to be 1e-3 to follow Jang et al. We stop at $0.1$ to avoid gradient blowup.\n- Semi-supervised: we referred to general loss functions, beyond the log-loss. We agree that the log loss is a natural choice, but there are recent cases where semi-supervision is important and the log-loss cannot capture the structures accurately. Such losses can extend Corro and Titov's \\\"Differentiable Perturb-and-Parse: Semi-Supervised Parsing with a Structured Variational Autoencoder\\\".\n- We used 500 examples in CelebA.\n- We will fix the bibliography.\n- We will add the bias to our bias/variance tradeoff plot.\"}", "{\"title\": \"Thank you for your constructive comments\", \"comment\": \"We thank you for your thoughtful comments. Here are answers to the concerns raised by the review. We complement these answers with a revised submission (appendix and paper).\n\nWall-clock time for CelebA with $10$ binary attributes is 0.13 seconds for Gumbel-Softmax and 0.06 seconds for Gumbel-Max, when the discrete latent space is structured (spin glass model). More generally: Gumbel-Softmax uses in its forward computation (i) one computation of $\theta$ (getting as input a softmax) and (ii) $|{\cal Z}|$ computations of $\phi$, one for each $z \in {\cal Z}$ (due to the normalization of the softmax). In its backward computation it uses (i) one gradient for $\theta$ and (ii) $|{\cal Z}|$ computations of derivatives of $\phi$ (due to the normalization of the softmax), one for each $z \in {\cal Z}$. In contrast, Gumbel-Max uses in its forward computation (i) one computation of $\theta$ (getting as input the argmax) and (ii) a max computation over $\phi$. 
In its backward computation it requires another max operation (a total of two max operations), now with $\theta$ (when $\theta$ is decomposable, as in our discrete product space approximation, this is as efficient as computing the maximum over $\phi$) and two gradient computations of $\phi$. \n\nThe computational complexity of the max operation is sometimes much less than the computational complexity of the normalization constant, since the max operation does not necessarily require going over all $z \in {\cal Z}$ in all cases. This happens in structured prediction models, where the max operation can be computed efficiently with integer linear solvers. Structured prediction models are important in practice since they capture correlations between labels (as in the CelebA problem). \n\nThe work [1] in its one-dimensional form considers $\sum_z e^{\phi_v(x,z)} \theta(x,z)$ (assuming the normalization constant is one for simplicity). Compared to Gumbel-Softmax and Gumbel-Max, it requires in its forward computation $|{\cal Z}|$ computations of $\phi$ and $\theta$, and in its backward computation $|{\cal Z}|$ computations of $\phi$ and $\theta$ gradients. We agree that such a comparison is in order; we will compare the works.\n\nWe agree that Eq 9 can be computed similarly to Eq 4 and Eq 5; it will result in REINFORCE for Gumbel-Max: the update rule of variational Bayes in its discrete setting is $\sum_z e^{\phi_v(x,z)} \theta(x,z) \nabla_v \phi_v(x,z)$ (assuming the normalization constant is one for simplicity). When doing it in the perturbation space, this will lead to the REINFORCE update rule with respect to perturbation models, and the gradient is $ \mathbb{E}_{\gamma \sim g} [ \nabla \log(g(\gamma)) \nabla \phi(x,z^{\phi + \gamma }) \theta(x,z^{\phi + \gamma})] $ where $g$ is the probability density function of the perturbation. This representation perhaps gives some insight into the variance of REINFORCE (multiplying the gradient by a log-gradient and $\theta$), compared to our (biased) approach.\n\nWe added a plot (Figure 6 in the Appendix), also experimenting with depth and different loss functions. When $n$ increases the gap becomes smaller and the difference is negligible (and random).\n\nThe variance experiment was conducted as follows: the network was trained over all the samples until convergence; we then computed the encoder gradients 1000 times (with 1000 different Gumbels) over one sample and computed the variance over the 1000 Gumbels. We will add a comparison to Gumbel-Softmax with temperature, and the bias as well. However, we are not sure that $\epsilon$ is the right equivalent of temperature in Gumbel-Softmax. Instead, we think that the analogue is the perturbation variance: the temperature signifies the max-argument, as happens when the variance of the perturbation approaches zero. $\epsilon$ in our setting inserts the gradient signal.\n\nIn our experiments, we focused on the settings of Gumbel-Softmax, to be able to compare to the previous methods. For the structured setting we worked on CelebA but limited ourselves to a small number of discrete variables to be able to compare to Gumbel-Softmax. In retrospect, we should have emphasized the structured encoder setting, where our method excels (when we compute two max operations instead of summing over all structures), and we will elaborate on that.\n\nIn the semi-supervised setting, we can plug in any loss. 
In the MNIST task it sometimes performs better and sometimes worse. Our setting was chosen for computational reasons, since it allowed us to set the perturbed prediction to the true label.\"}", "{\"title\": \"Worthwhile and interesting paper, but exposition could use some work (rating maintained after author feedback).\", \"review\": \"This paper proposes combining the Gumbel-max trick and \\\"direct loss minimization\\\" for variance reduction in VAEs with discrete latent variables. This is a natural combination (in hindsight), since the Gumbel-max trick turns sampling into non-differentiable optimization, and direct loss minimization provides a way to optimize the expected value of a non-differentiable loss. The paper is well-written for the most part and is backed by good experimental results. However, it seems like some of the mathematical details and some of the exposition could be greatly improved.\n\nI think there are several mistakes in the reasoning presented in the proof of Theorem 1 (see detailed comments below). Theorem 1 in the current paper seems to me to be a special case of Theorem 1 in (Song 2016), where the expectation over data is replaced by an expectation over the Gumbel variable gamma. If I've understood correctly, it seems like it would be more correct and concise to simply cite that paper with some explanatory comments.\n\nThe word \\\"direct\\\" occurs quite a lot in the paper. It sometimes seemed misplaced. For example for \\\"The direct differentiation of the resulting expectation\\\" in the introduction, in what sense is the differentiation direct, and what would non-direct differentiation be?\n\nIn section 3, that's not the meaning of the term \\\"exponential family\\\".\n\nThe re-use of theta and psi as both model parameters and the log probability density / distribution is unnecessarily confusing.\n\nA small point, but in \\\"the challenge in generative learning is to reparameterize and optimize (2)\\\", the authors assume that q has an analytic expression for the second KL term in (1). That's often the case but definitely not always. Also, even if this KL term has an analytic expression, it is not always better to use it (see Duvenaud \\\"Sticking the landing...\\\").\n\nIn (3), the usual notation is P(x = i) where x is the random variable and i is its possible value, whereas in (3) the random variable z^{\\phi + \\gamma} appears on the right of the equals sign.\n\nThe first paragraph of section 4.1 and (4) and (5) are just a simple application of the law of total expectation, and it would be simpler and clearer to state that.\n\n\\\"gradient of the decoder\\\" should be \\\"gradient of the decoder log probability\\\" (or log prob density depending on preference). Similarly with \\\"the decoder is a smooth function\\\". The decoder is a conditional probability distribution (at least according to my understanding of conventional usage).\n\nIn the first equation in the proof of Theorem 1, it seems as though the authors are using the standard change of variables formula for integrals. However the new variable \\hat{\\gamma} depends on \\hat{z} through \\theta, so I don't see how it's valid to ignore the max in the way the present paper does. 
One way to see that something is wrong is the fact that the integrand on LHS has \\\\hat{z} as a bound variable only, whereas the integrand on RHS has \\\\hat{z} as both a bound variable (inside the max) and a free variable (since \\\\theta depends on \\\\hat{z}, though strangely that is not written in the equation). What is the value of \\\\hat{z} used for \\\\theta on RHS?\\n\\nThere's a missing [] after \\\\partial_\\\\epsilon in the third line of the paragraph starting \\\"We turn to prove Equation (8)\\\".\\n\\nIn the same line, I don't see why the two expectations are equal. It seems to me that the differentiation w.r.t. epsilon ignores the fact that changing epsilon occasionally changes z^{\\\\epsilon \\\\theta + \\\\phi_v + \\\\gamma} in a discontinuous way. The term being differentiated has both a continuous-in-epsilon component and a piecewise-constant-in-epsilon component, and the latter appears to have been ignored. While the gradient of a piecewise constant function is zero almost everywhere, the occasional large changes (which could be thought of as delta functions) still can make a large contribution to the overall expression once we take the expectation. To look at it another way, if the reasoning here is correct, why can't the same argument be used on the RHS of (8), first to take the derivative inside the expectation and subsequently to compute the derivative as zero, since the inner term is a piecewise constant function of v? Yet clearly the RHS of (8) is not always zero.\\n\\nAround \\\"However when we approach the limit, the variance of the estimate increases...\\\", I think it would be extremely helpful to explain that for small epsilon, we occasionally obtain a large gradient (and otherwise zero), while for large epsilon we often obtain a moderate non-zero gradient. That gives some insight into the effect of epsilon, and why the variance is larger for small epsilon.\\n\\nAny reason not to plot the bias in right Figure 1, which is ostensibly about the bias-variance trade-off?\\n\\nI didn't follow the meaning of the diagram or caption for left Figure 1.\\n\\nIn (11), I wasn't sure whether S included the supervised examples or not (i.e. whether S_1 was disjoint from or a subset of S). If disjoint, shouldn't the KL term be included, or the expectation-over-gamma term be changed to use ground truth z? I guess I was unclear on the form of loss used for the supervised data, and unclear on the motivation for this choice.\\n\\nIn the last sentence of section 5.1, should \\\"chain rule\\\" be \\\"variance reparameterization trick\\\"?\\n\\nIn section 5.2, it would be helpful to mention what mean field means in terms of the variational distribution q (namely q(z | x) = \\\\prod_i q(z_i | x) ). Also, the term \\\"mean field\\\" is not conventionally used for general distributions (such as the decoder here) as far as I'm aware, only for variational distributions. \\\"Conditionally independent\\\" might be clearer.\\n\\nWhat does \\\"for which we can approximate z^{...} efficiently\\\" refer to?\\n\\nIn section 6.1, what is the annealing rate? Also, the minimal epsilon is set to 0.1. Is epsilon changed as training progresses according to some schedule?\\n\\n\\\"The main advantage of our framework is that it seamlessly integrates semi-supervised learning\\\" seems like an overstatement. Wouldn't semi-supervised learning be relatively straightforward to incorporate into any form of VAE? 
And why not just use log p(x, z) for updating the decoder parameters and log q(z | x) for updating the encoder parameters?\n\nHow many labeled examples were used for the CelebA semi-supervised learning?\n\nSome bibliography typos. For example, no capitalization throughout (e.g. \\\"gumbel\\\" instead of \\\"Gumbel\\\"). Also lots of arxiv preprints cited when published papers exist (e.g. Jang 2017 should be ICLR 2017 not arxiv preprint).\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"A significant contribution\", \"review\": \"The authors propose a method to apply the reparametrization trick when the random variables of interest are discrete. Their technique is based on a formulation of the objective function in terms of Gumbel-Max operators. They propose a derivation of the gradient in terms of an auxiliary variable \\epsilon, such that the resulting gradient estimate is biased but the bias is reduced as \\epsilon approaches zero, at the cost of increasing variance. Experiments are performed with VAE including discrete latent variable models. The authors show how their method converges faster than other baselines formed by estimators of the gradient given by the REBAR, RELAX and Gumbel-soft-max methods. In experiments with semi-supervised VAEs, their method outperforms the Gumbel softmax method in terms of accuracy and objective function.\", \"quality\": \"The theoretical derivations seem rigorous and the experiments performed clearly indicate that the proposed method can outperform existing baselines.\", \"clarity\": \"The paper is clearly written and easy to read. I found the network architecture shown in the left of Figure 1 a bit confusing; it needs to be explained more clearly.\", \"significance\": \"The experimental results clearly show that the proposed method can outperform existing baselines and that the proposed contribution is significant.\", \"novelty\": \"The proposed method is novel up to my knowledge. This is the first time I have seen the proposed theoretical derivations, which are significantly different from previous approaches.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"A principled approach with weak empirical results\", \"review\": \"This work proposes a new (biased) gradient estimator to learn discrete auto-encoders. Similarly to the gumbel-softmax estimator, this paper proposes to use the gumbel-max trick and the reparametrization trick, but instead of relaxing the argmax by a softmax, the authors derive a formula for the gradient based on direct loss optimization to compute the gradient through the argmax.\", \"pros\": [\"The approach is well motivated and the proof of theorem 1, which gives the formula of the new gradient estimator, seems correct.\"], \"cons\": [\"The principal downside of the proposed approach is that it requires computing the value of the objective for several values of z, which makes it more computationally expensive than gumbel-softmax. Could the authors compare the different estimators in terms of running time instead of epochs, for example in fig 2? It seems like RELAX would perform similarly or better in terms of wall-clock time.\", \"[1] also proposed an estimator that requires evaluating the objective for different values of z, and showed that it is unbiased and optimal (lowest variance). 
I think the authors should mention this related work and how their approach differs. I also think the authors should compare their work to [1].\", \"Since both gumbel-softmax and the proposed approach are biased, could the authors give some intuitions on why they believe their approach is better?\", \"I believe the expectation of the right-hand side of equation (9) can be computed in closed form by using a formula similar to eq (4) and (5), which replaces the expectation by a sum over the possible values of z. This will lead to a gradient estimator with no variance; can the authors comment on this?\", \"I think the bias induced by the mean-field approximation of the decoder should be investigated more thoroughly. Could the authors plot the gap as a function of n, for example? What happens if we also increase the number of categories? (There is a typo in this section: it should be k^n instead of n^k.) Can they compare to gumbel-softmax? Is there a threshold at which gumbel-softmax becomes better?\", \"It's not clear in what setting the variance plotted in Fig. 1 is computed. Is it computed on the discrete VAE experiment? If so, how many latent variables and categories? Could the bias also be provided? Could it be compared to gumbel-softmax with varying temperature?\", \"The experiments are a bit toyish; it's not clear what happens when the tasks are more complex, the architectures for the encoder and decoder are deeper, or the latent space is bigger. In particular, the authors only consider linear encoders and decoders when comparing the ELBO of the different methods.\", \"In the semi-supervised settings, what happens if we don't set the perturbed label to the true label?\"], \"conclusion\": \"The experiments are quite toyish and the approach is more computationally expensive than gumbel-softmax. More experiments should be done to clearly show the advantage of this method compared to gumbel-softmax.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
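The reviewer's request above — measuring bias against the exact gradient $\sum_k \nabla p(z=k) \theta(x,z=k)$ rather than against REINFORCE — has a simple closed form for a single categorical variable, since Gumbel-max sampling of $\arg\max(\phi+\gamma)$ draws exactly from $\mathrm{softmax}(\phi)$. A hedged sketch of that reference gradient (toy values, illustrative helper names):

```python
# Exact gradient of E_{z ~ softmax(phi)}[theta_z] w.r.t. phi, usable as the
# "true" reference when measuring an estimator's bias. Derivation:
# dE/dphi_j = p_j * (theta_j - E), with p = softmax(phi), E = sum_k p_k theta_k.
import numpy as np

def exact_grad(phi, theta):
    p = np.exp(phi - phi.max())
    p = p / p.sum()                   # softmax(phi)
    return p * (theta - p @ theta)    # p_j * (theta_j - E)

phi = np.log(np.array([0.5, 0.3, 0.2]))
theta = np.array([1.0, 0.0, -1.0])
print(exact_grad(phi, theta))
# Bias of any sampled estimator = its Monte Carlo mean minus this vector.
```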
BJe1hsCcYQ
Lorentzian Distance Learning
[ "Marc T Law", "Jake Snell", "Richard S Zemel" ]
This paper introduces an approach to learn representations based on the Lorentzian distance in hyperbolic geometry. Hyperbolic geometry is especially suited to hierarchically-structured datasets, which are prevalent in the real world. Current hyperbolic representation learning methods compare examples with the Poincar\'e distance metric. They formulate the problem as minimizing the distance of each node in a hierarchy with its descendants while maximizing its distance with other nodes. This formulation produces node representations close to the centroid of their descendants. We exploit the fact that the centroid w.r.t the squared Lorentzian distance can be written in closed-form. We show that the Euclidean norm of such a centroid decreases as the curvature of the hyperbolic space decreases. This property makes it appropriate to represent hierarchies where parent nodes minimize the distances to their descendants and have smaller Euclidean norm than their children. Our approach obtains state-of-the-art results in retrieval and classification tasks on different datasets.
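The closed-form centroid property claimed in this abstract is easy to probe numerically. The sketch below uses the standard hyperboloid model with a curvature parameter beta; the lifting, the toy points, and the centroid formula as written are our hedged reading of the abstract, not code from the paper.

```python
# Hedged numerical sketch: squared Lorentzian distance on the hyperboloid
# <x,x>_L = -beta, and a centroid obtained by rescaling the Euclidean mean
# back onto the manifold. Toy points; illustrative reading only.
import numpy as np

def lorentz_inner(a, b):
    return -a[0] * b[0] + a[1:] @ b[1:]

def lift(u, beta=1.0):
    """Lift spatial coordinates u so that <x,x>_L = -beta."""
    return np.concatenate(([np.sqrt(beta + u @ u)], u))

def sq_lorentz_dist(a, b, beta=1.0):
    return -2.0 * beta - 2.0 * lorentz_inner(a, b)

def centroid(points, beta=1.0):
    m = points.mean(axis=0)
    return np.sqrt(beta) * m / np.sqrt(abs(lorentz_inner(m, m)))

pts = np.stack([lift(np.array([0.3, 0.1])), lift(np.array([-0.2, 0.4]))])
c = centroid(pts)
print(lorentz_inner(c, c))            # ~ -1.0: centroid lies on the hyperboloid
print(c[1:], np.linalg.norm(c[1:]))   # spatial part and its Euclidean norm
```

Sweeping beta in such a sketch illustrates the property the abstract claims: the Euclidean norm of the centroid shrinks as the curvature of the space decreases.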
[ "distance learning", "metric learning", "hyperbolic geometry", "hierarchy tree" ]
https://openreview.net/pdf?id=BJe1hsCcYQ
https://openreview.net/forum?id=BJe1hsCcYQ
ICLR.cc/2019/Conference
2019
{ "note_id": [ "S1ejr8gWx4", "BklocZeKJ4", "Bkx5FsmP1N", "Bkx8735NJV", "HJlQ5vP4kV", "HkeimBHmJE", "Hylsv6FiT7", "rklLNTKs6X", "B1epRhtspX", "HyxZY3FiaQ", "SJe6XoC02m", "rkeqJ1V6hX", "Byx1QW7N2X" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544779331270, 1544253843340, 1544137602411, 1543969822263, 1543956363353, 1543882018839, 1542327651261, 1542327597542, 1542327509134, 1542327417414, 1541495588816, 1541385953761, 1540792598875 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper673/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper673/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper673/Authors" ], [ "ICLR.cc/2019/Conference/Paper673/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper673/Authors" ], [ "ICLR.cc/2019/Conference/Paper673/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper673/Authors" ], [ "ICLR.cc/2019/Conference/Paper673/Authors" ], [ "ICLR.cc/2019/Conference/Paper673/Authors" ], [ "ICLR.cc/2019/Conference/Paper673/Authors" ], [ "ICLR.cc/2019/Conference/Paper673/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper673/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper673/AnonReviewer2" ] ], "structured_content_str": [ "{\"metareview\": \"Dear authors,\\n\\nThe reviewers all appreciated the treatment of the topic and the quality of the writing. It is rare for all reviewers to agree on this, congratulations.\\n\\nHowever, all reviewers also felt that the paper could have gone further in its analysis. In particular, they noticed that quite a few points were either mentioned in recent papers or lacked an experimental validation.\\n\\nGiven the reviews, I strongly encourage the authors to expand on their findings and submit the improved work to a future conference.\", \"confidence\": \"3: The area chair is somewhat confident\", \"recommendation\": \"Reject\", \"title\": \"A well-written paper that is a bit lacking in novelty\"}", "{\"title\": \"The authors are right about their claim on differentiability\", \"comment\": \"Thanks for the detailed reply. It was a mistake on my side (I ignored the term x/abs(x) in my derivations). Indeed the proposed parametrization had advantages in its smoothness. However, I still think there should be some numerical evidence on how/whether such an advantage can affect the optimization.\\n\\nWhat do you think? I am open to a discussion on whether this empirical evaluation on numerical stability is necessary, and whether the current experiments are sufficient.\"}", "{\"title\": \"Reply to differentiability\", \"comment\": \"We thank the reviewer for taking time to check our statement about the differentiability of Eq. (3).\\nFor simplicity, we will write the proof in .tex code since we cannot update the pdf file.\\n\\nLet us note $\\\\textbf{c} = (x, c_2)$ and $\\\\textbf{d} = (d_1, d_2)$ the considered points in Eq. (3) when the dimensionality is 2 (as suggested by the reviewer). We show in the following that Eq. (3) is not differentiable when $\\\\textbf{c} = \\\\textbf{d}$. To this end, we consider that our variable is $x$ (i.e. the first element of \\\\textbf{c}). \\nWe also note $h = x - d_1$. \\nNote that we could equivalently consider that our variables are the other elements of $\\\\textbf{c}$ or $\\\\textbf{d}$. 
\nSince we assume that $\textbf{c} = \textbf{d}$, we have $c_2 = d_2$ and we study the behavior of $x$ as it tends to $d_1$, or equivalently as $h$ tends to 0.\n\nWhen $c_2 = d_2$, Eq. (3) can be written as:\n\begin{equation}\narcosh(f(h)) = log( f(h) + \sqrt{f^2(h) - 1})\n\end{equation}\nwhere\n\begin{equation}\nf(h) = 1 + 2 \frac{h^2}{(1 - (d_1 + h)^2 - d_2^2)(1 - d_1^2 - d_2^2)} = 1 + 2 \frac{h^2}{\alpha}\n\end{equation}\nwhere we note $\alpha = (1 - (d_1 + h)^2 - d_2^2)(1 - d_1^2 - d_2^2) > 0$, which is positive since the points $\textbf{c}$ and $\textbf{d}$ are constrained to be in the interior of the unit disk.\n\nFor all $h$, $arcosh(f(h))$ is then nonnegative. We also have $f(0) = 1$ and $arcosh(f(0)) = 0$.\n\nWe now study two one-sided limits. The first one is the right-sided limit:\n\begin{equation}\nlim_{h \to 0^+} \frac{arcosh(f(h)) - arcosh(f(0))}{h} = lim_{h \to 0^+} \frac{arcosh(f(h))}{h} = lim_{h \to 0^+} (arcosh(f(h)))'\n\end{equation}\nwhich is nonnegative since $arcosh(f(h))$ is nonnegative. The second one is the left-sided limit:\n\begin{equation}\nlim_{h \to 0^-} \frac{arcosh(f(h)) - arcosh(f(0))}{h} = lim_{h \to 0^-} \frac{arcosh(f(h))}{h} = lim_{h \to 0^-} (arcosh(f(h)))'\n\end{equation}\nwhich is nonpositive.\n\nWe show in the following that these two limits are different, which implies that the function is not differentiable when $\textbf{c} = \textbf{d}$. It suffices to show that at least one of these limits is nonzero; we study the first one and show that it is nonzero.\n\nThe derivative of arcosh wrt $z > 1$ is:\n\begin{equation}\narcosh'(z) = \frac{1}{\sqrt{z^2 - 1}}\n\end{equation}\nThe derivative of $f$ wrt $h$ is:\n\begin{equation}\nf'(h) = \frac{4h (d_1 h + d_2^2 + d_1^2 - 1)}{(d_2^2 + d_1^2 - 1)(h^2 + 2 d_1 h + d_2^2 + d_1^2 - 1)^2}\n\end{equation}\nThe derivative of $arcosh(f)$ wrt $h$ is then $f'(h) arcosh'(f(h))$, which can be written:\n\begin{equation}\n\frac{4h (d_1 h + d_2^2 + d_1^2 - 1)}{(d_2^2 + d_1^2 - 1)(h^2 + 2 d_1 h + d_2^2 + d_1^2 - 1)^2 \sqrt{(1 + 2 \frac{h^2}{(1 - (d_1 + h)^2 - d_2^2)(1 - d_1^2 - d_2^2)})^2-1}}\n\end{equation}\nSimplifying, the right-sided limit is:\n\begin{equation}\nL = lim_{h \to 0^+} (arcosh(f(h)))' = lim_{h \to 0^+} \frac{2(d_1 h + d_2^2 + d_1^2 - 1)}{(h^2 + 2 d_1 h + d_2^2 + d_1^2 - 1) \sqrt{ \left( h^2 + (1 - (d_1 + h)^2 - d_2^2)(1 - d_1^2 - d_2^2)\right)}}\n\end{equation}\nwhich is equal to:\n\begin{equation}\nL = \frac{2}{1 - d_2^2 - d_1^2}\n\end{equation}\nwhich is nonzero by definition of the domain of $\textbf{d} = (d_1, d_2)$. For instance, when $\textbf{d} = 0$, we have $L = 2$.\n\nA similar proof shows that the left-sided limit is:\n\begin{equation}\nlim_{h \to 0^-} (arcosh(f(h)))' = -L\n\end{equation}\nThe fact that these two one-sided limits are different shows that Eq. (3) is not differentiable when $\textbf{c} = \textbf{d}$.\n\nWe would also like to emphasize that $L$ increases as the l2 norm of $\textbf{d}$ increases.\"}", "{\"title\": \"Reply to differentiability\", \"comment\": \"Thanks for the reply. I double checked that $d_{\mathcal{P}}(c,d)$ is differentiable on $(c,d)\in(\mathcal{P}^d)^2$. 
If this is still not clear, please write down your derivations.\"}", "{\"title\": \"About the differentiability of the Poincar\u00e9 metric on P^2\", \"comment\": \"The Poincar\u00e9 metric is not differentiable on P^2 (or P^d in general for any value of d) when the compared points are equal.\nThe formulation of the Poincar\u00e9 metric on P^d is given in Eq. (3): when c = d (the optimal case that we might want to obtain for some pairs), Eq. (3) is equal to arcosh(1). However, due to its definition, arcosh(x) = log( x + sqrt( x^2 - 1)) is not differentiable at x = 1 (due to the sqrt term).\nWe agree that arcosh is differentiable on (1, +inf). Moreover, the points have to be reprojected onto the interior of the unit disk if the optimized variables are in P^2.\"}", "{\"title\": \"Numerical stability\", \"comment\": \"Thank you very much for the response.\n\nI still think there should be some experimental comparison with the Riemannian exp map on the advantage of the proposed method from the optimization perspective (e.g. numerical stability as claimed in the paper). This will make the contribution more complete. Notice that the Poincar\u00e9 distance metric is differentiable on P^2 (P is the Poincare disk). Why isn't it?\"}", "{\"title\": \"We have updated the paper.\", \"comment\": \"We thank the reviewers; we clarify some points in the individual responses.\n\nWe have updated the paper with some requested additions.\n\nThe references that we use in our rebuttals are:\n[A] Ungar, Analytic Hyperbolic Geometry in N dimensions\n[B] Ungar, Barycentric Calculus in Euclidean and Hyperbolic Geometry, 2010\n[C] Gulcehre et al., Hyperbolic attention networks. Arxiv 2018\n[D] Nickel and Kiela, Learning continuous hierarchies in the lorentz model of hyperbolic geometry, ICML 2018\n[E] Ganea et al., Hyperbolic neural networks. NIPS 2018\n[F] De Sa et al., Representation tradeoffs for hyperbolic embeddings, 2018\"}", "{\"title\": \"Response to AnonReviewer2\", \"comment\": \"- \u201cTheorem 3.4 (centroid computation with constraints) shows that minimizing the energy function $\sum_{i} d_L^2 (x_i, a)$, when a is constrained to a discrete set, is equivalent to minimizing $d_L(c,a)$, where $c$ is given by Lemma 3.2. This is interesting as compared to the previous two theorems, but it is not clear whether/how this equivalence is used in the proposed embedding.\u201d\n\nWe would like to emphasize why this theorem is important for one of the main contributions of the paper. \nWe show that the distances with respect to a set of points can be reduced to calculating the distance to the centroid in the case of the Lorentzian distance. Interpreting distances to a set of points can then be done by interpreting their centroid. \nFrom this observation, we study the properties of the centroid, which can be written in closed form. In particular, we show that the Euclidean norm of the centroid decreases as the curvature of the space decreases; representing trees then becomes easier. \n\n- \u201cThese contributions are useful but incremental. Notably, (1) needs more experimental evidence (e.g. 
a toy example) to show the numerical instability of the other methods, and to show the learning curves of the proposed re-parametrization against the Riemannian stochastic gradient descent, which are not given in the paper.\\u201d\\n\\nAs mentioned above, our goal is not to prove that our algorithm is more stable than existing approaches although we explain that the Poincar\\u00e9 distance metric is not differentiable everywhere on the domain and its gradient tends to infinity when distances are infinitesimal. Both our approach and the Riemannian stochastic gradient descent are stable, although we can use momentum-based optimizers.\\n\\nAs already mentioned, the main advantage of our approach is that we are able to interpret the behavior of distances with sets of similar points.\"}", "{\"title\": \"Response to AnonReviewer3\", \"comment\": \"- \\u201cUsing Weierstrass allows to use standard optimization methods without leaving the manifold. However, the optimization is still performed on a Riemannian manifold.\\u201d\\n\\nWe understand the concern that a Riemannian optimizer would probably be more appropriate since our representations lie on a manifold. \\nHowever, from Eq. (9), our distance function can also be seen as the sum of a simple bilinear form between real vectors and another term promoting some similarity of their Euclidean norms. This formulation is then very similar to optimizing (squared) Euclidean distances.\\n\\n- \\u201cComputing the centroid in closed form is interesting but isn't really exploited in the paper.\\u201c\\n\\nWe agree that we do not explicitly use the closed-form solution of the centroid in our experiments. However, our last theorem explains that minimizing the distances to a set of points is equivalent to minimizing the distance to its centroid. \\nOur study of the centroid is important to understand the behavior of our distance function with the considered set of similarity constraints (based on hierarchical relationships).\\n\\n- \\u201cThe only use of the centroid seems then to justify the regularization method, i.e., that parents should have a smaller norm than their children. However, this insight alone seems not particularly novel, as the same insight can be derived for standard hyperbolic method and has, for instance, been discussed in Nickel & Kiela (2017, 2018), Ganea et al (2018), De Sa (2018).\\u201d\\n\\nAlthough the fact that the representation of the common ancestor should have lower Euclidean norm is mentioned in these papers, it is never proven that it has lower Euclidean norm. The closest example that mentions a minimizer of an expectation of (squared) hyperbolic distances is De Sa [F]. However, they do not exploit a closed-form of the centroid and have to use a gradient-based method to minimize an optimization problem based on it (see [F], Section 4.2). \\nWe show that the Euclidean norm of the centroid of a set of point can be controlled with the curvature of the hyperbolic space. We experimentally show its impact in Table 2. Retrieval performance (Mean Rank and MAP) in Table 1 shows how close to its descendents a common ancestor is. The Poincar\\u00e9 metric is defined for a fixed curvature of -1 and cannot have smaller curvature given its formulation exploiting arcosh. \\n\\nFig. 1 of our submission shows an example where the centroid of the Poincar\\u00e9 metric does not have a smaller Euclidean distance than the set of points. 
\\nBy manipulating the curvature of the space, the centroid of the Lorentzian norm can produce centroids with smaller Euclidean norm (as we demonstrate that they depend on each other).\\nWe can also plot the centroid of the squared Poincar\\u00e9 distance, which shows that the corresponding centroid does not have a smaller Euclidean norm either.\\n\\n- \\u201cp.3: Projection onto the Poincar\\u00e9 ball/manifold is only necessary when the exponential map isn't used. Nickel & Kiela (2018), Ganea et al (2018) therefore don't have this problem.\\u201d\\n\\nThat is exactly what we explain in p.3, although Ganea et al. [E] also reproject their embeddings onto the Poincar\\u00e9 ball at each iteration (as explained in the \\u201cNumerical errors\\u201d paragraph of Section 4 of [E]). \\nNickel & Kiela (2018) propose to work in the hyperboloid space to avoid this reprojection as we explain in p.3.\\n\\n- \\u201cp.7: Since MR and MAP are ranking measures, and the ranking of distances between H^d and the L^2 distance should be identical, it is not clear why the experiments show significant differences for these methods when \\\\beta=1\\u201d\\n\\nAlthough the order of distances is the same between H^d and the L^2 (since they only differ by an arcosh activation function), these two distance functions are not equivalent. \\nThe arcosh has a logarithmic form and then tends to penalize differences between small distances more than differences between large distances. \\nThis generates a difference during training that is for instance similar to the difference obtained by training a linear loss vs a quadratic loss. The quadratic loss tends to penalize outliers more than a linear loss. \\nThe fact that these distance functions are not equivalent explains the difference of the results.\\n\\n- \\u201cp.7: Embeddings in the Poincar\\u00e9 ball and the Hyperboloid are both compatible with the regularization method in eq.14. It would be interesting to also see results for these methods with regularization.\\u201d\\n\\nWe agree but the point of that regularizer was to show that the global structure of the tree could be easily recovered by using such constraints without having a significant impact on the retrieval performances (i.e. Mean Rank and Mean Average Precision).\\nWe have added for instance in the updated version a study of the impact of such regularization on classification performance in Table 3. Removing that regularization consistently leads to (slightly) better classification scores.\"}", "{\"title\": \"Response to AnonReviewer1\", \"comment\": \"-\\u201cEq 11, bears lots of similarity to the Einstein gyro-midpoint\\u201d\\n \\nAs mentioned in our introduction, there exist various hyperbolic geometries with corresponding distances: the Poincar\\u00e9 distance, the Lorentzian distance and the gyrodistance [A,B]. We have investigated their relationships. \\nTo the best of our knowledge, the centroid of the Poincar\\u00e9 distance metric has no closed form solution. We provide in our submission a closed-form solution for the squared Lorentzian distance for any negative curvature of the space. \\nThe main motivation to introduce the centroid is our last theorem which shows that comparing distances wrt the squared Lorentzian distance with a set of points is equivalent to comparing distances with their centroid. By studying the properties of the centroid, we then have an idea of the behavior of distances with the corresponding set of points.\\n\\nGulcehre et al. 
[C] optimize the Poincar\\u00e9 distance but exploit as representative the Einstein gyrocentroid, also known as the Einstein midpoint when the set has only 2 points. Unlike the 2 previous centroids, that point is in general not an optimizer of an expectation over gyrodistances to a set of points. Nonetheless, when the set contains only 2 points, it is the minimizer of the Einstein addition of the gyrodistances between it and the two points of the set by using the gyrotriangle inequality. However, the Einstein addition is not commutative, the expectation properties then do not generalize for a set of more than 2 points although the gyrocentroid does preserve left gyrotranslation.\\n \\nConceptually, the main difference is that the Lorentzian centroid can be seen as a minimizer of some expectation over distances, the Lorentzian centroid is then ideal to represent the common ancestor of a set of tree nodes. On the other hand, the gyrocentroid can be seen as a point which preserves left gyrotranslation (see Remark 5.7 of [A]). The motivation of exploiting the point that preserves left gyrotranslation to be compared wrt the Poincar\\u00e9 metric (which corresponds to another distance) is not clear to us, at least for our task.\\n\\nEach of the distances have different centroids which are illustrated in the updated Fig 1.\\n \\n- \\\"Experiments are limited to small scale datasets.\\\"\\n \\nWe report quantitative results on the same datasets as [D] and [E].\\nMore exactly, [E] consider two tasks but they mention for the first task: \\\"for the sentence entailment classification task, we do not see a clear advantage of hyperbolic MLR compared to its Euclidean variant.\\\" We then compared our method in the task where they see an improvement when using hyperbolic representations. Our approach outperforms theirs.\\n\\n- \\\"The paper claims that Riemannian optimization is not necessary for this model\\\"\\n\\nWe do not sell the fact that we do not use Riemannian optimization as a contribution. Other approaches such as [D] can also do that (although their function is not differentiable on the whole domain and gradients tend to infinity for pairs of points with infinitesimal distance). We only say that, given the formulation of our distance in Eq. (9), using standard SGD is sufficient to outperform current hyperbolic approaches.\\nWe have not tried a Riemannian optimizer since the performance of a standard SGD already works well. We plan to do that in the future.\\n \\n- \\\"The novelty of the approach is limited compared to [1].\\u201d\\n\\nThe main contribution of the paper is not only a closed-form solution for the centroid. We exploit our last theorem that explains that distances with a set of points can be reduced to the study of the distance wrt the centroid. 
We then analyze some of its properties and explain why they are appropriate to represent trees.\n\n- \"Have you tried learning beta?\"\n \nFollowing the review, we learned beta as a variable constrained to be positive using a softplus activation function.\nHere are results on some datasets; they are comparable with those reported when beta is 0.01:\n- ACM: MR: 1.03 - MAP: 98.4 - SROC: 53.4\n- Eurovoc: MR: 1.06 - MAP: 96.5 - SROC: 33.8\n- WordNet verbs: MR: 1.10 - MAP: 94.7 - SROC: 26.1\nThe learned beta has values in the interval [10^(-6), 10^(-4)].\n \n- \u201cOn Eurovoc the results are worse than d_P in H^d\u201d\n\nOnly the SROC score is worse; we obtain better or comparable retrieval scores (i.e. MR and MAP) on this dataset. The SROC score can be improved by increasing the regularization parameter lambda, but we only reported scores for one value of lambda.\n\n- \u201cCan you reduce the number of data points in Fig 2\u201d\n\nAs requested, we have added Fig. 5, which only plots the names of the kernel method researchers mentioned in Table 4. Most of them are represented close to the same radius, which validates our study of the Lorentzian distance for small values of beta.\nWe have also added Fig. 3 to illustrate the Lorentzian distance to 1 point as a function of beta.\"}", "{\"title\": \"Review for Lorentzian Distance Learning\", \"review\": \"Summary\n\nLearning embeddings of graphs in hyperbolic space has become popular and yielded promising results. A core reason for that is that learning hierarchical representations of graphs is easier in hyperbolic space due to the curvature and the geometrical properties of the hyperbolic space. Similar to [1, 2], this paper uses the Lorentzian model of the hyperbolic space in order to learn embeddings of the graph. The main difference of the proposed approach is that they come up with a closed-form solution such that each node representation is close to the centroid of its descendants. A curious property of the proposed centroid equation is that the Euclidean norm of the centroid also encodes information related to specificity. This paper also introduces two additional hyperparameters. The beta hyperparameter is selected to control the curvature of the space. Depending on the task, the optimal curvature can be tuned to a different value. This also ties closely with de-Sitter spaces. The authors provide results on different graph embedding benchmark tasks. The paper claims that an advantage of the proposed approach is that the embeddings of the model can be tuned with regular SGD without needing to use Riemannian optimization techniques.\n\nQuestions\n\nHave you tried learning beta instead of selecting it as a hyperparameter?\nThe paper claims that Riemannian optimization is not necessary for this model, but have you tried optimizing the model with Riemannian optimization methods?\nEquation 11 bears lots of similarity to the Einstein gyro-midpoint method proposed by Abraham Ungar, which is also used by [2]. Have you investigated the relationship between the two formulations?\nOn the Eurovoc dataset the results of the proposed method are worse than d_P in H^d. Do you have a justification of why that happens?\n\n\nPros\nThe paper delivers some interesting theoretical findings about the embeddings learned in hyperbolic space, e.g. a closed-form equation for the centroid.\nThe paper is written well. 
The goal and motivations are clear.\\n\\n\\nCons\\nExperiments are limited to small-scale, traditional graph datasets. It would be more interesting to see how those embeddings would perform on large-scale datasets, such as for learning knowledge-base embeddings or for recommendation systems.\\n\\nAlthough the idea is interesting, learning graph embeddings has already been explored in [1]. The main contribution of this paper thus mainly focuses on the closed-form equation for the centroid and the curvature hyperparameter. These changes provide significant improvements on the results, but the novelty of the approach is in that sense still limited compared to [1].\", \"minor_comment\": \"It is really difficult to understand what is in Figures 2 and 3. Can you reduce the number of data points and just emphasize a few nodes in the graph that show a clear hierarchy?\\n\\n[1] Nickel, Maximilian, and Douwe Kiela. \\\"Learning Continuous Hierarchies in the Lorentz Model of Hyperbolic Geometry.\\\" arXiv preprint arXiv:1806.03417 (2018).\\n[2] Gulcehre, Caglar, Misha Denil, Mateusz Malinowski, Ali Razavi, Razvan Pascanu, Karl Moritz Hermann, Peter Battaglia et al. \\\"Hyperbolic Attention Networks.\\\" arXiv preprint arXiv:1805.09786 (2018).\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Review - Lorentzian Distance Learning\", \"review\": \"The paper proposes a new approach to compute hyperbolic embeddings based on the squared Lorentzian distance. This choice of distance function is motivated by the observation that the ranking of these distances is equivalent to the ranking of the true hyperbolic distance (e.g., on the hyperboloid). For this reason, the paper proposes to use this distance function in combination with ranking losses as proposed by Nickel & Kiela (2017), as it might be easier to optimize. Moreover, the paper proposes to use Weierstrass coordinates as a representation for points on the hyperboloid.\\n\\nHyperbolic embeddings are a promising new research area that fits well into ICLR. Overall, the paper is well written and easy to understand. It introduces interesting ideas that are promising to advance hyperbolic embeddings. However, in the current version of the paper, these ideas are not fully developed or their impact is unclear.\\n\\nFor instance, using Weierstrass coordinates as a representation seems to make sense, as it allows the use of standard optimization methods without leaving the manifold. However, it is important to note that the optimization is still performed on a Riemannian manifold. For that reason, following the Riemannian gradient along geodesics would still require the exponential map. Moreover, optimization methods like Adam or SVRG are still not directly applicable. Therefore, it seems that the practical benefit of this representation is limited.\\n\\nSimilarly, being able to compute the centroid efficiently in closed form is indeed an interesting aspect of the proposed approach. Moreover, the paper explicitly connects the centroid to the least common ancestor of children in a\\ntree, which could be very useful to derive new embedding methods. Unfortunately, this advantage isn't really exploited in the paper. The approach taken in the paper simply uses the loss function of Nickel & Kiela (2017), and this loss doesn't make use of centroids, as the paper notes itself. 
The only use of the centroid then seems to be to justify the regularization method, i.e., that parents should have a smaller norm than their children. However, this insight alone seems not particularly novel, as the same insight can be derived for standard hyperbolic methods and has, for instance, been discussed in Nickel & Kiela (2017, 2018), Ganea et al (2018), De Sa (2018). Using the centroid to derive new hyperbolic embeddings could be interesting, but, unfortunately, this is currently not done in the paper.\\n\\nFurther comments\\n- p.3: Projection back onto the Poincar\\u00e9 ball/manifold is only necessary when\\n the exponential map isn't used. The methods of Nickel & Kiela (2018), Ganea et al (2018) therefore don't have this problem.\\n- p.7: Since MR and MAP are ranking measures, and the ranking of distances between H^d and the L^2 distance should be identical, it is not clear to me why the experiments show significant differences for these methods when \\\\beta=1.\\n- p.7: Embeddings in the Poincar\\u00e9 ball and the Hyperboloid are both compatible with the regularization method in eq.14 (using their respective norms). It would be interesting to also see results for these methods with regularization.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"An incremental work on hyperbolic embedding with Lorentzian Distance\", \"review\": \"This paper proposed an unsupervised embedding method for hierarchical or graph datasets. The embedding space is a hyperbolic space as in several recent works such as (Nickel & Kiela, 2017). The author(s) showed that using the proposed embedding, the optimization has better numerical stability and better performance.\\n\\nI am convinced of the correctness and the experiment results, and I appreciate that the paper is well written with interesting interpretations such as the demonstration of the centroid. However, the novelty of this contribution is limited and may not meet the publication standard of ICLR. I suggest that the authors enhance the results and resubmit this work in a future venue.\\n\\nTheoretically, there are three theorems in section 3:\\n\\nTheorem 3.1 shows that the coordinate transformation from the proposed parametrization to the hyperboloid and then to the Poincare ball preserves the monotonicity of the Euclidean norm. This is straightforward by writing down the two transformations.\\n\\nLemma 3.2 and Theorem 3.3 state the centroid in a closed expression based on the Lorentzian distance, taking advantage of the fact that the Lorentzian distance is in a bilinear form (no need to take the arcosh(), so the analysis is much simpler). These results are quite straightforward from the expression of the energy function.\\n\\nTheorem 3.4 (centroid computation with constraints) shows that minimizing the energy function\\n$\\\\sum_{i} d_L^2 (x_i, a)$, when a is constrained to a discrete set, is equivalent to minimizing $d_L(c,a)$, where $c$ is given by Lemma 3.2.\\nThis is interesting as compared to the previous two theorems, but it is not clear whether/how this equivalence is used in the proposed embedding.\\n\\nTechnically, there are three novel contributions:\\n\\n1. The proposed unconstrained reparametrization of the hyperboloid model does not require projecting the embedding points onto the hyperbolic manifold in each update.\\n\\n2. 
The cost is based on the Lorentzian distance, which is a monotonic transformation of the Riemannian distance (without taking the arccosh function). Therefore the similarity (a heat kernel applied on the modified distance function) is measured differently than in other works. Informally, one can think of it as t-SNE vs. SNE, which use different similarity measures in the target embedding.\\n\\n3. The authors empirically discussed different choices of beta, which was typically chosen as beta=1 in previous works, showing that tuning the beta hyperparameter can give better embeddings.\\n\\nThese contributions are useful but incremental. Notably, (1) needs more experimental evidence (e.g. a toy example) to show the numerical instability of the other methods, and to show the learning curves of the proposed re-parametrization against Riemannian stochastic gradient descent, which are not given in the paper.\\n\\nHaving listed these theoretical and technical contributions, overall I find that most of these contributions are incremental and not significant enough for ICLR.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
r1e13s05YX
Neural network gradient-based learning of black-box function interfaces
[ "Alon Jacovi", "Guy Hadash", "Einat Kermany", "Boaz Carmeli", "Ofer Lavi", "George Kour", "Jonathan Berant" ]
Deep neural networks work well at approximating complicated functions when provided with data and trained by gradient descent methods. At the same time, there is a vast number of existing functions that programmatically solve different tasks in a precise manner, eliminating the need for training. In many cases, it is possible to decompose a task into a series of functions, of which for some we may prefer to use a neural network to learn the functionality, while for others the preferred method would be to use existing black-box functions. We propose a method for end-to-end training of a base neural network that integrates calls to existing black-box functions. We do so by approximating the black-box functionality with a differentiable neural network in a way that drives the base network to comply with the black-box function interface during the end-to-end optimization process. At inference time, we replace the differentiable estimator with its external black-box non-differentiable counterpart such that the base network output matches the input arguments of the black-box function. Using this ``Estimate and Replace'' paradigm, we train a neural network, end to end, to compute the input to black-box functionality while eliminating the need for intermediate labels. We show that by leveraging the existing precise black-box function during inference, the integrated model generalizes better than a fully differentiable model, and learns more efficiently compared to RL-based methods.
[ "neural networks", "black box functions", "gradient descent" ]
https://openreview.net/pdf?id=r1e13s05YX
https://openreview.net/forum?id=r1e13s05YX
ICLR.cc/2019/Conference
2019
{ "note_id": [ "BJeFD9qrxV", "r1xKLxl5R7", "rJxBpMKOC7", "SkeZmft_07", "ByxwCZF_0Q", "BJx7vWFOCm", "rkgznMOCnX", "r1xRq4Zah7", "rkg1n-hLhm" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1545083488619, 1543270481489, 1543176892653, 1543176729034, 1543176654805, 1543176539170, 1541468841965, 1541375125795, 1540960678528 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper672/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper672/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper672/Authors" ], [ "ICLR.cc/2019/Conference/Paper672/Authors" ], [ "ICLR.cc/2019/Conference/Paper672/Authors" ], [ "ICLR.cc/2019/Conference/Paper672/Authors" ], [ "ICLR.cc/2019/Conference/Paper672/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper672/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper672/AnonReviewer2" ] ], "structured_content_str": [ "{\"metareview\": \"The paper focuses on hybrid pipelines that contain black-boxes and neural networks, making it difficult to train the neural components due to non-differentiability. As a solution, this paper proposes to replace black-box functions with neural modules that approximate them during training, so that end-to-end training can be used, but at test time use the original black box modules. The authors propose a number of variations: offline, online, and hybrid of the two, to train the intermediate auxiliary networks. The proposed model is shown to be effective on a number of synthetic datasets.\", \"the_reviewers_and_ac_note_the_following_potential_weaknesses\": \"(1) the reviewers found some of the experiment details to be scattered, (2) It was unclear what happens if there is a mismatch between the auxiliary network and the black box function it is approximating, especially if the function is one, like sorting, that is difficult for neural models to approximate, and (3) the text lacked description of real-world tasks for which such a hybrid pipeline would be useful.\\n\\nThe authors provide comments and a revision to address these concerns. They added a section that described the experiment setup to aid reproducibility, and incorporated more details in the results and related work, as suggested by the reviewers. Although these changes go a long way, some of the concerns, especially regarding the mismatch between neural and black box function, still remain.\\n\\nOverall, the reviewers agreed that the issues had been addressed to a sufficient degree, and the paper should be accepted.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Important problem setup and strong evaluation\"}", "{\"title\": \"Response to authors' reply\", \"comment\": \"I have read the authors' reply. I am generally happy with the revision and will keep my rating.\"}", "{\"title\": \"Revision change-log\", \"comment\": \"We thank all of the reviewers and we apologize for the late response.\\n\\nTo address comments of the reviewers, the PDF file of the paper has been updated with the following changes:\\n\\n- We have added an appendix that contains the implementation details of our experiments, including architecture and hyperparameters.\\n- Another experiment was added to Table 3 in Section 4. 
It is an additional k=4 experiment for the Image-Lookup task, and its primary addition is an example of an instance where the argument extractor was able to learn its desired functionality in spite of an imperfect estimator.\\n- Added citations in the Related Works section (under RL), as well as some small typo fixes.\\n\\nThanks for all of the great comments\\n\\n- Authors\"}", "{\"title\": \"Reply to the reviewer's suggestion\", \"comment\": \"Thank you for your detailed and positive review.\\n\\nRegarding your last comment, if we have understood it correctly, you are suggesting using the argument extractor when sampling inputs for the estimator during training, for better exploration of the argument space. We believe that this has the potential to improve sample complexity. The hybrid training has served us well enough in our experiments, but it is a great idea for future work. One challenge is that it is not immediately clear what the initial prior should be (and how strongly it should be weighted when updating the posterior). \\n\\nThank you again for your comments\\n- Authors\"}", "{\"title\": \"Appendix added to clarify experiment details\", \"comment\": \"Thank you for the detailed and positive review.\\n\\nTo answer your questions regarding specific experiment details, we have added an appendix which contains all of the relevant details. Specifically, we have indeed used hybrid training for experiments 4.2 and 4.3, where the pre-training continued until either a satisfactory performance was reached, such as 90%, or performance stopped increasing (the behavior depends on the difficulty of the task and the quality of the offline sampling). Experiment 4.1 uses online training (this was specified in Table 1's caption but we have now added the information to the main text).\\n\\nThe pre-training took a negligible amount of time in comparison to the target/online training since it relied on a sub-component of the network and loss (additionally, the argument domain is far smaller than the input domain).\\n\\nThe exact loss function is also now detailed in the appendix, but in essence, we used a direct sum (i.e. weights of 1.0) of the target loss and online loss. \\n\\nThank you again for your comments\\n- Authors\"}", "{\"title\": \"Paper revised (including experiment appendix) to address comments\", \"comment\": \"Thank you for the very detailed criticism and positive review.\", \"we_have_updated_the_paper_to_address_your_concerns_in_the_following_way\": \"(1) We have added an appendix section with experimental details for the purpose of reproducing the results. While we cannot make a concrete promise, we will make a concerted effort to release the code.\\n\\n(2) We refer to both of your points (1) and \\\"minor\\\" together:\\n\\nWe have added more detail to Table 3 in the form of a k=4 experiment and an additional column for the estimator accuracy on the online sampling dataset. We could not execute a k=5 experiment because of resource constraints in training an estimator network for a 10^5 lookup table. The experiment results are:\\nImage-Lookup k=4\", \"train\": \"0.69\", \"test\": \"0.1\", \"inference\": \"0.95\", \"argument_extractor\": \"0.986\", \"estimator\": \"0.7\\n\\nIn this experiment, the bbf estimator only reaches a performance of 0.7 (in other words, it learns about 70% of the values in the 10^4 lookup table), which proves sufficient for the argument extractor to learn the desired functionality, allowing the network to perform well at inference time with the real black-box function. 
This result should help address your concern in point (1). We note that it is not necessary for the estimator to learn in a way that generalizes to unseen inputs (because it is discarded after training), as long as it performs the correct mapping on the training set.\\n\\nIt is true that learning is dependent on the performance of the estimator. Whether the argument extractor can learn from an imperfect estimator is likely dependent on the ratio of noisy signals (from incorrect estimator decisions) to informative signals, and the ability of the interface between them to generalize from the informative examples to the noisy examples. In the case where the estimator is never correct for any decision of one of the argument extractors, that specific argument extractor will be unable to learn (in the Image-Lookup case, this would mean the estimator is incorrect for an entire dimension slice of the lookup bbf tensor).\\n\\n(3) Regarding the practicality of the approach towards real-world tasks like Semantic Parsing and Question Answering, this has indeed been our main motivation for this work. The synthetic Text-Lookup-Logic experiment was meant to serve as a first step in that direction. We've added a short mention of these motivations in the introduction.\\n\\nWe have also added your suggestions to the Related Works section.\\n\\nThe new revision of the paper has been uploaded to this page.\\n\\nThank you again for your comments\\n- Authors\"}", "{\"title\": \"Interesting approach with good results on synthetic tasks\", \"review\": \"This paper presents an approach, called EstiNet, to train a hybrid model which uses both neural networks and black-box functions. The key idea is that, during training, a neural network can be used to approximate the functionality of the black-box functions, which makes the whole system end-to-end differentiable. At test time, the true black-box functions are still used. The training objective comprises two parts: L_bbf, the loss for approximating the black-box function, and L_target, the loss for the end-to-end goal. They tried different variations of when to train the black-box function approximator. It is shown to outperform baselines like an end-to-end differentiable model or NALU over 4 synthetic tasks in sample efficiency. There is some analysis of how the entropy loss and label smoothing help with the gradient flow.\\n\\nThe proposed model is interesting, and is shown to be effective in the synthetic tasks. The paper is well-written and easy to follow. However, some of the experiment details are missing or scattered in the text, which might make it hard for the readers to reproduce the result. I think it helps to have the experimental details (number of examples, number of offline pretraining steps, size of the neural network, etc) organized in some tables (could be put in the appendix).\", \"two_main_concerns_about_how_generally_applicable_is_the_proposed_approach\": \"1. It helps to show how L_target depends on L_bbf, or how good the approximation of the black-box function has to be to make the approach applicable. For example, some functions, such as sorting, are hard to approximate by a neural network in a generalizable way, so in those cases, is it still possible to apply the proposed approach? \\n\\n2. The proposed approach can be better justified by discussing some potential real-world applications. Two closely related applications I can think of are visual question answering and semantic parsing. 
However, it is hard to find good black-box functions for VQA and people often learn them as neural networks, and the functions in semantic parsing often need to interact with a database or knowledge graph, which is hard to approximate with a neural network.\", \"some_minor_issues\": \"Table 3 isn\\u2019t very informative since k=2 and k=3 provide very similar results. It would help to show how large k needs to be for the performance to severely degrade.\", \"missing_references\": \"The Related Works section only reviewed some reinforcement learning work on synthetic tasks. However, with some bootstrapping, RL is also shown to achieve state-of-the-art performance on visual question answering and semantic parsing tasks (Johnson et al, 2017; Liang et al, 2018), which might be good to include here. \\n\\nJohnson, J., Hariharan, B., van der Maaten, L., Hoffman, J., Fei-Fei, L., Zitnick, C. L., & Girshick, R. B. (2017, May). Inferring and Executing Programs for Visual Reasoning. In ICCV (pp. 3008-3017).\\nLiang, C., Norouzi, M., Berant, J., Le, Q., & Lao, N. (2018). Memory augmented policy optimization for program synthesis with generalization. arXiv preprint arXiv:1807.02322.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Well written paper and convincing results\", \"review\": \"This paper is about training a neural network (NN) to perform regression given a dataset (x, y) *and* a black box function which we know correctly maps from some intermediate representation to y. Instead of learning a NN directly from x to y, we want to make use of this black box function and learn a mapping from x to the intermediate representation. Call this the \\\"argument extractor\\\" NN. The problem is that (i) the black box function is typically non-differentiable so we cannot learn end-to-end and (ii) we don't have labels for the intermediate representations in order to learn a NN to approximate the black box function. The authors propose to train in three different ways: (1) offline training: train an auxiliary NN that approximates the black box function based on data generated by sampling the input uniformly (or similar); then train both the auxiliary NN and the argument extractor NN together end-to-end using (x, y) data, (2) online training: train the auxiliary NN and the argument extractor NN together, based on (x, y) data; data for training the auxiliary NN comes from the argument extractor NN during the main training, and (3) hybrid training: pre-train the auxiliary NN as in (1) and then train both NNs as in (2).\", \"experimental_results_show\": [\"this approach leads to better performance than regressing directly from x to y in the small data regime,\", \"this approach leads to better generalization (being able to add more image numbers during test),\", \"this approach learns faster than an actor-critic based RL agent,\", \"this approach can be useful even if the functionality of the black-box function inherently cannot be estimated by a differentiable function (lookup table) - the resulting argument extractor NN is useful when used with the non-differentiable black box function,\", \"hybrid training is the best; offline training is the worst,\", \"penalizing low output entropy helps.\", \"It wasn't quite clear to me which training procedure was used for experiments 4.1-4.3. Presumably hybrid? It would also be nice to see how much time is spent in pre-training vs main training. 
In figure 2, what are the update steps for EstiNet (since there are two losses + pretraining)?\", \"I found this paper to be generally well-written and the results convincing.\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"A good idea with proper validation\", \"review\": \"The paper proposes a method to solve end-to-end learning tasks using a combination of deep networks and domain-specific black-box functions. In many machine learning tasks there may be a sub-part of the task that can be easily solved with a black-box function (e.g., hard-coded logic). The paper proposes to use this knowledge in order to design a deep net that mimics the black-box function. This deep net, being differentiable, can be utilized while training in order to perform back-propagation for the deep nets that are employed to solve the remaining parts of the task.\\n\\nThe paper is well written and in my opinion the experiments are solid. They show significant gains over well-designed baselines. (It should be noted that I am not super familiar with prior work in this area and may not be aware of some related baselines that can be compared with.)\\n\\nIn Section 3.1.2 the authors discuss offline and online methods to train the mimicking deep network of a black-box function. The offline version suffers from wasting samples on unwanted regions while the online version will have a cold-start problem. However, I believe there can be a better solution than the hybrid strategy. In fact there is a clear explore/exploit trade-off here. Therefore, one may start with a prior over the input domain of the black-box function and then, as the argument extractor learns, the posterior can be updated. Then we can Thompson sample the inputs from this posterior in order to train the mimicking network. I think such a bandit-inspired approach will be interesting to try out.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}" ] }
SJxCsj0qYX
Stackelberg GAN: Towards Provable Minimax Equilibrium via Multi-Generator Architectures
[ "Hongyang Zhang", "Susu Xu", "Jiantao Jiao", "Pengtao Xie", "Ruslan Salakhutdinov", "Eric P. Xing" ]
We study the problem of alleviating the instability issue in the GAN training procedure via new architecture design. The discrepancy between the minimax and maximin objective values could serve as a proxy for the difficulties that the alternating gradient descent encounters in the optimization of GANs. In this work, we give new results on the benefits of multi-generator architecture of GANs. We show that the minimax gap shrinks to \epsilon as the number of generators increases with rate O(1/\epsilon). This improves over the best-known result of O(1/\epsilon^2). At the core of our techniques is a novel application of Shapley-Folkman lemma to the generic minimax problem, where in the literature the technique was only known to work when the objective function is restricted to the Lagrangian function of a constraint optimization problem. Our proposed Stackelberg GAN performs well experimentally in both synthetic and real-world datasets, improving Frechet Inception Distance by 14.61% over the previous multi-generator GANs on the benchmark datasets.
[ "generative adversarial nets", "minimax duality gap", "equilibrium" ]
https://openreview.net/pdf?id=SJxCsj0qYX
https://openreview.net/forum?id=SJxCsj0qYX
ICLR.cc/2019/Conference
2019
{ "note_id": [ "BJlH7A5CJN", "HygcVFrRyV", "rkeRfHNu1E", "SkxUTyZxyE", "Hkx_aMpapQ", "ByghlSHQTQ", "rylVJXBmaX", "HyebUgh0nQ", "Hkl61NqR2m", "BygSR2UR37", "SyxWuDuj3X", "rkxGpTi9hX", "r1ekBWtq3X", "HJxvQznKn7" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544625693243, 1544603953636, 1544205590055, 1543667645855, 1542472383790, 1541784820026, 1541784283644, 1541484617419, 1541477348557, 1541463245318, 1541273448593, 1541221818421, 1541210423495, 1541157406546 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper671/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper671/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper671/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper671/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper671/Authors" ], [ "ICLR.cc/2019/Conference/Paper671/Authors" ], [ "ICLR.cc/2019/Conference/Paper671/Authors" ], [ "ICLR.cc/2019/Conference/Paper671/Authors" ], [ "ICLR.cc/2019/Conference/Paper671/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper671/Authors" ], [ "ICLR.cc/2019/Conference/Paper671/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper671/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper671/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper671/AnonReviewer1" ] ], "structured_content_str": [ "{\"metareview\": \"This paper proposes new GAN training method with multi generator architecture inspired by Stackelberg competition in game theory. The paper has theoretical results showing that minmax gap scales to \\\\eps for number of generators O(1/\\\\eps), improving over previous bounds. Paper also has some experimental results on Fashion Mnist and CIFAR10 datasets.\\n\\nReviewers find the theoretical results of the paper interesting. However, reviewers have multiple concerns about comparison with other multi generator architectures, optimization dynamics of the new objective and clarity of writing of the original submission. While authors have addressed some of these concerns in their response reviewers still remain skeptical of the contributions. Perhaps more experiments on imagenet quality datasets with detailed comparison can help make the contributions of the paper clearer.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"ICLR 2019 decision\"}", "{\"title\": \"my reply\", \"comment\": \"Thank the authors for their detailed reply.\\n\\nQ1. \\\"On one hand, the population form of Stackelberg GAN includes some forms of previous multi-generators GANs as special cases.\\\" They are definitely not your special cases, but your formulation is a simplified version of theirs. \\\"On the other hand, the empirical losses of Stackelberg GAN and prior GANs are different.\\\" Through our discussion, I got more clear about your formulation. The population loss is the same, but the empirical losses are different. This makes the proof of Theorem 9 the same as before, but minor changes in the training algorithm. This reinforces my impression that the contribution on the algorithmic part is minor.\\n\\nQ2. I do not have time to get back to check whether all typos are corrected and whether the new proof makes sense. I hope that I can read a roughly correct proof (at least typos do not prevent me finishing reading) in my first read. \\n\\nQ3. 
The authors addressed my Q3 by Theorem 9. From my perspective, Theorem 9 is correct, but the authors' proof is sloppy. The equality between (10) and (11) deserves much more explanation.\\n\\nOverall, I still think that the contribution on the algorithmic part is minor.\"}", "{\"title\": \"changing the order of expectations v.s. explicitly computing one expectation\", \"comment\": \"Hi Reviewer 3,\\n\\nFrom my point of view, the proposed method does not change the order of expectations, but just explicitly computes the expectation with respect to the multinomial distribution (N, 1/I, 1/I, ..., 1/I) over the I generators. \\n\\nWhen we have a loss on output of ensemble, the empirical loss is:\\n\\\\sum_{n=1}^N f(D_{\\\\theta}(x_n)) + \\\\sum_{i=1}^I \\\\sum_{n=1}^{N_i} f(1-D_{\\\\theta}(G_{\\\\gamma_i}(z_{i,n}))),\\nwhere {x_n}_{n=1}^N are iid samples from the dataset, {G_{\\\\gamma_i}(z_{i,n}): 1\\\\le i\\\\le I, 1\\\\le n \\\\le N_i} are iid samples from the mixture generative model, and \\\\sum_{i=1}^I N_i = N.\\n\\nThe population loss is \\n\\\\Expect_{x_n}[ \\\\sum_{n=1}^N f(D_{\\\\theta}(x_n)) ] + \\\\Expect_{N_i, z_{i,n}}[ \\\\sum_{i=1}^I \\\\sum_{n=1}^{N_i} f(1-D_{\\\\theta}(G_{\\\\gamma_i}(z_{i,n}))) ]. \\nThe first expectation is easy, resulting in N \\\\Expect_{x} [ f(D_{\\\\theta}(x)) ].\\nThe second expectation can be simplified by performing a conditional expectation on N_i, and then explicitly computing the expectation with respect to N_i. (I omit a few steps here, but I can provide them if needed.) This results in\\n\\\\sum_{i=1}^I N/I \\\\Expect_{z_i}[ f(1-D_{\\\\theta}(G_{\\\\gamma_i}(z_{i}))) ],\\nwhich is exactly the population loss proposed in this paper. \\n\\nTherefore, I think that there's no exchange of expectations in the proposed method. The population loss function used in this paper and in previous ensemble-discriminator papers is the same. The difference is only in the empirical loss, which I think is minor.\"}", "{\"title\": \"I don't see why Reviewer 2 says the paper's contribution is negligible.\", \"comment\": \"The paper's treatment clearly changes the order of taking the expectation, i.e.\\nminimizing the average of losses vs. minimizing the loss of averages.\\n\\nIf the problem has a linear structure, then it's true that changing the order does not matter, but in this case, it does matter, and the paper supports this claim by showing experimental results. (I'm pretty sure that, depending on the problem structure, one is an upper/lower bound of the other in certain cases.)\"}", "{\"title\": \"Summary of the revision\", \"comment\": \"The authors would like to thank the reviewers for bringing up valuable questions/suggestions to improve our paper. We have updated the manuscript to address all concerns from the reviewers appropriately. We summarize our revision below.\\n----------------------------------------------------------------------------------------------------------------------------------------\\nPosition | For which comment | Revised version\\n----------------------------------------------------------------------------------------------------------------------------------------\\nSection D | Density approximation | New results for density approximation\\nTable 1 | Comparison with MAD-GAN | New experiments for MAD-GAN\\nSection 2 | Concave closed hull? | 
Avoid using the term by redefining \\\\hat{cl} f\\nBullet 1, Page 2 | Difference with prior GANs | Clarify algorithmic difference\\nWhole paper | Typos | Fix all typos\\n-----------------------------------------------------------------------------------------------------------------------------------------\\n\\nAgain, we would like to emphasize that our contributions focus more on the approximate Nash equilibrium in Theorems 1, 3, and Corollary 2, improving over previous best-known bounds. We argue that the algorithmic similarity is an advantage of this paper, which means that our theoretical results work for a broader class of multi-generator GANs.\\n\\nFinally, we would like to mention that the technical contents in the revised version are the same as those in the previous version. We would like to kindly ask reviewers to re-evaluate the paper focusing more on the technical contributions of the paper. Thank you.\"}", "{\"title\": \"Stackelberg game can be a zero-sum game. We provide a necessary condition about why more data generators help by analyzing the existence of approximate Nash equilibrium.\", \"comment\": \"We thank the reviewer for the comments. However, we respectfully disagree with a few points from the reviewer. We will address all concerns from the reviewer in the following form of Q/A.\\n\\nQ1 (The first paragraph of review). A Stackelberg competition is a non-zero-sum game. From a game theoretic perspective, Stackelberg GAN still yields a 2-player zero-sum game. Hence, I have doubts about the general finding that more data generating components decrease the duality gap.\\nA1. A Stackelberg competition can be a zero-sum game. There are many references supporting this claim. We cite some of them below. In fact, zero-sum Stackelberg games are equivalent to solving for the minimax equilibrium in zero-sum games. So, people usually don't talk about Stackelberg equilibrium in zero-sum games; instead, they talk about minimax equilibrium. Here we mainly use the concept of the leader-follower model in Stackelberg games to represent the sequential adversarial process between one discriminator and multiple generators. Therefore, we respectfully disagree that the name Stackelberg GAN is misleading, since the underlying problem formulation is indeed a zero-sum Stackelberg game.\\n[1] Stackelberg Security Games: Looking Beyond a Decade of Success, 2018.\\n[2] http://coral.ise.lehigh.edu/wp-content/uploads/coralseminar/ipsem/talks/2005_06/scott_bicrit.pdf\\n[3] Sequential Stackelberg Equilibria in Two-Person Games, 1988.\\n\\nQ2 (The second paragraph of review). Suppose that the data generator components have the same network architecture. This would also imply that all data generator components find the same global best solution, in which case the gap would be identical to just using one of those components.\\nA2. We argue that having the same network architecture for each data generator component does not necessarily imply that all data generator components find the same global best solution. Here we provide two reasons for this. (1) Neural networks are highly non-convex: starting from different random initializations, each generator would converge to different solutions even with the same network architecture, as the globally optimal solutions might not be unique. (2) In Appendix D, we show that Stackelberg GAN can learn a mixture of distributions under the same standard assumptions as Goodfellow et al.\\u201914. 
This implies that all generators cannot find the same globally best solution when each generator does not have enough capacity to learn the real data distribution but a mixture of generators does; otherwise, we would have P_{G_{gamma_i}(z)}=P_d for all i, contradicting the condition that \\u201ceach generator does not have enough capacity to learn the real data distribution\\u201d. From (1) and (2), we see that \\u201cdifferent initializations\\u201d and \\u201cgenerator capacity\\u201d are two factors which might prevent generators from finding the same solution.\\nOur main conclusion holds not only for the worst case, but also for practical cases. For example, in Figure 1 we use the same network architecture for all generator components. We do not observe the phenomenon that all data generator components find the same globally best solution, as the reviewer mentioned.\\n\\nQ3 (The third paragraph of review). If we assume a different family of mappings for each component, the convexity violation of the joint data generator is higher than for each component; hence, the gap does not necessarily decrease with more components.\\nA3. We respectfully disagree with the comment. Denote by Delta_i the convexity violation of the i-th generator and let Delta_max=max{Delta_1,\\u2026,Delta_I}. Our result shows that the convexity violation of the joint data generator (i.e., the duality gap) is no larger than Delta_max/I. Since Delta_max is a bounded value, this shows that the gap decreases with more components. Indeed, the convexity violation of the joint data generator, Delta_max/I, is smaller than that of the most non-convex generator, and can even be smaller than that of the most convex generator when I is sufficiently large.\\n\\nQ4 (The fourth paragraph of review). So why do multiple data generator components help in practice, and why does the proposed model outperform single-component GANs and the multi-branch GAN in the experiments?\\nA4. We answer both questions from the game-theoretic perspective in this paper --- when does approximate equilibrium exist. We believe both points of view (optimization and game theory) are worth studying, while we focus on the latter one. We argue that there are strong connections between the two points of view: if approximate Nash equilibrium does not exist, as in the single-component GANs, all optimization methods would suffer from instability and finally fail. In contrast, our study shows that approximate Nash equilibrium exists for multi-component GANs and improves over Arora\\u2019s result. So, our study of Nash equilibrium serves as a necessary condition for the success of GANs.\\n\\nWe are looking forward to your re-evaluation based on our reply. Thanks for your consideration.\"}", "{\"title\": \"Our theory works broadly including some forms of previous multi-generator GANs as special cases.\", \"comment\": \"We thank the reviewer for the valuable comments. We will address all concerns from the reviewer in the following form of Q/A.\\n\\nQ1. (The first paragraph of review) From the algorithm part, I think the algorithm is very similar to (and even simpler than) MAD-GAN and MGAN. Therefore, on the algorithm part, the author may want to address the difference between Stackelberg GAN and MAD-GAN and MGAN. On the experiment part, we need to see more comparison between these three methods. In the current experiment, the MGAN result is very similar to that of the proposed method, and the MAD-GAN result is missing.\\nA1. 
On one hand, the population form of Stackelberg GAN includes some forms of previous multi-generator GANs as special cases. We believe this is a *plus* of our paper, because it implies that our theory works for broader GAN models, providing a unified and improved framework for multi-generator GANs. Note that in our theory, we make no assumption on the capacity and architecture of the discriminator. Thus, our theory even works for more complicated discriminators such as those of MGAN and MAD-GAN, whose theory on equilibrium is missing in their original papers. On the other hand, the empirical losses of Stackelberg GAN and prior GANs are different. Our choice of sampling scheme is flexible, as we claimed in the previous post. Furthermore, MGAN requires shared network parameters among various generators, while Stackelberg GAN enjoys free parameters for each generator. To make the paper clearer, we restate the difference among various models on Page 2 of our revised version (the first bullet).\\nOn the experiment part, we add new experiments on MAD-GAN on CIFAR-10 as the reviewer suggested. We did not find an existing Inception Score of MAD-GAN on CIFAR-10, so we ran it ourselves. Here is a thorough comparison among MGAN, MAD-GAN, and Stackelberg GAN with the same network capacity and 10 generators. A potential reason for the unsatisfactory performance of MAD-GAN is that the method involves a multi-class discriminator with as many as I+1 classes, which leads to an imbalance between real and generated data and unstable training.\\n-----------------------------------------------------------------------------------------\\nModel | Inception Score | Frechet Inception Distance\\n----------------------------------------------------------------------------------------\\nMAD-GAN | 6.67+-0.07 | 34.10\\nMGAN | 7.52+-0.1 | 31.34\\nStackelberg GAN | 7.62+-0.07 | 26.76\\n\\nQ2. (The second paragraph of review) There are many typos in the paper (and appendix). In the conclusion, the sentence \\\"we show that the minimax gap shrinks to eps as the number of generators increases with rate O(1/eps)\\\" is an over-claim, because the authors only proved this under the assumption of concavity of the maximization w.r.t. discriminators.\\nA2. We have tried our best to fix all the typos that we found in the paper and appendix. In particular, we avoid the use of \\u201cconcave closed hull of a set\\u201d by redefining \\\\hat{cl}f:=-\\\\br{cl}(-f). We clarify our use of the subscript in h_i(u_i) by saying \\u201cThe subscript i in h_i indicates that the function h_i is derived from the i-th generator. The argument of h_i should depend on i, so we denote it by u_i. Intuitively, h_i serves as an approximate convexification of -\\\\phi(\\\\gamma_i,\\\\cdot) w.r.t the second argument due to the conjugate operation\\u201d. We also modified the sentence in the conclusion to \\u201cwe show that the minimax gap shrinks to eps as the number of generators increases with rate O(1/eps), when the maximization problem w.r.t. the discriminator is concave\\u201d.\\n\\nQ3. (The third paragraph of review) The authors may want to provide some simple results for the Stackelberg GAN from the perspective of density approximation. Whether the distance defined by the maximization problem is a distance or a divergence. If we exactly minimize that objective function, do we get the target distribution?\\nA3. Thanks for the comment. 
As the reviewer suggested, in Theorem 9 of the revised version we provide new results for Stackelberg GAN from the perspective of density approximation under the standard assumption of Goodfellow\\u201914. Our result shows that Stackelberg GAN can learn a mixture of distributions. This theorem gives a positive answer to the reviewer\\u2019s question about whether minimizing the objective function gets the target distribution. For the question concerning whether the distance defined by the maximization problem is a distance or a divergence, it depends on the choice of function f. For example, when f is the log function, the distance defined by the maximization problem of Stackelberg GAN (i.e., the \\\\tilde{L} in the proof of Theorem 9) is the Jensen-Shannon divergence between the mixture generative distribution and the real distribution.\\n\\nWe are looking forward to a re-evaluation from the reviewer based on our revision.\"}", "{\"title\": \"We do not need to choose N_1=\\u2026=N_I=N/I, although we can as well. The choice is flexible.\", \"comment\": \"We thank AnonReviewer2 for the quick reply! Yes, the difference is in the empirical loss. However, we do not necessarily need to choose N_1=\\u2026=N_I=N/I, although we can as well. We believe the key to your question is the relationship between the population loss and the empirical loss --- the unbiased estimator. Note that by uniform convergence, an unbiased empirical loss asymptotically converges to the population loss. There are multiple ways of sampling which lead to an unbiased empirical loss for the reviewer\\u2019s population loss. Here are three examples: (1) the multinomial distribution with parameter (1/I,\\u2026,1/I) as the reviewer mentioned. Note that even in this case, with high probability N_1=\\u2026=N_I=N/I does not hold. (2) Each generator samples a fixed but unequal number of data points independently, e.g., N_1=1.5N/I, N_2=\\u2026=N_{I-1}=N/I, N_I=0.5N/I. (3) Each generator samples a fixed and equal number of data points independently, i.e., N_1=\\u2026=N_I=N/I. All three sampling schemes are unbiased for the population loss, although N_1=\\u2026=N_I=N/I does not always hold true.\\n\\nThanks again for your question.\"}", "{\"title\": \"The difference is in the empirical ensemble loss\", \"comment\": \"Thanks for the reply!\\n\\nFirst, sorry that in my empirical ensemble losses I mistyped G_{\\\\gamma_i} as G_{\\\\gamma}. Both the sum of losses (ensemble losses) and a loss on output of ensemble have G_{\\\\gamma_i}. After I corrected my typo, I did not see much difference there.\\n\\nIn fact, if we consider the population loss, the sum of losses (ensemble losses) and the loss on output of ensemble are exactly the same. Both of them are E_x f(D_{\\\\theta}(x)) + \\\\frac{1}{I}\\\\sum_{i=1}^I E_z f(1-D_{\\\\theta}(G_{\\\\gamma_i}(z))). The loss on output of ensemble can have different weights on different generators, and your method can do it, too.\\n\\nThe difference is in the empirical ensemble loss, as I wrote in my last post. In the loss on output of ensemble, (N_1, N_2, ..., N_I) is a random vector with multinomial distribution (N, 1/I, 1/I, ..., 1/I). It is not clear to me how, in your sum of losses (ensemble losses), you will choose your (N_1, N_2, ..., N_I). It seems that you choose N_1=\\u2026=N_I=N/I?\"}", "{\"title\": \"Misunderstandings about the ensemble loss by AnonReviewer2\", \"comment\": \"We thank AnonReviewer2 for the comment. 
We believe there are misunderstandings about the loss by the reviewer here. Our ensemble loss is E_x f(D_{\\\\theta}(x)) + \\\\frac{1}{I}\\\\sum_{i=1}^I E_z f(1-D_{\\\\theta}(G_{\\\\gamma_i}(z))). This is totally different from the loss that the reviewer mentioned, as the index i is imposed on the generator parameter \\\\gamma in our loss. Our loss involves optimizing *multiple* generators jointly, while the ensemble loss that the reviewer mentioned only involves learning one generator. Therefore, there is a huge difference between the ensemble loss (as in this paper) and the loss on output of ensemble (as the reviewer mentioned).\\n\\nFurthermore, we do not require that each component must contribute the same number of training samples in the ensemble loss. Rather, we only restrict the *weight* of all generators to be the same. Our analysis focuses on the population form, with which many sampling methods are consistent by the law of large numbers. For example, we allow the generator mixture model with a uniform distribution over all generators. We also allow an empirical ensemble loss \\\\frac{1}{N}\\\\sum_{n=1}^N f(D_{\\\\theta}(x_n)) + \\\\frac{1}{I}\\\\sum_{i=1}^I \\\\frac{1}{N_i}\\\\sum_{n=1}^{N_i} f(1-D_{\\\\theta}(G_{\\\\gamma_i}(z_{i,n}))) with i.i.d. z_{i,n}. We even allow the case that N_1=\\u2026=N_I. So, our model does not have the issue that \\u201cStackelberg GAN is more difficult to prove its convergence in the density approximation sense\\u201d, since the training samples indeed can be i.i.d. sampled from the ensemble model.\\n\\nThanks again for your re-evaluation.\"}", "{\"title\": \"The difference between a sum of losses (ensemble losses) and a loss on output of ensemble is very minor, even negligible\", \"comment\": \"The authors overstate the difference between a sum of losses (ensemble losses) and a loss on output of ensemble. In fact, in terms of the algorithm, this difference is very minor, even negligible.\\n\\nWhen we have a loss on output of ensemble, the loss is:\\n\\\\sum_{n=1}^N f(D_{\\\\theta}(x_n)) + \\\\sum_{i=1}^I \\\\sum_{n=1}^{N_i} f(1-D_{\\\\theta}(G_{\\\\gamma_i}(z_{i,n}))),\\nwhere {x_n}_{n=1}^N are iid samples from the dataset, {G_{\\\\gamma}(z_{n,i}): 1\\\\le i\\\\le I, 1\\\\le n \\\\le N_i} are iid samples from the mixture generative model, and \\\\sum_{i=1}^I N_i = N.\\n\\nWhen we have a sum of losses (ensemble losses), the loss is:\\n\\\\sum_{n=1}^N f(D_{\\\\theta}(x_n)) + \\\\sum_{i=1}^I \\\\sum_{n=1}^{N/I} f(1-D_{\\\\theta}(G_{\\\\gamma_i}(z_{i,n}))),\\nwhere {x_n}_{n=1}^N are iid samples from the dataset, {G_{\\\\gamma}(z_{n,i}): 1\\\\le n \\\\le N/I} are iid samples from the i'th generative model, and all the I generator components contribute N/I samples to the loss equally.\\n\\nTherefore, the only difference is that in a loss on output of ensemble, we are truly sampling from the ensemble model, while in a sum of losses (ensemble losses), we enforce that each component must contribute the same number of training samples.\\n\\nThis difference even makes Stackelberg GAN more difficult to prove its convergence in the density approximation sense, because now its training samples are not iid sampled from the ensemble model.\"}", "{\"title\": \"Interesting view of GANs from game-theory perspective, but algorithmically the Stackelberg GAN is similar to previous multiple-generator GANs\", \"review\": \"This paper proposes the Stackelberg GAN framework with multiple generators in the GAN architecture. 
The architecture is similar to previous multiple-generator GANs (MAD-GAN and MGAN). In fact, it's even simpler in the sense that Stackelberg GAN has a simpler loss function for the discriminator compared with the previous two. The authors prove that the minimax duality gap shrinks as the number of generators increases. And this proof has no assumption on the expressive power of the generators and discriminator. With this proof, the authors argue that because the duality gap shrinks as the number of generators increases, the training of GANs gets more stable.\\n\\nFrom the algorithm part, I think the algorithm is very similar to (and even simpler than) MAD-GAN and MGAN. MAD-GAN and MGAN even proposed some specific losses for the discriminator so that it will encourage different generators to generate different modes in the target distribution. The Stackelberg GAN does not do this, but \\\"partially\\\" achieved the same goal. However, from Figure 9, we see that the simpler the generator is, the more easily different generators will capture different modes. I think that this is due to the simplicity of the discriminator loss. Therefore, on the algorithm part, the author may want to address the difference between Stackelberg GAN and MAD-GAN and MGAN. On the experiment part, we need to see more comparison between these three methods. In the current experiment, the MGAN result is very similar to that of the proposed method, and the MAD-GAN result is missing. Personally, I think that on the CIFAR dataset (or larger datasets), these three methods should have very similar behavior. \\n\\nFrom the theoretical part, the authors derived a bound on the minimax duality gap for the Stackelberg GAN, without the assumption on the expressive power of generators and discriminator. Although the bound may not be practical, these are nice efforts. There are many typos in the paper (and appendix), which made it difficult for me to follow the proofs. For example, \\\"Let clf (bclf) be the convex(concave) closure of f, which is defined as the function whose epigraph (subgraph) is the convex\\n(concave) closed hull of that of function f.\\\" Do we have a concave closed hull of the subgraph of function f? What is the concave closed hull of a set? The usage of sub(sup)-scripts is also very confusing, like in the definition of h_i(u_i). The authors may want to correct typos and improve the presentation. In the conclusion, the authors conclude \\\"We show that the minimax gap shrinks to \\\\eps as the number of generators increases with rate O(1/\\\\eps).\\\" This is an over-claim, because the authors only proved this under the assumption of concavity of the maximization w.r.t. discriminators. \\n\\nFinally, the authors may want to provide some simple results for the Stackelberg GAN from the perspective of density approximation, even assuming infinite capacity of the discriminator set, as other GANs do. Whether the distance defined by the maximization problem is a distance or a divergence. 
If we exactly minimize that objective function, do we get the target distribution?\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"A nice approach for training multiple generators\", \"review\": \"This paper proposes a way of training multiple generators in the GAN setting.\\nWhile the proposed approach is simply to use N generators and form a sum of GAN losses to train the model, the paper carefully presents a theoretical analysis of the duality gap, and shows that as N goes to infinity, the duality gap can shrink to zero.\\nOne can think of this as a usual ensemble approach to increase the model's capacity and performance, but the main difference to the usual ensemble approach is to form a sum of losses (ensemble losses) instead of a loss on output of ensemble.\\nThe paper shows this can be a more effective approach to train a multi-generator architecture, and I believe that this can be an effective approach to capture multi-modal sample distributions.\\nFinally, the paper is well written and well organized.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"The name Stackelberg GAN is misleading as the underlying problem formulation is not a Stackelberg game but (still) a zero-sum game. The argument why more data generators help is not convincing.\", \"review\": \"A Stackelberg competition is a nonzero-sum game where 1) each player has their own objective, and these do not sum up to a constant, and 2) there is an order in which the players interact. The proposed formulation only assumes that the parameters of one player (the data generator) partition into I tuples \\\\gamma_i of parameters, where each tuple parameterizes a different data generator component (e.g., a separate neural network). Further, each of those components is assumed to contribute a term to the game's objective that only depends on the corresponding parameter tuple \\\\gamma_i, and the other player's parameters \\\\theta (e.g., weights of the discriminator). From a game theoretic perspective, this still yields a 2-player zero-sum game where the action space of the data generator is the product space of the I tuple spaces. Hence, I have doubts about the general finding that more data generating components decrease the duality gap.\\n\\nThe gap between a maximin and a minimax solution is determined by the shape of the objective \\\\phi(\\\\gamma,\\\\theta) and is zero, for example, if \\\\phi is (quasi) convex in \\\\gamma=[\\\\gamma_1, ..., \\\\gamma_I], and (quasi) concave in \\\\theta. The authors bound the violation of this property w.r.t. the data generator components' parameters \\\\gamma_i, and argue that this degree of violation is the same for the whole data generator parametrized by \\\\gamma=[\\\\gamma_1, ..., \\\\gamma_I] if the data generator components are from the same family of mappings (e.g., having the same network architecture). 
While this conclusion is true under a worst-case assumption, i.e., the globally maximal possible gap, it would also imply that all data generator components find the same globally best solution, that is, yield the same mapping, in which case the gap would be identical to just using one of those components.\\n\\nIntuitively, the only reason to have multiple data generator components is to learn different mappings such that the joint data generator -- mixing the outputs of the different components -- is more expressive than just a single mapping. If the different mappings only result from the inability to find the globally best solution, a worst-case argument is not very insightful; in this case, one should study the duality gap in the neighborhood of the starting solutions. On the other hand, if we assume a different family of mappings for each component, the convexity violation of the joint data generator is higher than for each component; hence, the gap does not necessarily decrease with more components.\\n\\nSo why do multiple data generator components help in practice, and why does the proposed model outperform single-component GANs and the multi-branch GAN in the experiments? Solving a maximin/minimax problem for highly non-convex-concave functions is challenging; there is an infinity of saddle-point solutions which yield different \\\"performances\\\". The multi-branch GAN can be seen as a model-averaging approach giving more stable results, whereas the proposed GAN seems more of an ensemble approach to stabilize the result. This is speculative, though, and I would encourage the authors to study it in depth; the reasoning in Remark 1 is not convincing to me.\", \"update\": \"I read the revision and stick to my vote. In the discussion, I wasn't able to get my points across, e.g., that bounding the worst-case duality gap is not enough to conclude that the observed duality gap does not grow for multiple locally optimal GANs, where the duality gap is expected to be much smaller. A simple experiment could be to actually measure the duality gap (flip the order of the players and measure the difference of the objectives when starting from the same initialization). If the authors were right, the maximum of those gaps should stay constant when adding more data generators. To justify a Stackelberg setting, the authors may provide an example instantiation that cannot be cast into a standard zero-sum game with a minimax solution. I can't see such an example, but I'm happy to be proven wrong.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
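For readers tracking the duality-gap discussion in the reviews above, here is a minimal sketch of the quantities at stake, reconstructed only from the reviews' own description (the uniform 1/I weighting of the generator terms is an assumption; the paper may normalize the sum differently):

\[ \Phi(\gamma_1, \dots, \gamma_I, \theta) \;=\; \frac{1}{I} \sum_{i=1}^{I} \phi(\gamma_i, \theta) \]

\[ \text{duality gap} \;=\; \min_{\gamma_1, \dots, \gamma_I} \, \max_{\theta} \, \Phi \;-\; \max_{\theta} \, \min_{\gamma_1, \dots, \gamma_I} \, \Phi \;\ge\; 0 \]

Because each term \phi(\gamma_i, \theta) depends only on its own tuple \gamma_i and the shared discriminator parameters \theta, the inner minimization over \gamma decomposes across components, which is exactly why the third review argues the formulation remains a two-player zero-sum game rather than a true Stackelberg game.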
BJx0sjC5FX
RNNs implicitly implement tensor-product representations
[ "R. Thomas McCoy", "Tal Linzen", "Ewan Dunbar", "Paul Smolensky" ]
Recurrent neural networks (RNNs) can learn continuous vector representations of symbolic structures such as sequences and sentences; these representations often exhibit linear regularities (analogies). Such regularities motivate our hypothesis that RNNs that show such regularities implicitly compile symbolic structures into tensor product representations (TPRs; Smolensky, 1990), which additively combine tensor products of vectors representing roles (e.g., sequence positions) and vectors representing fillers (e.g., particular words). To test this hypothesis, we introduce Tensor Product Decomposition Networks (TPDNs), which use TPRs to approximate existing vector representations. We demonstrate using synthetic data that TPDNs can successfully approximate linear and tree-based RNN autoencoder representations, suggesting that these representations exhibit interpretable compositional structure; we explore the settings that lead RNNs to induce such structure-sensitive representations. By contrast, further TPDN experiments show that the representations of four models trained to encode naturally-occurring sentences can be largely approximated with a bag of words, with only marginal improvements from more sophisticated structures. We conclude that TPDNs provide a powerful method for interpreting vector representations, and that standard RNNs can induce compositional sequence representations that are remarkably well approximated by TPRs; at the same time, existing training tasks for sentence representation learning may not be sufficient for inducing robust structural representations.
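To make the abstract's central object concrete, the following is a minimal numpy sketch of a tensor product representation; the tiny vocabulary, the embedding dimensions, and the left-to-right positional role scheme are illustrative assumptions, not the paper's actual settings.

import numpy as np

d_filler, d_role = 8, 4  # hypothetical embedding sizes
rng = np.random.default_rng(0)
fillers = {w: rng.normal(size=d_filler) for w in ("the", "cat", "sat")}  # word vectors
roles = [rng.normal(size=d_role) for _ in range(3)]  # one vector per sequence position

def tpr(sequence):
    # Sum of filler-role outer products, flattened into a single embedding vector.
    return sum(np.outer(fillers[w], roles[i]) for i, w in enumerate(sequence)).ravel()

v = tpr(["the", "cat", "sat"])  # a (d_filler * d_role)-dimensional sequence encoding

Because the binding is additive, exchanging which filler occupies which role changes the encoding linearly, which is the kind of regularity (analogies) the abstract refers to.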
[ "tensor-product representations", "compositionality", "neural network interpretability", "recurrent neural networks" ]
https://openreview.net/pdf?id=BJx0sjC5FX
https://openreview.net/forum?id=BJx0sjC5FX
ICLR.cc/2019/Conference
2019
{ "note_id": [ "r1e9w1OfgE", "H1gY4uvYJE", "SklIQYXqCQ", "HyxC1Km907", "SkeR9OQ50X", "rJl1E7lHTX", "r1eg-zgrT7", "rkgLvpJraX", "S1x9a0e2hX", "S1l7y8esh7", "HJgWEWbPnQ" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544875874392, 1544284209280, 1543285021574, 1543284966307, 1543284885699, 1541894950759, 1541894647993, 1541893470072, 1541308097896, 1541240283164, 1540981032794 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper670/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper670/Authors" ], [ "ICLR.cc/2019/Conference/Paper670/Authors" ], [ "ICLR.cc/2019/Conference/Paper670/Authors" ], [ "ICLR.cc/2019/Conference/Paper670/Authors" ], [ "ICLR.cc/2019/Conference/Paper670/Authors" ], [ "ICLR.cc/2019/Conference/Paper670/Authors" ], [ "ICLR.cc/2019/Conference/Paper670/Authors" ], [ "ICLR.cc/2019/Conference/Paper670/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper670/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper670/AnonReviewer3" ] ], "structured_content_str": [ "{\"metareview\": [\"AR1 seeks the paper to be more standalone and easier to read. As this comment comes from the reviewer who is very experienced in tensor models, it is highly recommended that the authors make further efforts to make the paper easier to follow. AR2 is concerned about the manually crafted role schemes and alignment discrepancy of results between these schemes and RNNs. To this end, the authors hypothesized further reasons as to why this discrepancy occurs. AC encourages authors to make further efforts to clarify this point without overstating the ability of tensors to model RNNs (it would be interesting to see where these schemes and RNN differ). Lastly, AR3 seeks more clarifications on contributions.\", \"While the paper is not ground breaking, it offers some starting point on relating tensors and RNNs. Thus, AC recommends an accept. Kindly note that tensor outer products have been used heavily in computer vision, i.e.:\", \"Higher-Order Occurrence Pooling for Bags-of-Words: Visual Concept Detection by Koniusz et al. (e.g. section 3 considers bi-modal outer tensor product for combining multiple sources: one source can be considered a filter, another as role (similar to Smolensky at al. 1990), e.g. a spatial grid number refining local role of a visual word. This further is extended to multi-modal cases (multiple filter or role modes etc.) )\", \"Multilinear image analysis for facial recognition (e.g. so called tensor-faces) by Vasilescu et al.\", \"Multilinear independent components analysis by Vasilescu et al.\", \"Tensor decompositions for learning latent variable models by Anandkumar et al.\", \"Kindly make connections to these works in your final draft (and to more prior works).\"], \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Interesting work however it lacks connection with modern tensor models.\"}", "{\"title\": \"Interactive demo uploaded\", \"comment\": \"We have created an anonymized webpage with interactive demos to accompany this paper. The page can be found here:\", \"https\": \"//tpdn-iclr.github.io/tpdn-demo/tpr_demo.html\"}", "{\"title\": \"Revisions uploaded\", \"comment\": \"Thank you again for the suggestions. 
We have uploaded a new version of the paper that incorporates the changes discussed in our response.\"}", "{\"title\": \"Revisions uploaded\", \"comment\": \"Thank you again for your comments. We have uploaded a new version of the paper that incorporates the changes discussed in our response.\"}", "{\"title\": \"Revisions uploaded\", \"comment\": \"We have uploaded a revised version of the paper that incorporates the change mentioned above.\"}", "{\"title\": \"Author response\", \"comment\": \"Thank you for the feedback! Here are replies to the concerns you raise:\", \"point_1\": \"We will edit the introduction to make the contributions clearer.\", \"point_2\": \"Some of these details are available in the appendices, and we will add the ones that are not already there. We will also make it clearer in the main text that such information is available in the appendices. \\n\\nWe will also clarify our discussion of the results in Table 2. We do not have a strong hypothesis for why Skip-thought is approximated less well than the other models. For the other models, our conjecture is that the models\\u2019 representations consist of a combination of a bag-of-words representation and some structural information that is occasionally, but not reliably, present as well. This conjecture is consistent with the finding that these representations could be approximated well, though not perfectly, with a bag-of-words role scheme. \\n\\nWe argue that such representations arise because the training tasks for these sentence embedding models do not depend much on the structure of the input; our results in Table 3b indicate that only structure-sensitive training tasks will induce models to learn structured representations. \\n\\nHowever, we will also clarify the other two possible explanation for the results in Table 2, namely that the models could be well-approximated by some role scheme that we did not test, or that the models are using some systematic but non-TPR structural representation.\", \"point_3\": \"Tables 9 and 10 show the actual performance on downstream tasks of TPDNs trained to approximate the sentence embedding models. We did not emphasize these results, however, because we are presenting the TPDN as a tool for analyzing existent models, not as a new architecture for performing tasks of interest. Therefore, the most relevant metrics are ones showing how the TPDN approximates existing models, not how it performs in its own right. For this same reason, we have not tried training the TPDN end-to-end on these specific tasks rather than training it to approximate existing models.\", \"point_4\": \"Yes, we have considered applying the TPDN to other models. \\n\\nFor example, TPDNs might be used to analyze transformer models by seeing whether the representations generated for each word via self-attention can be approximated as tensor product representations based on the structure of the surrounding context. We are further interested in expanding the domain of inquiry to computer vision to see if convolutional neural networks learn structured representations of scenes that can be approximated by tensor product representations. \\n\\nFinally, we hope that, by making our code available on GitHub, we will enable others to use this technique to analyze the models they are interested in.\"}", "{\"title\": \"Author response\", \"comment\": \"Thank you for these comments! Here are replies to the specific concerns you discuss:\", \"point_1\": \"There are two issues raised here. 
The first is the limitation of using handcrafted role schemes. What this paper attempts to do is explicit, discrete model comparisons between different candidate role schemes. We take this to be a necessary first step on the way to automatically exploring the space of logically possible role schemes, and thus \\\"learning\\\" the optimal role scheme, thereby ruling out this kind of omission. \\n\\nHowever, such a project is an ambitious goal, and we feel it is important to establish the basic methodology, and some basic results, first. Figure 3c and Table 3b show cases where handcrafted role schemes have succeeded near-perfectly, serving as a proof of concept that, given the right role scheme (whether it be hand-crafted or learned), TPDNs can reveal striking levels of systematic structure in RNN representations. \\n\\nThe second issue is the possibility that RNNs do use a systematic structural representation whose representational space cannot be approximated with a TPR. We agree that this is a possibility; although TPRs are capable of capturing complex structural relations, they rely upon certain assumptions about the structure of the representational space. RNNs are not constrained in any way that enforces these assumptions - indeed, this fact is partly why we find the successful TPDN approximations so striking in Figure 3c and Table 3b. \\n\\nIn the final version of the paper, we will emphasize the possibility that RNNs may sometimes use non-TPR structural representations.\", \"point_2\": \"\", \"the_mse_is_informative_on_a_relative_level\": \"It allows us to compare role schemes within a model. To allow comparisons across models, we normalize by dividing by the random-vector performance to factor out overall vector magnitude differences across different models. The other metrics besides MSE allow for absolute measurements of performance. We will clarify the contributions of these different metrics.\", \"point_3\": \"We will edit the paper to clarify the three possibilities for why the alignments in Table 2 are not perfect. \\n\\nTwo of the possibilities, as discussed in our response to your first point, are that the RNNs are using some role scheme other than the ones we tested, or that the RNNs are using some structural representation that cannot be approximated with any tensor product representation. \\n\\nHowever, we argue for a third possibility: the representation can be characterized as a combination of a bag-of-words representation, plus some incomplete (not always encoded) structural information. Such a result is consistent with our observation that bag-of-words roles yield a strong but imperfect approximation for the sentence embedding models. We will edit the text to emphasize that this is merely a conjecture and that the other two possibilities must also be considered. \\n\\nFinally, we agree with your comment that these results do not indicate that RNNs *only* learn tensor-product representations, but we had not intended to make that claim (we meant the title to be read as \\u201cRNNs *sometimes* implement tensor-product representations\\u201d).\"}", "{\"title\": \"Author response\", \"comment\": \"Thank you for the feedback. 
We believe it would be difficult to make a paper completely stand-alone, but it is, indeed, not our goal to discuss sentence/sequence embeddings per se (note that the models we use are sentence models, not document models), but, rather, to describe a general analysis method applied to the special case of these models.\\n\\nTo help make the paper understandable with less context, we will integrate a very short description of what we currently refer to as \\\"the standard left-to-right sequence-to-sequence setup\\\" on page 3.\"}", "{\"title\": \"RNNs implicitly implement tensor-product representations\", \"review\": \"This paper is not standalone. A section on the basics of document analysis would have been nice.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"An interesting work that offers a first step in inspecting RNN representations; the experimental results do not fully support the claim\", \"review\": \"The work proposes Tensor Product Decomposition Networks (TPDNs) as a way to uncover the representations learned in recurrent neural networks (RNNs). A TPDN trains a Tensor Product Representation, which additively combines tensor products of role (e.g., sequence position) embeddings and filler (e.g., word) embeddings, to approximate the encoding produced by RNNs. The TPDN as a result sheds light on inspecting and interpreting the representations learned by RNNs. The authors suggest that the structures captured in RNNs are largely compositional and can be well captured by TPRs without recurrence and nonlinearity.\\nPros:\\n1. The paper is mostly clearly written and easy to follow;\\n2. The TPDN offers headway in looking into and interpreting the representations learned in RNNs, which have remained largely incomprehensible;\\n3. The analysis and insight provided in Section 4 are interesting and insightful; in particular, how the training task influences the kinds of structural representations learned.\\nCons:\\n1. The method relies heavily on manually crafted role schemes, as shown in Section 2.1; it is unclear whether the gap in the approximation of TPRs to the encodings learned by RNNs is due to inaccurate role definitions or to RNNs in fact learning more complex structural dependencies that TPRs cannot capture;\\n2. The MSE of the approximation error shown in Table 1 is not informative. How should these numbers be interpreted? Why normalize by dividing by the MSE from training a TPDN on random vectors?\\n3. The alignment between predictions using RNN representations and TPDN approximations shown in Table 2 is far from perfect, which would contradict the claim that RNNs only learn tensor-product representations.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"An interesting paper in general\", \"review\": \"This paper presents an analysis of the structure-modeling abilities of popularly used RNN models by designing Tensor Product Decomposition Networks to approximate the encoder. The results show that the representations exhibit interpretable compositional structure. To provide better understanding, the paper evaluates the performance on synthesized digit-sequence data as well as several sentence-encoding tasks.\\nPros:\\n1. The paper is well-written and easy to follow. 
The design of the TPDN and the corresponding settings (including what a filler is and what roles are included) for the experiments are understandable. It makes a good point at the end of the paper (Section 4) on how these analyses contribute to the further design of RNN models, which seems useful.\\n2. The experiments are extensive and support the claims. Not only synthetic data but also several popularly used datasets and models are evaluated and compared. The addition of an analogy dataset further demonstrates the effect of the TPDN on modeling structural regularities.\\nCons:\\n1. A more detailed and extensive discussion of the contributions of the paper should be included in the introduction to help readers understand the point of investigating TPDNs on RNN models.\\n2. Some details are missing that would help in understanding the construction. For example, on page 4, Evaluation, it is unclear how the TPDN encoder is trained; specifically, which parameters are updated? What is the objective for training? It is also unclear whether the models in Figure 3(c) use a bidirectional, unidirectional, or tree decoder. In Section 3, it would be better to briefly introduce each of the 4 existing models. How the TPDNs are trained for these 4 sentence-encoding models needs to be further illustrated. More reasons should be discussed for the results in Table 2 (why the bag-of-words roles seem to be OK, and why Skip-thought cannot be approximated well).\\n3. It would be better to provide the actual performance (accuracy) given by the TPDN on the 4 existing tasks.\\n4. Further thoughts: have you considered applying this analysis to other models besides RNNs?\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
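As a companion to the reviews above, the following PyTorch sketch shows one plausible reading of the TPDN fitting procedure: learned filler and role embeddings are bound by an outer product, summed over positions, and regressed (here via an assumed linear map and MSE loss) onto frozen RNN encodings. All sizes, the projection layer, and the optimizer settings are assumptions for illustration, not the paper's configuration.

import torch
import torch.nn as nn

V, R, d_f, d_r, d_enc = 1000, 16, 20, 10, 64  # hypothetical sizes

class TPDN(nn.Module):
    def __init__(self):
        super().__init__()
        self.filler = nn.Embedding(V, d_f)        # one vector per symbol
        self.role = nn.Embedding(R, d_r)          # one vector per role (e.g., position)
        self.proj = nn.Linear(d_f * d_r, d_enc)   # map the TPR into the encoder's space

    def forward(self, filler_ids, role_ids):      # both: (batch, seq_len) index tensors
        f, r = self.filler(filler_ids), self.role(role_ids)
        tpr = torch.einsum("bsf,bsr->bfr", f, r)  # sum of outer products over positions
        return self.proj(tpr.flatten(1))

model = TPDN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
# One fitting step against a frozen encoder vector `target` of shape (batch, d_enc):
# loss = ((model(filler_ids, role_ids) - target) ** 2).mean(); loss.backward(); opt.step()

Comparing the fitted approximation error across candidate role schemes (bag of words, left-to-right positions, tree positions) is then the discrete model-comparison exercise the reviews and rebuttal discuss.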
r1GAsjC5Fm
Self-Monitoring Navigation Agent via Auxiliary Progress Estimation
[ "Chih-Yao Ma", "Jiasen Lu", "Zuxuan Wu", "Ghassan AlRegib", "Zsolt Kira", "Richard Socher", "Caiming Xiong" ]
The Vision-and-Language Navigation (VLN) task entails an agent following navigational instructions in photo-realistic unknown environments. This challenging task demands that the agent be aware of which instruction was completed, which instruction is needed next, which way to go, and its navigation progress towards the goal. In this paper, we introduce a self-monitoring agent with two complementary components: (1) a visual-textual co-grounding module to locate the instruction completed in the past, the instruction required for the next action, and the next moving direction from surrounding images and (2) a progress monitor to ensure the grounded instruction correctly reflects the navigation progress. We test our self-monitoring agent on a standard benchmark and analyze our proposed approach through a series of ablation studies that elucidate the contributions of the primary components. Using our proposed method, we set the new state of the art by a significant margin (8% absolute increase in success rate on the unseen test set). Code is available at https://github.com/chihyaoma/selfmonitoring-agent.
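A minimal PyTorch sketch of the progress monitor described above, reconstructed from the abstract and the discussion threads below (a tanh-bounded regression from the agent's state and textual attention onto normalized distance-based progress); the layer sizes, the concatenation, and the exact target definition are assumptions for illustration, not the paper's architecture.

import torch
import torch.nn as nn

class ProgressMonitor(nn.Module):
    def __init__(self, h_dim, instr_len):
        super().__init__()
        self.fc = nn.Linear(h_dim + instr_len, 1)  # hypothetical single-layer head

    def forward(self, h_t, text_attn):
        # h_t: (batch, h_dim) decoder state; text_attn: (batch, instr_len) attention weights
        x = torch.cat([h_t, text_attn], dim=1)
        return torch.tanh(self.fc(x)).squeeze(1)   # progress estimate in (-1, 1)

# Training signal as sketched in the author responses (the exact form is an assumption):
# y_t = (d_0 - d_t) / d_0, where d_t is the distance to the goal at step t, set to 1 when
# within 3 units of the goal; the auxiliary loss sums (p_t - y_t)**2 over steps, masked
# after the episode ends.

Negative outputs then indicate the agent has drifted farther from the goal than its starting point, which is the rebuttal's stated motivation for tanh over sigmoid.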
[ "visual grounding", "textual grounding", "instruction-following", "navigation agent" ]
https://openreview.net/pdf?id=r1GAsjC5Fm
https://openreview.net/forum?id=r1GAsjC5Fm
ICLR.cc/2019/Conference
2019
{ "note_id": [ "SylMd7nMlV", "S1eCDpX114", "HJliJgopAX", "Bklhwhtm07", "BkelrYdXCQ", "SkxO6muXC7", "Bkxytl_7CQ", "SJxOQedmC7", "H1x9gg_70m", "Bkg3NyOmAm", "S1gQk2P70X", "BklgYjvXRm", "rkgCnbU-6Q", "SJeQkmIR3m", "BJee1Q403m", "HJeZ0QS9nm", "SygyyCBKh7", "HyeQF5h_2Q", "Skgq_VrU37", "BJelNhWS2X", "S1g2c3eVn7", "B1l-VpoQ37", "r1lxWit7nX", "HygR_tZW2m", "Hye-BY21nX", "SklYDcv12X", "HJgT5ts0im", "Hkl2ncv0jQ", "rJxkZKSCoQ", "Skejk716om", "H1eXo56ji7", "HJx7k51tsX" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "comment", "official_review", "comment", "official_review", "official_comment", "comment", "official_comment", "official_comment", "official_review", "comment", "official_comment", "comment", "comment", "official_comment", "official_comment", "comment", "comment", "official_comment", "comment", "comment" ], "note_created": [ 1544893290462, 1543613798373, 1543512035137, 1542851683554, 1542846775596, 1542845375523, 1542844535105, 1542844448211, 1542844402366, 1542844212323, 1542843355464, 1542843256311, 1541657013683, 1541460699196, 1541452504166, 1541194697424, 1541131734798, 1541094011313, 1540932722111, 1540852776117, 1540783252288, 1540762920730, 1540754167595, 1540589942445, 1540503865470, 1540483681407, 1540434325056, 1540418228502, 1540409590890, 1540317923433, 1540246171435, 1540057563481 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper669/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper669/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper669/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper669/Authors" ], [ "ICLR.cc/2019/Conference/Paper669/Authors" ], [ "ICLR.cc/2019/Conference/Paper669/Authors" ], [ "ICLR.cc/2019/Conference/Paper669/Authors" ], [ "ICLR.cc/2019/Conference/Paper669/Authors" ], [ "ICLR.cc/2019/Conference/Paper669/Authors" ], [ "ICLR.cc/2019/Conference/Paper669/Authors" ], [ "ICLR.cc/2019/Conference/Paper669/Authors" ], [ "ICLR.cc/2019/Conference/Paper669/Authors" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper669/AnonReviewer3" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper669/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper669/Authors" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper669/Authors" ], [ "ICLR.cc/2019/Conference/Paper669/Authors" ], [ "ICLR.cc/2019/Conference/Paper669/AnonReviewer2" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper669/Authors" ], [ "(anonymous)" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper669/Authors" ], [ "ICLR.cc/2019/Conference/Paper669/Authors" ], [ "(anonymous)" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper669/Authors" ], [ "~Peter_Anderson1" ], [ "(anonymous)" ] ], "structured_content_str": [ "{\"title\": \"meta-review\", \"metareview\": \"The authors have described a navigation method that uses co-grounding between language and vision as well as an explicit self-assessment of progress. The method is used for room 2 room navigation and is tested in unseen environments. On the positive side, the approach is well-analyzed, with multiple ablations and baseline comparisons. The method is interesting and could be a good starting point for a more ambitious grounded language-vision agent. The approach seems to work well and achieves a high score using the metric of successful goal acquisition. 
On the negative side, the method relies on beam search, which is certainly unrealistic for real-world navigation; the evaluation metric is very simple and may be misleading; and the architecture is quite complex, may not scale or survive the test of time, and has little relevance for the greater ML community. There was a long discussion between the authors and the reviewers and other members of the public that resolved many of these points, with the authors being extremely responsive in giving additional results and details, and the reviewers' conclusion is that the paper should be accepted.\", \"recommendation\": \"Accept (Poster)\", \"confidence\": \"4: The area chair is confident but not absolutely certain\"}", "{\"title\": \"Updating score\", \"comment\": \"Most of my questions were answered by the authors. I raise my rating to 6 based on the response and revised paper. However, I am not convinced by the argument for using beam search. I do not believe something unrealistic should be used to get better numbers just because the task is challenging. The authors mention that beam search can be used in robotics but do not provide any reference. I highly doubt the effectiveness of beam search in an imperfect mapping of an environment, with realistic fine-grained motion as compared to discretized motion and a perfect forward model in the simulated environment used. Nevertheless, the proposed model provides better performance even without beam search.\"}", "{\"title\": \"update\", \"comment\": \"Dear authors,\\n\\nThanks for the detailed response! The new revision addresses my presentation concerns and answers my questions, so I've increased my score to a 7. I still think it would be useful to present a slightly more targeted set of ablations in the main paper rather than the kitchen sink in table 2: e.g. just \\n- \\\"co-grounding\\\" / nothing\\n- progress monitor / progress inference / both / neither\\n- best model + data augmentation / best model (no data aug)\\n\\nHaven't proofread the new draft carefully, but \\\"state of the arts\\\" in Table 1 is wrong, so you should do another copyediting pass before the final version if the paper is accepted.\"}", "{\"title\": \"Thanks for reproducing our results\", \"comment\": \"Hi,\\n\\nThanks for reproducing our results using the co-grounding module. \\n\\n1. \\nThe 46% reported in the comment is without data augmentation. Originally, we only use data augmentation for the test server submission, and all ablation study settings are without data augmentation unless specified. Please see the updated ablation study table in our revision for further details on the performance reported. \\n\\n2. Eq. 5 seems to be correct. Can you please elaborate more if you have any concerns? \\n\\n3. The losses across time steps are summed together. Please note that the loss should be zero for samples whose episodes have ended. Also, if the distance to the goal is lower than 3, we set the target y^{pm}_t to 1 (we have clarified this in the revision). \\n\\n4. Yes, it is roughly 10:1 during training.\"}", "{\"title\": \"Please see the updated ablation study table in our revision\", \"comment\": \"Hi,\\n\\n1. \\nAlthough removing visual grounding may seem to produce a slightly higher performance on unseen SR, we chose to use both visual and textual grounding since the training is more stable and the model is less prone to overfitting. \\n\\n2. \\nThe test server result for \\\"without beam search\\\" uses \\\"co-grounding + progress monitor\\\". 
Please kindly see our updated ablation study table in our revision for further details.\"}", "{\"title\": \"Additional results without positional encoding and related work added.\", \"comment\": \"Hi,\\n\\nWe would like to thank the reviewer for the thoughtful and constructive feedback.\\n\\nWe thank the reviewer for bringing the additional literature from related fields to our attention. We have included and discussed these works in the revised paper (please see both the introduction and related work sections for the changes we made).\\n\\n1. Regarding \\u201cimportance of reasoning over completed or next instruction\\u201d:\\n\\nWe rewrote the sentences in the second paragraph of the introduction and tried to make clear why reasoning over both past and present instructions is important and essential. Please see the revised paper for the changes we made. We emphasize that the transition between the past and next parts of the instruction is a soft boundary; in order to determine when to transition and to follow the instruction correctly, the agent is required to keep track of both grounded instructions. \\n\\n2. Regarding \\u201cvisual grounding over visual features\\u201d:\\n\\nIn order to provide a fair comparison with prior art, we chose to use the image feature vector provided directly with this task. Our current visual grounding module performs attention over different parts of the panoramic image and grounds the located instruction to a part of the panoramic image. To further provide fine-grained visual grounding, we agree that it would be interesting to use panoramic images directly as input and perform visual grounding on feature maps or object-level bounding boxes.\\n\\n3. Regarding \\u201ceffect of positional encoding\\u201d:\\n\\nIn our early experiments, we found that, although removing positional encoding can achieve better results on val-seen, the agent overfits quickly on val-unseen. We believe that the agent\\u2019s ability to generalize to unseen environments is more important than achieving good results on val-seen. Thus, we use positional encoding for the ablation study and to produce the final result.\\n-----------------------------------------------------------------------------------------------------------------------------------\\n\\t Val-seen\\t \\t Val-unseen\\n NE\\u2193 SR\\u2191 OSR\\u2191 SPL\\u2191 NE\\u2193 SR\\u2191 OSR\\u2191 SPL\\u2191\\n-----------------------------------------------------------------------------------------------------------------------------------\\nOurs (No PE) \\t 3.37 0.69 0.78 0.61\\t 6.04 0.42 0.55 0.30\\nOurs\\t\\t\\t 3.72 0.63 0.75 0.56 5.98 0.44 0.58 0.30\\n-----------------------------------------------------------------------------------------------------------------------------------\\n\\n4. Regarding \\u201cwhat if the orderings of the instruction and actions are inconsistent\\u201d:\\n\\nThe Room-to-Room dataset comes with the ground-truth starting location, goal location, and the instruction associated with it. The ground-truth trajectory is computed as the shortest path on the navigational graph from the starting location to the goal, and the quality of the given instructions is verified by humans, who achieved an 86% success rate on the test set. From our own observation, the ordering of actions is consistent with the ordering of instructions. If this were not the case, our current language grounding mechanism may still be applicable, since it can represent an arbitrary weighting over the sentence. 
Similarly, since progress monitoring is a learned function over this, it could still learn to estimate progress. However, note that our assessment depends on how different the orders of instructions and actions are. If the difference is small, our agent is very likely to be able to recover from an incorrect instruction. We believe this can be an interesting direction for future work.\"}", "{\"title\": \"(Cont\\u2019d)\", \"comment\": \"11. Regarding \\u201cthe existence of non-attentive models\\u201d:\\n\\nTo the best of our knowledge, all existing methods use attention models on the VLN task, but we have shown in Figure 3 that the baseline method (using attention for textual grounding) was not able to successfully track the instruction. Our proposed progress monitor made this possible and demonstrated superior performance across different metrics.\"}", "{\"title\": \"(Cont\\u2019d)\", \"comment\": \"5. Regarding \\u201cusing beam search without auxiliary objective\\u201d:\\n\\nYes, and we have provided the result of using beam search with the co-grounding model (without progress monitor) in the updated ablation study table as shown below. We have also included this updated ablation study table in the revision.\\n\\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------\\n\\t\\t\\t\\t Inference Mode\\t\\t\\t Validation-Seen Validation-Unseen\\n\\t Co-\\t Progress Greedy Progress Beam\\t Data \\t\\n\\t# Grounding Monitor Decoding Inference Search\\tAug.\\tNE\\u2193 SR\\u2191 OSR\\u2191 SPL\\u2191 NE\\u2193 SR\\u2191 OSR\\u2191 SPL\\u2191\\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------\\nBaseline \\t\\t\\t\\t\\t\\t\\t\\t 4.36 0.54 0.68 - 7.22 0.27 0.39 -\\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------\\n\\t 1 \\u2714 \\t\\t\\t \\u2714\\t\\t\\t\\t 3.65 0.65 0.75 0.56 6.07 \\t0.42 0.57 0.28\\nOurs\\t2 \\u2714 \\t \\u2714 \\t\\t \\u2714\\t\\t\\t\\t 3.72 0.63 0.75 0.56 5.98 \\t0.44 0.58 0.30\\n\\t 3 \\u2714\\t \\u2714\\t\\t \\u2714\\t\\t\\t \\u2714 \\t 3.22 0.67 0.78 0.58 5.52 0.45 0.56 0.32\\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------\\nOurs\\t4 \\u2714 \\t \\u2714 \\t\\t\\t \\u2714\\t\\t\\t 3.56 0.65 0.75 0.58 5.89 \\t0.46 0.60 0.32\\n\\t 5 \\u2714 \\t \\u2714 \\t\\t\\t \\u2714\\t\\t \\u2714\\t 3.18 0.68 0.77 0.58 5.41 \\t0.47 0.59 0.34\\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------\\n\\t 6 \\u2714 \\t \\t\\t\\t\\t \\u2714\\t\\t 3.66 0.66 0.76 0.62 5.70 \\t0.49 0.68 0.42\\nOurs\\t7 \\u2714 \\t \\u2714 \\t\\t\\t\\t \\u2714\\t\\t 3.23 0.70 0.78 0.66 5.04 \\t0.57 0.70 0.51\\n\\t 8 \\u2714 \\t \\u2714 \\t\\t\\t\\t \\u2714\\t \\u2714\\t 3.04 0.71 0.78 0.67 4.62 \\t0.58 0.68 0.52\\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------\\n\\n6. 
Regarding \\u201cif contribution stacks with the pragmatic inference\\u201d:\\n\\nWe believe that the proposed method would stack with the Pragmatic Inference in Fried et al with some performance improvement since the underlying methods for \\u201cranking\\u201d the candidate routes are different. Pragmatic Inference ranks the routes by learning a mapping from a complete sequence of visual inputs to textual output, whereas the Progress Monitor learns a mapping from incomplete visual-textual grounding output and actions to distance. We expect there will be performance improvement by further using pragmatic inference since these two methods are orthogonal. \\n\\n7. Regarding \\u201cadding SPL metric\\u201d:\\n\\nWe completely agree with the reviewers that reporting SPL will be beneficial for future research work along this direction. As suggested, the new SPL metric is added along with ALL settings in the updated ablation study table and the original table 1 for comparing with state of the arts as well.\\n\\n8. Regarding the comments in MISCELLANEOUS:\\n\\nWe thank the reviewer for paying extra attention to the details of the paper. We have revised the paper according to the suggestions in the MISCELLANEOUS section and clarified some confusion by rephrasing some sentences. We now provide answers to some of the questions asked. \\n\\n9. Regarding \\u201cthe position of grounded instruction can follow past and future instructions\\u201d:\\n\\nWe meant to indicate that (also as we demonstrated from qualitative results), the positions of grounded instructions at each step reflects the action that is required at this step and the action that was completed from last or previous steps (see Figure 5 and 6 for examples). \\nAlso, to the best of our knowledge, the order of the instruction will be the same as the order of actions required from the agent because ground-truth actions are computed from the shortest path on the navigation graph. \\n\\n10. Regarding \\u201cfor empirical reasons\\u201d:\\n\\nWe empirically found that using concatenation leads to slightly higher performance and stable training over using element-wise addition. We have made the change in the revision to make this clear.\"}", "{\"title\": \"Results for sigmoid, ablation study table updated, SPL metric included.\", \"comment\": \"Hi,\\n\\nWe would like to thank the reviewer for their thoughtful and constructive feedback.\\n\\n1. Regarding \\u201cpresentation and naming\\u201d:\\n\\nWe thank the reviewer for pointing out the potential issue regarding the naming of the paper. We agree with the reviewer and we have been trying our best to find a better name. For now, one of the options that we come up with is \\u201cSelf-Monitoring Navigation Agent via Auxiliary Progress Estimation\\u201d. We hope this new title is more clear and suitable for the work. If the reviewers have other suggestions, please kindly let us know. \\n\\n2. Regarding \\u201cthe definition of distance\\u201d:\\n\\nThe distance is defined in units of length the same as the simulator. We also agree that using the number of steps is also a reasonable approach, and it would be interesting to explore in the future. We have clarified the definition in the revision. \\n\\n3. Regarding \\u201dusing sigmoid as opposed to tanh\\u201d:\\n\\nWe have found that using sigmoid performs similarly to using the tanh function. 
The results on different metrics are shown below:\\n-----------------------------------------------------------------------------------------------------------------------------------\\n \\t Val-seen\\t \\t Val-unseen\\n NE\\u2193 SR\\u2191 OSR\\u2191 SPL\\u2191 NE\\u2193 SR\\u2191 OSR\\u2191 SPL\\u2191\\n-----------------------------------------------------------------------------------------------------------------------------------\\nOurs (Tanh)\\t\\t 3.72 0.63 0.75 0.56 5.98 0.44 0.58 0.30\\nOurs (Sigmoid) 3.72 0.64 0.72 0.59\\t 5.92 0.44 0.56 0.33\\n-----------------------------------------------------------------------------------------------------------------------------------\\n\\nThe main incentive for using tanh is to allow the agent to wander around when its distance to the goal is larger than the original starting distance. As a result, this allows the progress monitor to output values lower than 0, indicating that the agent has lost track of the textual grounding. \\n\\nWe simply normalize the output of the progress monitor to between 0 and 1 before combining the score for beam search. We have revised the paper to clarify this accordingly. \\n\\n4. Regarding \u201cinstructions with various lengths\u201d:\\n\\nWe use zero-padding to handle instructions with various lengths. We have also explored ideas like using interpolation to upsample the attention weights of short instructions to a fixed length of 80, but it produces a similar performance as without interpolation.\\nGenerally, we have observed that validation samples with longer instructions are more likely to fail, but this may be due to the fact that the required total number of steps for these samples is larger. Hence, the agent is prone to errors, since it needs to predict actions correctly for more steps.\"}", "{\"title\": \"(Cont\\u2019d)\", \"comment\": \"If the goal is given, then appearance features can inform progress, both immediately near the goal and also contextually (e.g. rooms that tend to be near the goal or tend to contain the object in the goal). In a scenario where the instruction only describes the goal rather than the path to the goal, the agent will require a map or positions to estimate the progress using the proposed progress monitor. It would be interesting to see the progress monitor combined with the semantic maps proposed recently in [1], [2], or [3], where the positions on the semantic map are also associated with the progress prediction conditioned on the given instruction. The agent can be constrained to select directions whose image representation is closest to the expected image feature representation extracted from the semantic map, and use the associated progress estimate as an additional indicator for action selection.\\n\\n[1] Gordon, Daniel, et al. \\\"IQA: Visual question answering in interactive environments.\\\" CVPR. 2018.\\n[2] Savinov, Nikolay, Alexey Dosovitskiy, and Vladlen Koltun. \\\"Semi-parametric topological memory for navigation.\\\" ICLR (2018).\\n[3] Walter, Matthew R., et al. 
\\\"A framework for learning semantic maps from grounded natural language descriptions.\\\" The International Journal of Robotics Research 33.9 (2014): 1167-1190.\"}", "{\"title\": \"(Cont\\u2019d)\", \"comment\": \"------------------------------------------------------------------------------------------------------------------------------------------------------------\\n\\t Val-seen\\t \\t Val-unseen\\t\\t Test\\n NE\\u2193 SR\\u2191 OSR\\u2191 SPL\\u2191\\t NE\\u2193 SR\\u2191 OSR\\u2191 SPL\\u2191 NE\\u2193 SR\\u2191 OSR\\u2191 SPL\\u2191\\n------------------------------------------------------------------------------------------------------------------------------------------------------------\\nStudent-forcing\\t\\t 6.01 0.39 0.53 -\\t 7.81 0.22 0.28 -\\t 7.85 0.20 0.27 0.18\\nRPA\\t\\t\\t 5.56 0.43 0.53 -\\t 7.65 0.25 0.32 -\\t 7.53 0.25 0.33 0.23\\nSpeaker-Follower\\t\\t 3.36 0.66 0.74 -\\t 6.62 0.36 0.45 -\\t 6.62 0.35 0.44 0.28\\n------------------------------------------------------------------------------------------------------------------------------------------------------------\\nOurs (Greedy Decoding)\\t 3.22 0.67 0.78 0.58\\t 5.52 0.45 0.56 0.32\\t 5.99 0.43 0.55 0.32\\nOurs (Progress Inference)\\t 3.18 0.68 0.77 0.58\\t 5.41 0.47 0.59 0.34\\t 5.67 0.48 0.59 0.35\\n------------------------------------------------------------------------------------------------------------------------------------------------------------\\n\\nIn addition to the comparison without beam search provided above, we would like to elaborate on the reason beam search is important given the research progress in this direction. \\n\\nAdmittedly, beam search exhaustively searches a much larger space so that the agent can better decide which direction to go and when to stop. The VLN task along with the R2R dataset was recently introduced less than 1 year ago with a more than 60% SR gap between the best-known model and human performance. Each work including ours has step by step brought this gap down to 25%. We agree that in the long-term, ideally, the common goal of the research community is to develop an agent that achieves a high success rate while maintaining a low trajectory length. We argue that given the complexity of the VLN task which requires the agent to simultaneously achieve visual grounding, textual reasoning, temporal memorization/reasoning, and intelligent action selection to navigate, the need to relax the task along multiple directions in order to make progress is important and essential. There is no current best model for both metrics (SR and SPL), and beam search typically differentiates the two regimes. Also, whether the beam search is realistic or not depends on the application. For example, in robotics, it is not atypical to have some exploration and/or mapping of the environment as well, after which beam search can be utilized. In fairness and future comparison, we have provided SR and SPL metrics for our proposed method and all settings in the ablation study table.\\n\\n3. Regarding \\u201cOSR differences in the submission and OpenReview comment\\u201d:\\n\\nThe OSR were 0.96 and 0.97 respectively due to the fact that, when submitting the results to the test server, it is strictly required that all submissions need to include all locations that the agent traversed. Since beam search explores the environment, the trajectory length is usually significantly higher. The chance of passing/reaching the goal is also higher, hence the OSR is close to 1. 
In order to be consistent with how existing work reports performance (see the recently updated Speaker-Follower paper for example), we follow the same convention: when using beam search, only the performance on the test set includes all viewpoints traversed. The performance reported on the validation set uses only the highest-ranked trajectory after beam search.\\n\\n4. Regarding \u201cscenarios of instructions describe the goal, not the path\u201d:\\n\\nOur proposed agent learns to infer and leverage the progress monitor to constrain and regularize the textual grounding module. We believe that the high-level concept of the progress monitor will work as long as inferring the progress made towards the goal can be done for the given task, i.e. there is some information in the grounding or visual input to accurately estimate it. Using the textual attention distribution is just one instantiation that we explore for leveraging the progress monitor on the VLN task.\"}", "{\"title\": \"SOTA and ablation study tables updated, SPL metric included, comparison without beam search added\", \"comment\": \"Hi,\\n\\nWe would like to thank the reviewer for the thoughtful and constructive feedback.\\n\\n1. Regarding \u201cconfusion regarding the result of beam search\u201d:\\n\\nThe updated ablation study table is shown below, and the same table has been added to the revised paper. In this table, we show the performance improvement of each component with different inference modes (which are described in the revised paper). We have also added the recently introduced SPL metric under all settings to the table. From the results, we can again see that the proposed components improved the performance across different evaluation metrics. \\n\\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------\\n\\t\\t\\t\\t Inference Mode\\t\\t\\t Validation-Seen Validation-Unseen\\n\\t Co-\\t Progress Greedy Progress Beam\\t Data \\t\\n\\t# Grounding Monitor Decoding Inference Search\\tAug.\\tNE\\u2193 SR\\u2191 OSR\\u2191 SPL\\u2191 NE\\u2193 SR\\u2191 OSR\\u2191 SPL\\u2191\\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------\\nBaseline \\t\\t\\t\\t\\t\\t\\t\\t 4.36 0.54 0.68 - 7.22 0.27 0.39 -\\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------\\n\\t 1 \\u2714 \\t\\t\\t \\u2714\\t\\t\\t\\t 3.65 0.65 0.75 0.56 6.07 \\t0.42 0.57 0.28\\nOurs\\t2 \\u2714 \\t \\u2714 \\t\\t \\u2714\\t\\t\\t\\t 3.72 0.63 0.75 0.56 5.98 \\t0.44 0.58 0.30\\n\\t 3 \\u2714\\t \\u2714\\t\\t \\u2714\\t\\t\\t \\u2714 \\t 3.22 0.67 0.78 0.58 5.52 0.45 0.56 0.32\\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------\\nOurs\\t4 \\u2714 \\t \\u2714 \\t\\t\\t \\u2714\\t\\t\\t 3.56 0.65 0.75 0.58 5.89 \\t0.46 0.60 0.32\\n\\t 5 \\u2714 \\t \\u2714 \\t\\t\\t \\u2714\\t\\t \\u2714\\t 3.18 0.68 0.77 0.58 5.41 \\t0.47 0.59 0.34\\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------\\n\\t 6 \\u2714 \\t \\t\\t\\t\\t \\u2714\\t\\t 3.66 0.66 0.76 0.62 5.70 \\t0.49 0.68 0.42\\nOurs\\t7 \\u2714 \\t \\u2714 \\t\\t\\t\\t \\u2714\\t\\t 3.23 0.70 0.78 0.66 5.04 \\t0.57 0.70 0.51\\n\\t 8 \\u2714 \\t \\u2714 \\t\\t\\t\\t \\u2714\\t \\u2714\\t 3.04 0.71 0.78 0.67 4.62 \\t0.58 0.68 0.52\\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------\\n\\n2. Regarding \u201cThe use of beam search and SPL metric\u201d:\\n\\nWe have provided all the results without beam search in the updated ablation study table (see the table above or the revised paper), and the comparison with methods without beam search is shown in the table below (for Speaker-Follower, the Pragmatic Inference, which relies on beam search, is removed for comparison purposes). This table is also included in the revised paper.\"}", "{\"comment\": \"1. It seems that the performance with \\\"Textual only\\\" is 1% beyond the Co-Grounding model. So why not use the \\\"textual only\\\" setting in further experiments?\\n\\n2. Could you give more details on the method that you submitted to the test server for the \\\"Test-Unseen\\\" result? Is it \\\"Co-Grounding\\\" or \\\"Co-Grounding + Progress Monitor\\\"?\\nIf the progress monitor is included, is it still a non-beam-search result? Because the progress monitor is only used in beam search according to your paper.\\n\\nPlease let me know if I have any misunderstanding.\", \"title\": \"Result is not clear\"}", "{\"title\": \"Interesting Approach to Route Instruction Following with Thorough Evaluation\", \"review\": \"The paper considers the problem of following natural language route instructions in an unknown environment given only images. Integral to the proposed (\\\"self-aware\\\") approach is its ability to reason over which aspects of the instruction have been completed, which are to be followed next, which direction to go in next, as well as the agent's current progress. This involves two primary components of the architecture. The first is a visual-textual module that grounds to the completed instruction, the next instruction, and the next direction based upon the visual input. The second is a \\\"progress monitor\\\" that takes the grounded instruction as input and captures the agent's progress towards completing the instruction.\\n\\nSTRENGTHS\\n\\n- The paper describes an interesting approach to reasoning over which aspects of a given instruction have been correctly followed and which aspect to act on next. This takes the form of a visual-textual co-grounding model that identifies the instruction previously completed, the instruction corresponding to the next action, and the subsequent direction in which to move. The inclusion of a \\\"progress monitor\\\" allows the method to reason over whether the navigational progress matches the instruction.\\n\\n- The paper provides a thorough evaluation on a challenging benchmark language understanding dataset. This evaluation includes detailed comparisons to state-of-the-art baselines together with ablation studies to understand the contribution of the different components of the architecture.\\n\\n- The paper is well written and provides a thorough description of the framework with sufficient details to support replication of the results.\\n\\nWEAKNESSES\\n\\n- The paper would benefit from a more compelling argument for the importance of reasoning over which aspects of the instruction have been completed vs. 
which to act on next.\\n\\n- While the paper emphasizes the use of images, the visual grounding reasons over pre-extracted visual features.\\n\\n- The paper incorrectly states that existing methods for language understanding require an explicit representation of the target. Several existing methods do not have this requirement. For example, Matuszek et al., 2012 parse free-form language into a formal logic representation for a downstream controller that interprets these instructions in unknown environments. Meanwhile, Duvallet et al., 2014 and Hemachandra et al., 2015 exploit language (together with vision and LIDAR) to learn a distribution over the unknown environment that guides grounding. Meanwhile, Mei et al., 2016 reason only over natural language text and parsed images, without knowledge of the environment or an explicit representation of the goal.\\n\\n- C. Matuszek, E. Herbst, L. Zettlemoyer, and D. Fox, \\u201cLearning to parse natural language commands to a robot control system,\\u201d in Proceedings of the International Symposium on Experimental Robotics (ISER), 2012.\\n\\n- S. Hemachandra, F. Duvallet, T. M. Howard, N. Roy, A. Stentz, and M. R. Walter, \\u201cLearning models for following natural language directions in unknown environments,\\u201d in Proc. IEEE Int\\u2019l Conf. on Robotics and Automation (ICRA), 2015\\n\\n- F. Duvallet, M. R. Walter, T. Howard, S. Hemachandra, J. Oh, S. Teller, N. Roy, and A. Stentz, \\u201cInferring maps and behaviors\\nfrom natural language instructions,\\u201d in Proceedings of the International Symposium on Experimental Robotics (ISER), 2014.\\n\\n- While it's not a neural approach, the work of Arkin et al., 2017 reasons over the entire instruction history when deciding on actions (through a statistical symbol grounding formulation).\\n\\n- J. Arkin, M. Walter, A. Boteanu, M. Napoli, H. Biggie, H. Kress-Gazit, and T. Howard. \\\"Contextual Awareness: Understanding Monologic Natural Language Instructions for Autonomous Robots,\\\" In Proceedings of the IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), 2017\\n\\n- The paper misses the large body of literature on grounded language acquisition for robotics.\\n\\nQUESTIONS\\n\\n- What is the effect of using positional encoding for textual grounding as opposed to standard alignment methods such as those used by Mei et al., 2016?\\n\\n- Perhaps I missed it, but what happens if instructions are specified in such a way that their ordering is not consistent with the correct action ordering (e.g., with corrections interjected)?\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"comment\": \"Hi,\\n\\nThanks a lot for your responses. My reproduction of the co-grounding model (without the progress monitor) achieved performance comparable to that reported. However, training the progress monitor seems pretty hard. Mind answering a few more questions?\\n\\n- Did you include data augmentation in the reported 46% for co-grounding + progress monitor?\\n\\n- Any chance you could verify Eq. 5?\\n\\n- It seems from Eq. 6 that you used a training loss equal to the sum of the losses over a trajectory. Did you consider using a per-step loss? Did you take the average loss across the batch?\\n\\n- The scales of the co-grounding loss and the progress-monitor loss are roughly 10:1. 
Is this expected for training?\", \"title\": \"Training Progress Monitor\"}", "{\"title\": \"Good idea, unclear results\", \"review\": \"This submission introduces a new method for vision+language navigation which tracks progress on the instruction using a progress monitor and a visual-textual co-grounding module. The method is shown to perform well on a standard benchmark. Ablation tests indicate the importance of each component of the model. Qualitative examples show that the proposed method attends to different parts of the instruction as the agent moves.\\n\\nHere are some comments/questions:\\n\\n- I like the underlying idea behind the method. The manuscript is written well for most parts.\\n\\n- The qualitative examples and Figure 2 are really helpful in understanding the reasons behind the improved performance.\\n\\n- There is a lot of confusion regarding the use of beam search. It's unclear from the current manuscript which results are with and without beam search. It seems like beam search was added from Ours 1 to Ours 2 in Table 2. It's not clear which rows involve beam search in Table 1. Some concerns about beam width were raised in the comments, which I agree with. Please modify the submission to clearly indicate the use of beam search for each result and specify the beam width.\\n\\n- The use of beam search seems unrealistic to me, as I cannot think of any way a navigational model using beam search can be transferred or applied to the real world. I understand that one of the baselines uses beam search, so it's fair for performance comparison purposes, but could you provide any justification of how it might be useful in the real world? If there's no reasonable justification, could you also provide all the results (along with the SPL metric) without beam search, including ablations, comparing only with methods without beam search?\\n\\n- I do not understand why the OSR in the submission is 0.64 and 0.70 for Speaker-Follower and the proposed method, and 0.96 and 0.97 in the comments.\\n\\n- It seems like the proposed method is tailored for the VLN task. In many real-world scenarios, an agent might be given an instruction which only describes the goal (such as in Chaplot et al. 2017 and Hermann et al. 2017) and not the path to the goal. Could the authors provide their thoughts on whether the proposed method would work well for such instructions? What would the progress monitor and textual attention distribution learn in such a scenario?\\n\\n- Due to confusion about results and concerns about beam search, I give a rating of 5. I am willing to increase the rating if the authors address the above concerns.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Only the progress monitor uses beam search during inference\", \"comment\": \"Hi,\\n\\nWe are sorry if there is any confusion regarding Table 2 in the paper. As stated in Sec. 2.3 (\\u201cProgress Monitor\\u201d) of the paper, during inference we use beam search with the progress monitor. Therefore, the 1st row of our proposed method, with only co-grounding, does not use beam search; it outperformed the baseline with panoramic action space by 15% on validation-unseen SR. We hope that the ablation study table above clarifies this. \\n\\nIn the comment below, we have shown that, with beam size 5, our proposed method already achieves state-of-the-art performance. 
By further increasing beam size, the performance gradually increases until it saturates.\\n\\nWith panoramic action space, many of the beams are actually empty when using a very large beam size due to limited navigable directions per viewpoints. Thus, increasing the beam size to a larger number will not necessarily help if the competing between beams already provides good selections (we achieved this by leveraging the progress monitor). Nonetheless, we provide the result with a beam size 40 as per requested. \\n\\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------\\n\\t\\t\\tCo-Grounding\\t Progress Data\\t\\t\\t Beam \\t \\t\\tValidation-Seen \\t Validation-Unseen\\n\\t\\t # Visual\\tTextual Monitor Augmentation\\tSearch (size) \\tNE\\u2193 SR\\u2191 OSR\\u2191 NE\\u2193 SR\\u2191 OSR\\u2191\\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------\\nOurs\\t\\t \\u2714 \\t \\u2714\\t\\t\\u2714 \\t\\t \\u2714 \\t \\t\\t 40\\t\\t \\t\\t3.13 0.70 0.77 \\t 4.51 0.58 0.68\\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------\"}", "{\"comment\": \"The 5th row reported above clearly corresponds to 2nd row of Table 2 in your paper:\\n1 3.65 0.65 0.75 6.07 0.42 0.57\\n2 3.23 0.70 0.78 5.04 0.57 0.70\\n3 3.04 0.71 0.78 4.62 0.58 0.68\\n\\nSo all the reported numbers in Table 2 use beam search. However, 1st row in Table 2 matches entirely with 3rd row reported above (has no beam search), which is contradicting and confusing. \\n\\nIn the table below, you claimed the proposed approach is better than Speaker-Follower with or without beam search. However, the following comparison was also misleading, as you used beam size 15. Why can't you adopt the same beam size and compare? On the other hand, the 61% SR is 4% boost compared to no data-augmentation counterpart in Table 1 of paper, while on validation-unseen, the gap is merely 1%, any idea why?\\n\\n\\t\\t\\t\\t\\t\\t\\t\\t\\twith beam search\\n--------------------------------------------------------------------------------------------------------------------------------------\\nSpeaker-Follower\\t\\t \\t 1257.38 4.87 0.53 0.96 \\t0.01 \\nOurs\\t\\t\\t \\t\\t\\t 373.09 4.48 0.61 0.97 \\t0.02\", \"title\": \"Contradictions of statistics reported above\"}", "{\"title\": \"Zero-padding for various lengths of instructions\", \"comment\": \"Hi,\\n\\nThanks for the comment. \\n\\nZero-padding is exactly how we handled it for various lengths of instructions. We have also explored ideas like using interpolation to upsample the attention weights of short instructions to a fixed length of 80, but it produces a similar performance as without interpolation.\"}", "{\"title\": \"Difference in beam size and their performance\", \"comment\": \"Hi,\\n\\nThanks for the opportunity to further clarify our usage of a smaller beam size (15 as opposed to 40).\\n\\nWe do not claim that using a smaller number of beams is one of the major contributions. It was a nice side effect that resulted from our progress monitor, where we can evaluate the partial and unfinished candidate routes during beam search. As a result, we are able to maintain a lower number of beams but still achieve state-of-the-art success rate. 
\\n\\nBelow are the results with different beam size for reference. \\n\\n\\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\\n\\t\\t\\tCo-Grounding\\t Progress Data\\t\\t\\t Beam \\t \\t\\tValidation-Seen \\t Validation-Unseen\\n\\t\\t # Visual\\tTextual Monitor Augmentation\\tSearch (size) \\tlength\\u2193 NE\\u2193 SR\\u2191 OSR\\u2191 \\tlength\\u2193 NE\\u2193 SR\\u2191 OSR\\u2191\\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\\n\\t\\t 1 \\u2714 \\t \\u2714\\t\\t\\u2714 \\t\\t \\u2714 \\t \\t\\t 5\\t\\t \\t\\t159.19\\t3.03 0.71 0.79 \\t 168.13\\t 4.77 0.55 0.68\\nOurs\\t 2 \\u2714 \\t \\u2714\\t\\t\\u2714 \\t\\t \\u2714 \\t \\t\\t 10\\t\\t \\t\\t271.51\\t3.11 0.71 0.78 \\t 277.13\\t 4.64 0.57 0.68\\n\\t\\t 3 \\u2714 \\t \\u2714\\t\\t\\u2714 \\t\\t \\u2714 \\t \\t\\t 15\\t\\t \\t\\t355.13\\t3.04 0.71 0.78 \\t 360.46\\t 4.62 0.58 0.68\\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\"}", "{\"title\": \"Review\", \"review\": \"This paper describes a model for vision-and-language navigation. The proposed\\nmodel adds two components to the baseline model proposed by Fried et al. (2018):\\n\\n- a panoramic visual attention (referred to in this paper as \\\"visual--textual\\n co-grounding\\\"), in which the full scene around the agent's current position is\\n attended to prior to selecting a direction to follow\\n\\n- an auxiliary \\\"progress monitoring\\\" loss which encourages the agent to to\\n produce textual attentions from which the distance to the goal can be directly\\n inferred\\n\\nThe two components combine to give state-of-the-art results on the Room2Room\", \"dataset\": \"small improvements over existing approaches on the \\\"-seen\\\" evaluation\\nset and larger improvements on the \\\"-unseen\\\" evaluation sets. These improvements\\nalso stack with the data-augmentation approach of Fried et al.\\n\\nI think this is a reasonable submission and should probably be accepted. However, I\\nhave some concerns about presentation and a number of specific questions about\\nmodel implementation and evaluation.\\n\\nPRESENTATION AND NAMING\", \"first_off\": \"I implore the authors to find some descriptor other than \\\"self-aware\\\"\\nfor the proposed model. \\\"Self-aware\\\" is an imprecise description of the agent in\\nthis paper---the agent is specifically \\\"aware\\\" of its visual surroundings and\\nits distance from the goal, neither of which is meaningfully an aspect of\\n\\\"self\\\". Moreover, self-awareness means something quite different in adjacent\\nareas of cognitive science and philosophy; overloading the term in the specific\\n(and comparatively mundane) way used here creates confusion. See section 3.4 of\", \"https\": \"//arxiv.org/abs/1807.03341 for broader discussion. Perhaps something\\nlike \\\"visual / temporal context-sensitivity\\\" to describe what's new here? 
A bit\\nclunky, but I think it makes the contributions of this work much clearer.\\n\\nAs suggested in the summary above, I also think \\\"visual--textual co-attention\\\"\\nis also an unhelpfully vague description of this aspect of the contribution. The\\ntextual attention mechanism used in this paper is the same as in all previous\\nwork on the task. Representations of language don't even interact with the\\nvisual attention mechanism except by way of the hidden state, and the salient\\nnew feature of the visual attention is the fact that it considers the full\\npanoramic context before choosing a direction.\\n\\nMODELING QUESTIONS\\n\\n- p4: $y_t^{pm}$ is defined as the \\\"normalized distance from the current\\n viewpoint to the goal\\\". Is this distance in units of length (as defined by the\\n simulator) or units of time (i.e. the number of discrete \\\"steps\\\" needed to\\n reach the goal)?\\n\\n The authors have already clarified on OpenReview that the progress monitor\\n objective uses an MSE loss rather than a likelihood loss. Do I understand\\n correctly that ground-truth distances are in [0, 1] but model predictions are\\n in [-1, 1]? Why not use a sigmoid? Also, how does scoring beam-search\\n candidates as $p_t^{pm} \\\\times p_{k,t}$ work if $p_t^{pm}$ can flip the sign?\\n\\n- The input to the progress monitor is formed by concatenating the attention\\n vector $\\\\alpha_t$ to a vector of state features, and then multiplying by a\\n fixed weight matrix. How is this possible? The size of $\\\\alpha_t$ varies\\n depending on the length of the instruction sequence. Are attentions padded out\\n to the length of the longest instruction in the training set? If so, how can\\n the model learn when it's reached the end of a short instruction sequence?\\n What would happen if the agent encountered a sequence that was too long?\\n\\nEVALUATION QUESTIONS\\n\\n- The progress monitor is used both as an auxiliary training objective and as a\\n beam search heuristic. Is it possible to disentangle these two contributions?\\n (E.g. by ignoring the scores during beam search, or by doing augmented beam\\n search in a model that was trained without the auxiliary objective.)\\n\\n- Not critical, but it would be nice to know if the contributions here stack\\n with the pragmatic inference procedure in Fried et al.\\n\\n- While, as pointed out on OpenReview, it is not required to include SPL\\n evaluations, I think it would be informative to do so---the preliminary\\n results with no beam search look good!\\n\\nMISCELLANEOUS\", \"p1\": \"\\\"...smoothly\\\" What does \\\"smoothly\\\" mean in this context?\", \"p2\": \"\\\"the position of grounded instruction can follow past and future\\n instructions\\\". Is the claim here that if instructions are of the form \\\"ACB\\\"\\n and the agent is supposed to do \\\"ABC\\\", that the proposed model will execute\\n these instructions successfully and the baseline will not? 
This claim does\\n not appear to be evaluated anywhere in the body of the paper.\", \"p4\": \"\\\"for empirical reasons\\\" What does this mean?\", \"p5\": \"\\\"Intuitively, an instruction-following agent is required...\\\" The existence\\n of non-attentive models that do reasonably well at these\\n instruction-following tasks suggest that this is not actually a requirement.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"comment\": \"thanks for the ablation study.\\n\\nI notice that in your paper the beam size is 15, while the beam size used in the speaker-follower model is 40. Playing with the beam size can influence the performance and the trajectory lengths (shorter lengths as you said). But it might not be appropriate to claim this as the contribution of your work.\", \"title\": \"Beam size\"}", "{\"title\": \"Additional results for ablation study\", \"comment\": \"Hi,\\n\\nThank you for the suggestions on ablation study. Below are the results as requested. We replaced the soft attention on visual or textual inputs with a simple mean pooling, e.g., only visual grounding means we simply use mean-pooling on textual input, and vice versa. \\n\\n\\n------------------------------------------------------------------------------------------------------------------------------------------------\\n\\t\\t\\t Co-Grounding\\t Progress Beam \\t Validataion-Seen \\t Validation-Unseen\\n\\t\\t # Visual\\tTextual Monitor Search \\t NE\\u2193 SR\\u2191 OSR\\u2191 \\tNE\\u2193\\t SR\\u2191 OSR\\u2191\\n------------------------------------------------------------------------------------------------------------------------------------------------\\nBaseline \\t\\t\\t\\t\\t\\t\\t\\t\\t\\t 4.36 0.54 \\t0.68\\t 7.22 0.27 0.39\\n------------------------------------------------------------------------------------------------------------------------------------------------\\n\\t\\t 1 \\u2714 \\t\\t\\t\\t\\t\\t\\t\\t 3.94 \\t0.62 \\t0.73 \\t6.34 \\t0.40 \\t0.53\\n\\t\\t 2 \\t \\u2714\\t\\t\\t\\t\\t\\t 3.60 \\t0.65 \\t0.75 \\t6.27 \\t0.43 \\t0.54\\nOurs\\t 3 \\u2714 \\t \\u2714\\t\\t\\t\\t\\t\\t 3.65 \\t0.65 \\t0.75 \\t6.07 \\t0.42 \\t0.57\\n\\t\\t 4 \\u2714 \\t \\u2714\\t\\t\\u2714 \\t\\t\\t\\t 3.56 \\t0.65 \\t0.75\\t 5.89 \\t0.46 \\t0.60\\n\\t\\t 5 \\u2714 \\t \\u2714\\t\\t\\u2714 \\t\\t \\u2714 \\t\\t 3.23 \\t0.70 \\t0.78 \\t5.04 \\t0.57 \\t0.70\\n------------------------------------------------------------------------------------------------------------------------------------------------\"}", "{\"comment\": \"Thanks so much for the prompt reply.\", \"one_more_question_regarding_eq6\": \"how do you handle the text attention weights, alpha? The supplementary material suggests that W_pm is one linear layer of shape 592 x 1. The text attention weights, however, is produced from various lengths of instructions and can have variable lengths. It seems a bit odd to simply pad zeros at the end of it to extend it to the fixed length 80. 
How did you handle it?\", \"title\": \"What about the variable length text attention weights?\"}", "{\"comment\": \"I think the ablation with or without beam search is very valuable, please add it to paper or at least supplementary?\\nPlus, ablation with (only visual grounding) (only textual grounding) (only cogrounding without progress monitor) would be very illuminating as well.\", \"title\": \"Consider adding ablations to paper / supplementary?\"}", "{\"title\": \"We use the model with panoramic action space as baseline\", \"comment\": \"Hi,\\n\\nThank you for your interest in our work and kind words!\\n\\nOur proposed method built upon the established work, and we use the panoramic action space proposed in the Speaker-Follower as the baseline (hence making the comparison to our improvements fair). We focus on highlighting the novel contributions/ideas that improve on this baseline. Removing the panoramic action space from the proposed method requires non-trivial changes to be made, including visual grounding module, action selection module, and finally the progress monitor itself. We thus encourage readers to refer to the Speaker-Follower paper for performance improvement regarding panoramic action space.\"}", "{\"title\": \"Thank you for bringing this to our attention\", \"comment\": \"Hi,\\n\\nThank you for your interest in our paper and trying to reproduce it!\\n\\nYou are correct. The cross entropy loss in Eq 6 should be a Mean Squared Error loss (MSELoss). The equation should thus be changed to \\\\sum_{t=1}^{T} (y^{pm}_{t} - p^{pm}_{t})^2. The original cross entropy loss was a variant we experimented with predicting whether the agent is making progress or not (binary prediction). The current version with tanh() at the output of the progress monitor and trained with MSE loss gave us better performance. \\n\\nWe will correct this error in the revision.\"}", "{\"comment\": \"Hi interesting paper! I am trying to reproduce the result and is having problem with Eq 6.\\nSpecifically, the progress monitor module seems to output p_t^pm which seems to be a 1D value between -1~1 after tanh(). The target y_t^pm is also a 1D number that is between -inf ~ 1. \\nIn this case how do you use CrossEntropy Loss as suggested in the paper? \\nEq 6 suggested that the loss should incorporate - y_t^pm * log (p_t^pm). Well I am confused if this is still \\\"cross entropy loss\\\". What's more log() does not like negative values?\", \"title\": \"Confusions about Eq 6\"}", "{\"comment\": \"Hi, thanks for the good performance!\\n\\nI am wondering how much the panoramic action space helps in your model. Can you report the performance without the panoramic action space? Thanks!\\n\\nBest regards\", \"title\": \"Panoramic Action Space\"}", "{\"title\": \"Results without beam search\", \"comment\": \"Hi,\\n\\nThank you for raising an important discussion about metrics. \\n\\nThe VLN task was recently introduced less than 1 year ago with a more than 60% success rate gap between the best-known model and human performance. Each existing work has step by step helped us to advance and reduce this gap to 33%. \\n\\nOur proposed work, when combined with beam search, is able to further close the gap to 25% measured with success rate. We focused on this metric since that was the metric used by recent state of art. Yet, there is obviously still room for improvement. Ideally, the common goal from the research community is to develop an agent that achieves a high success rate with low trajectory length. 
We argue that given the complexity of the VLN task which requires the agent to simultaneously achieve visual grounding, textual reasoning, temporal memorization/reasoning, and intelligently select actions to navigate, the need to relax the task along multiple directions in order to make progress is important and essential. There is no current best model for both metrics (SR and SPL), and beam search typically differentiates the two regimes.\\n\\nEven from the robotics perspective, these are two important objectives that one might want to trade off (length/time and success rate) given that there is no one solution that is pareto-optimal, and beam search can be seen as an exploration mechanism which is not uncommon in robotics. Beam search, which thoroughly explores the environment, eases the burden for the agent in intelligently selecting actions given the progress made towards the goal so that the agent can solely focus on identifying the implicit target represented by navigational instruction. However, the best performing model with beam search is still more than 20% lower than the human performance. Further, note that our state of art success rate has only about 29% of the trajectory length of the Speaker-Follower model which also uses beam search.\\n\\nNonetheless, we agree that the newly introduced SPL metric is also important, though it emphasizes a different aspect of the navigation task. For future comparison, we thus submitted our proposed model without beam search to the test server. Our result achieves state of art SPL results compared to existing approaches (note that we exclude submissions after the ICLR deadline) and is shown in the table below (the leaderboard only allows one result to be shown from a team). For each metric, with or without beam search, our proposed method outperforms existing approaches by a large margin.\\n\\n--------------------------------------------------------------------------------------------------------------------------------------\\n\\t \\t \\t\\t\\t\\t\\t\\t Test-Unseen\\n \\t\\t\\t\\t\\t\\t\\tlength\\u2193 NE\\u2193 SR\\u2191 OSR\\u2191 \\tSPL\\u2191 \\n--------------------------------------------------------------------------------------------------------------------------------------\\n\\t\\t\\t\\t\\t\\t\\t\\t\\twithout beam search\\n--------------------------------------------------------------------------------------------------------------------------------------\\nSeq2Seq Baseline\\t\\t \\t8.13 7.85 0.20 0.27 0.18\\nLook Before You Leap\\t \\t 9.15\\t 7.53 0.25 0.32 0.23 \\nOurs\\t\\t \\t \\t\\t\\t18.04 5.67 0.48 0.59 0.35\\n--------------------------------------------------------------------------------------------------------------------------------------\\n\\t\\t\\t\\t\\t\\t\\t\\t\\twith beam search\\n--------------------------------------------------------------------------------------------------------------------------------------\\nSpeaker-Follower\\t\\t \\t 1257.38 4.87 0.53 0.96 \\t0.01 \\nOurs\\t\\t\\t \\t\\t\\t 373.09 4.48 0.61 0.97 \\t0.02 \\n--------------------------------------------------------------------------------------------------------------------------------------\"}", "{\"comment\": \"ICLR reviewer guidelines state that \\\"no paper will be considered prior work if it appeared on arxiv, or another online venue, less than 30 days prior to the ICLR deadline.\\\" The paper defining the SPL metric appeared on arXiv on 18 July. 
However, as an organizer of the VLN challenge and a co-author of the arXiv paper mentioned above, I would like to state for the benefit of reviewers that the SPL metric was not added to the public VLN leaderboard until September 8th (19 days before the ICLR deadline). In fairness to authors with work in progress, reviewers may wish to exclude this metric from the definition of prior work for ICLR 2019 since it was not implemented on the leaderboard 30 days prior to the deadline. Existing work on the dataset has been primarily evaluated in terms of 'Success Rate', as reported in this submission.\", \"title\": \"A note from the challenge organizers on the SPL metric\"}", "{\"comment\": \"This paper only reports the absolute Success Rate as the evaluation metric and hides the trajectory lengths. It is well known that the Success Rate can be generally improved by exhaustedly exploring the environment before committing to a decision. However, beam search is not appropriate for robotics, because longer trajectories have more costs (battery, wear, delays for the user, etc).\\n\\nTherefore, Success rate weighted by normalized inverse Path Length (SPL) trades-off Success Rate against Trajectory Length. SPL is defined in the paper On Evaluation of Embodied Navigation Agents (https://arxiv.org/abs/1807.06757) and introduced as one of the evaluation metrics for the VLN task.\\n\\nI am not sure why the authors didn't include the trajectory lengths in the paper. But from the VLN challenge leaderboard, the SPL score of the authors' submission is only 0.02 (out of 1.00), which is severely worse than the Seq2Seq baseline (0.18). The trajectory length is 373.09 meters. It seems like the authors are gaming the Success Rate with exhaustive search. Hence, I do not think it is proper to claim that the method has achieved new SOTA performance.\", \"title\": \"Concerns on the evaluation metric and the so-called SOTA performance\"}" ] }
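A note on the corrected Eq. 6 discussed in the thread above: since the authors confirmed a tanh output trained with a summed MSE objective, a minimal PyTorch-style sketch may help readers reproducing the progress monitor. The function names, tensor shapes, and the lambda_pm weighting below are our illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def progress_monitor_loss(p_pm, y_pm):
    # Corrected Eq. 6 from the thread: sum_t (y_t^pm - p_t^pm)^2,
    # where p_pm holds the tanh outputs of the progress monitor in
    # [-1, 1] and y_pm the normalized distance-to-goal targets,
    # both of shape (T,) for a trajectory of length T.
    return ((y_pm - p_pm) ** 2).sum()

def total_loss(action_logits, action_targets, p_pm, y_pm, lambda_pm=0.5):
    # Hypothetical combined objective: cross-entropy over actions plus
    # the progress-monitor term; lambda_pm is our placeholder weight.
    ce = F.cross_entropy(action_logits, action_targets)
    return ce + lambda_pm * progress_monitor_loss(p_pm, y_pm)
```

Whether the per-trajectory sum should instead be averaged across the batch was left open in the thread, so the sketch keeps the summed form.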
HJG0ojCcFm
Negotiating Team Formation Using Deep Reinforcement Learning
[ "Yoram Bachrach", "Richard Everett", "Edward Hughes", "Angeliki Lazaridou", "Joel Leibo", "Marc Lanctot", "Mike Johanson", "Wojtek Czarnecki", "Thore Graepel" ]
When autonomous agents interact in the same environment, they must often cooperate to achieve their goals. One way for agents to cooperate effectively is to form a team, make a binding agreement on a joint plan, and execute it. However, when agents are self-interested, the gains from team formation must be allocated appropriately to incentivize agreement. Various approaches for multi-agent negotiation have been proposed, but typically only work for particular negotiation protocols. More general methods usually require human input or domain-specific data, and so do not scale. To address this, we propose a framework for training agents to negotiate and form teams using deep reinforcement learning. Importantly, our method makes no assumptions about the specific negotiation protocol, and is instead completely experience driven. We evaluate our approach on both non-spatial and spatially extended team-formation negotiation environments, demonstrating that our agents beat hand-crafted bots and reach negotiation outcomes consistent with fair solutions predicted by cooperative game theory. Additionally, we investigate how the physical location of agents influences negotiation outcomes.
[ "Reinforcement Learning", "Negotiation", "Team Formation", "Cooperative Game Theory", "Shapley Value" ]
https://openreview.net/pdf?id=HJG0ojCcFm
https://openreview.net/forum?id=HJG0ojCcFm
ICLR.cc/2019/Conference
2019
{ "note_id": [ "ryevoN5NxE", "S1gEXHjmRX", "SylVbi5QA7", "B1xs4Xqm0X", "r1xtnTcuaX", "SyxHgL9d6Q", "BJePkI4PaQ", "rkg7yVVPpm", "rJgnVWg-TX", "H1llkY3J6m", "BkguDb8kpm", "HygI9hnqnm" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_review" ], "note_created": [ 1545016479245, 1542858012041, 1542855419805, 1542853427121, 1542135217474, 1542133228638, 1542043102569, 1542042586913, 1541632308386, 1541552343887, 1541525856412, 1541225614016 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper668/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper668/Authors" ], [ "ICLR.cc/2019/Conference/Paper668/Authors" ], [ "ICLR.cc/2019/Conference/Paper668/Authors" ], [ "ICLR.cc/2019/Conference/Paper668/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper668/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper668/Authors" ], [ "ICLR.cc/2019/Conference/Paper668/Authors" ], [ "ICLR.cc/2019/Conference/Paper668/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper668/Authors" ], [ "ICLR.cc/2019/Conference/Paper668/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper668/AnonReviewer3" ] ], "structured_content_str": [ "{\"metareview\": \"This paper was reviewed by three experts. Initially, the reviews were mixed with several concerns raised. After the author response, there continue to be concerns about need for significantly more experiments. If this were a journal, it is clear that recommendation would be \\\"major revision\\\". Since that option is not available and the paper clearly needs another round of reviews, we must unfortunately reject. We encourage the authors to incorporate reviewer feedback and submit a stronger manuscript at a future venue.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"meta-review\"}", "{\"title\": \"We have added experiments regarding the correspondence with the Shapley value. We also implemented a train/test partition with more boards, and a bot based on the Shapley value.\", \"comment\": [\"Thanks again for your feedback!\", \"Indeed, the heart of the paper is the Shapley value comparison. You felt that experiment 4 is incomplete, and that we should add new experiments in the vein of experiment 4 to help understand how and why the correlation with Shapley values occurs. 
We have added two experiments in that spirit:\", \"1) an experiment showing how the correlation with the Shapley value depends on the weight variance (or equivalently, the inequality in negotiation position strength); We find that when the weights have a lower variance (agents are more equal in their negotiation position strength), we get a stronger correspondence with the Shapley values: the new Appendix E contains the full result (with a weight-variance conditional figure equivalent to Figure 3).\", \"2) To further rule out concerns regarding the capacity of RL agents to compute the Shapley value, the new Appendix F has an experiment showing that even when RL agents observe the true Shapley values as a part of the state input, they still deviate from the Shapley value; the experiment indicates that the deviation from Shapley value occurs due to the multi-agent independent RL procedure (agents optimizing for a *policy* that maximizes their *personal gain* in the context of other learners / non-stationary environment).\", \"We now partition boards to a train-set and a test-set (with 50 boards instead of 20 evaluation boards we had before), showing that RL agents can generalize to previously unobserved boards. The results regarding the bot-comparison and the Shapley correspondence still hold (see revised figures and numbers in Section 4.2)\", \"We have added a Shapley-value based bot (similar to the weight-proportional bot, but using a target based on the Shapley value). RL agents are still competitive, even with this more sophisticated bot. We also added an experiment regarding training RL agents against a weight-proportional bot, and evaluating them against a Shapley-proportional bot.\", \"We improved the discussion of the motivation and novelty of the work: comparing how the behavior of RL agents relates to *cooperative* game theory (which studies how players form teams and share the achieved rewards).\", \"We expanded the discussion of the Shapley value, and why it measures the strength of an agent\\u2019s negotiation position, or the fair share of the reward it should receive. We now provide a list of the fairness axioms and related work on power indices (see Appendix A)\", \"We still have a couple of days to further revise the paper, so any suggestions you have after reading the revised paper are very much appreciated!\"]}", "{\"title\": \"We revised the paper with a train/test board partition and more boards, an evaluation against a Shapley bot, and a more detailed discussion of the advantages of RL agents versus hand crafted bots\", \"comment\": [\"Thanks again for your feedback! As you can see, we have revised the paper:\", \"We now partition boards to a train-set and a test-set, and make sure agents are not memorizing actions for specific boards, and can generalize to previously unobserved boards (i.e. boards not encountered during training). The results regarding the bot-comparison and the Shapley correspondence still hold (see revised figures and numbers in Section 4.2)\", \"We now consider a much larger set of train boards (150 boards) and evaluation boards (50 boards), rather than the 20 boards we had before.\", \"We have added a Shapley-value based bot (similar to the weight-proportional bot, but using a target based on the Shapley value). RL agents are still competitive, even with this more sophisticated bot (which is computationally intractable for games with a much larger number of agents). 
We also added an experiment regarding training RL agents against a weight-proportional bot, and evaluating them against a Shapley-proportional bot.\", \"We added a discussion regarding the necessity of an RL approach. In short, RL allows us to uncover good negotiation policies, handling diverse negotiation protocols and environments. If one only wants to approximate the negotiation power in an abstract cooperative game, it is sufficient to apply supervised learning (see section 4.4). However, such analysis ignores protocol-specific details, such as spatial locations, which do affect outcomes (see Figure 4, for example).\", \"We have added experiments regarding the impact of the weight variance (or equivalently, the inequality in negotiation position strength / Shapley values). We find that when the weights have a lower variance (agents are more equal in their negotiation position strength), we get a stronger correspondence with the Shapley values. The new Appendix E contains the full result (with a weight-variance conditional figure equivalent to Figure 4a).\"]}", "{\"title\": \"We revised the paper, with a train-test board partition (and more boards), a Shapley-based bot, an investigation of robustness to hyper-parameters, examination of out-of-training bots, and more detailed motivating examples and discussion of the Shapley value\", \"comment\": [\"Thanks again for your feedback! As you can see, we have revised the paper:\", \"We now partition boards to a train-set and a test-set, and make sure agents generalize to previously unobserved boards (i.e. boards not encountered during training). The results regarding the bot-comparison and the Shapley correspondence still hold (see revised figures and numbers in Section 4.2)\", \"We now consider a much larger set of train boards (150 boards) and evaluation boards (50 boards), rather than the 20 boards we had before.\", \"We include a discussion of experiments regarding hyper-parameter settings. The results are robust for changing the learning rate, hidden layer sizes and lambda (eligibility traces parameter) by 25% (and likely more, these are the settings we have evaluated).\", \"We have added a couple of motivating examples for applications of negotiation in cooperative games, and a more detailed discussion of the axioms behind the Shapley value (see revised Appendix A)\", \"We have added a Shapley-value based bot (similar to the weight-proportional bot, but using a target based on the Shapley value). RL agents are still competitive, even with this more sophisticated bot (which is computationally intractable for games with a much larger number of agents)\", \"We have added an experiment regarding out-of-training bots. We train RL agents against a weight-proportional bot, and evaluate them against a Shapley-proportional bot. While this does hinder their performance a bit, they remain competitive.\", \"We have added experiments regarding a board distribution with lower weight variance, yielding a stronger correspondence with the Shapley value (Appendix E). We also added experiments strengthening our experiment 4 (from section 4.4), further investigating the reasons behind the deviation from the Shapley value, with
Appendix F showing that even when RL agents observe the true Shapley values as a part of the state, they can deviate from the Shapley value.\"]}", "{\"title\": \"Good plan but requires execution to evaluate\", \"comment\": \"I appreciate the authors' thoughtful response to my comments.\\n\\nIf the authors were able to execute and report the promised new tests and experiments with positive results, I would be willing to revise my score. \\n\\nRegarding the need for 500,000 training steps. It's worth noting that this amount severely restricts the domain of application for the method. In what situations would there be the opportunity to train on that many real cases? This fact highlights the importance of checking whether the method performs well on out-of-sample situations and bots.\"}", "{\"title\": \"Thanks for the clarifying response.\", \"comment\": \"This addresses most of the main points from my review. It promises a new baseline and improvements to the presentation. Improved presentations of some parts are provided in the response.\\n\\nI still think the paper is missing a larger set of experiments in the vein of experiment 4 to help understand how and why the correlation with Shapley values occurs. The review by AnonReviewer1 shares similar concerns and mentions some potential experiments in the last paragraph.\\n\\nUnfortunately, this isn't quite enough for me to change my rating.\"}", "{\"title\": \"Thanks! We'll evaluate performance on held out boards (train/test board partition), evaluate against a Shapley bot baseline, and discuss the advantages/disadvantages of RL-agents vsersus alternatives (bots or human data).\", \"comment\": \"Thank you for the helpful review, especially as an emergency reviewer!\\n\\nAs you suggest, we will add an experiment where we perform a train / test partition of boards, and will evaluate agents on held-out boards to make sure they are not memorizing good actions for the specific training boards. \\n\\nGaining more insight regarding the learned agent policy is tricky, as the policy relates to a large state space. One aspect we can examine in more depth is whether agents tend to agree quickly (e.g. the number of steps until a team is formed). We will add an experiment looking at this in more depth. \\n\\nAs discussed in the response to other reviewers, we intend to add an experiment comparing the RL agents with a bot based directly on the Shapley value. Such a bot does not scale to many players as computing the Shapley value is an NP hard problem, but in games with 5 players which we use in our experiments it is possible to compute in reasonable time.\", \"regarding_the_necessity_of_an_rl_approach\": \"the key advantage using our RL-based approach is being able to handle diverse negotiation protocols and environments. We see two possible alternatives to RL: building a hand-crafted bot, designed for a specific negotiation protocol, or gathering data from humans who engage in negotiation and training a bot to mimic human participants.\\nBoth of the above alternatives are tailored to a specific negotiation protocol, and are very costly (either in gathering enough human data, or in designing and engineering the bot). Although very costly, these alternatives can achieve potentially higher quality negotiation policies. In our analysis we have noticed that for boards where some players have a very strong or very weak negotiation position, there is a more noticeable deviation from the Shapley value. 
\\nWe will add an experiment examining the impact of the weight variance (or similarly, the degree of inequality between agents\\u2019 negotiation power) on the correspondence of outcomes achieved by RL agents with the Shapley value. We will also clarify the discussion of alternatives to RL (hand-crafted bots and human daa), and their potential advantages and disadvantages.\"}", "{\"title\": \"Thanks! We'll add experiments regarding a Shapley bot baseline and weight variance; We'll better discuss the Shapley value and clarify our discussion.\", \"comment\": \"Thank you for the very thorough and helpful review, especially as an emergency reviewer!\\n\\nRegarding motivation, our main focus was indeed on comparing the outcomes reached by RL agents with the predictions from cooperative game theory. Cooperative game theory focuses on the negotiation position of players, abstracting away details regarding the specific protocol used to negotiate and share the joint reward. As you point out, when facing a specific protocol, agents seek to maximize their own reward by using an effective policy for that protocol. We show that one can use RL to find effective negotiation policies for any given protocol. \\nRegarding novelty, earlier work on multiagent RL has focused on non-cooperative game theory (and in particular on competition between agents or social dilemmas). The key novelty of this work is in comparing how the behaviour of RL agents relates to *cooperative* game theory (which studies how players form teams and share the achieved rewards). To our knowledge we are the first to do so, and we\\u2019ll clarify the presentation of our motivation. \\n\\nAs you suggest, we will dedicate more space to discussing the Shapley value as a solution concept. Indeed, the Shapley value range is [0,1], but we measure the *proportion* of the reward an agent achieves on average (which has the same range). The Shapley value is a \\u201cpower index\\u201d, designed to objectively measure the strength of an agent\\u2019s negotiating position. It can be viewed as the agent\\u2019s power to affect the outcome of the game, or the relative number of opportunities it has to form successful teams. As shown in the example in the appendix, this power is not always proportional to an agent\\u2019s weight; Each agent must infer from experience where their negotiation position lies in the team formation hierarchy, making this an interesting problem in multi-agent reinforcement learning. We\\u2019ll make the discussion in the main text longer, and put a more detailed presentation in the appendix. \\n\\nRegarding a Shapley Bot baseline, we will add a baseline bot using Shapley values rather than proportional weights, and compare with our current agents. This is a much stronger baseline; note that computing Shapley values is NP-hard, so only tractable because we have relatively few agents. \\n\\nAs suggested, we\\u2019ll analyze the impact of weight variance, using a conditional version of figure 3 (for high and low variance boards). \\n\\nRegarding Experiment 4, the experiment shows that given a direct supervision signal (boards labeled with the Shapley values), a small neural net can approximate the Shapley value well. Our RL agents have a more challenging task for two reasons: (1) They have to take into account not only their negotiating position but also protocol details. 
(2) Their RL supervision is weaker: they only know how successful a whole *sequence* of actions was, and not the \\u201ccorrect\\u201d action they should have taken at every timestep. Our conclusion from the experiment is that at least the basic supervised learning task can be accomplished with a small neural network i.e. the agent\\u2019s network has the capacity required to estimate their raw negotiating power, abstracting away protocol details. Clearly, there are many further potential reasons for the RL agents to deviate from Shapley (optimization error, incorrect credit assignment and learning dynamics / nonstationarity). Based on your comment we will better motivate the experiment, and briefly discuss the alternative reasons for deviation.\\n\\nHuman data is an alternative to using RL to train agents. Agents can be trained to mimic humans who negotiate under a protocol, but obtaining human data is extremely costly and does not scale.\\n\\nWe proposed the team patches environment to show that our approach generalizes to another negotiation protocol, of a spatial nature. Interacting in the real world requires being at the same physical location at the same time as your negotiation partners. People who negotiate must thus reason about both the high-level negotiation strategy (such as their negotiation position/strength), as well as low-level policies (such as where to go to meet the right partners). We wanted to demonstrate that our approach can handle such complexities. Moreover, just as in the real world, the details of the spatial environment can and do impact the negotiation outcomes in our experiments. We will clarify our discussion of this. \\n\\nIndeed, we hope this work would convince the community to further investigate RL through a cooperative game theory prism.\"}", "{\"title\": \"Late Review for Shapley Values Paper\", \"review\": \"Note: This is an emergency review. I managed not to look at existing comments/ratings for this paper before writing my review.\\n\\nSummary\\n---\\n\\nThis paper studies deep multi-agent RL in settings where all of the agents must cooperate to accomplish a task (e.g., search and rescue, multi-player video games). It uses simple cooperative weighted voting games 1) to study the efficacy of deep RL in theoretically hard environments and 2) to compare solutions found by deep RL to a fair solution concept known in the literature on cooperative game theory.\\n\\nIn a weighted voting game each agent is given a weight and the agents attempt to form teams. The first team whose total weights exceed a known threshold get the total reward, which is distributed amongst the team members. Given such a game, the __shapely value__ of an agent measures the importance of that agent. How much does it contribute to a team from this set of agents? How much payoff should it get? These have existed in the literature for over 60 years and appear to be widely known and used.\", \"all_of_this_is_agnostic_to_how_the_agents_communicate_to_form_teams\": \"i.e., the communication protocol or the actions available in the environment. The protocol matters because it can allow certain teams to form more or less easily than others, even though the same team would get the same reward regardless of protocol. This can make an agent more or less effective under different protocols. Here two protocols are considered - one where agents suggest proposed teams directly and another where they suggest teams by congregating on a 2d plane. 
Both protocols result in games whose Nash equilibria are computationally intractable.\", \"the_paper_shows_4_results\": \"1) It considers a hand-designed bot similar to models from the game theory literature. Relative to a group of RL agents, an additional RL bot will outputperform a hand-designed bot in terms of average reward it receives.\\n\\n2) The average reward of a bot is strongly correlated with that bots shapely value.\\n\\n3) In the negotiation by congregation environment, a bot's spatial position can affect its ability to negotiate.\\n\\n4) Shapely values can be predicted quite accurately from the weights and threshold that define a cooperative voting game, though these predictions have high variance.\", \"the_paper_concludes_that_deep_rl_is_effective_at_learning_agents_for_cooperative_games_in_multiple_ways\": \"1) Deep agents are better than a hand-designed agent.\\n\\n2) Deep agents easily extend across negotiation protocols (something hand-designed agents don't do).\\n\\n3) A popular result in cooperative game theory predicts how effective agents should be. Deep agents are just about that effective.\\n\\nStrengths\\n---\\n\\n* The paper does a pretty good job of reviewing relevant work from game theory.\\n\\n* Some of the organization is nice (e.g., the list of reasons classic game theory doesn't extend to practice; one section per experiment).\\n\\nWeaknesses mentioned in individual sections...\\n\\nQuality\\n---\\nOverall, things were well thought through, but I would have liked more out of the experiment 4 section and I think a few minor details might have been missed.\", \"details\": \"Section 4.5/Experiment 4: The Shapely value comparison is the most important part of the paper. This section is important because it tries to explain those results, but it seems like there's more work to be done here. I'm not sure capacity is eliminated as a concern, and there might be other concerns not listed like optimization error.\\n\\n* I'm not sure what conclusion to take from experiment 4. Shapely values can be computed from the cooperative games directly, independent of protocol. We're interested in __policies__ that get exactly the shapley values as their average reward. Policies depend on the protocol. Does being able to predict shapley values mean that a model with similar capacity can learn a policy that will have the desired shapley value? Was that the desired conclusion?\", \"other_comments\": \"* The current hand-designed baseline uses weights to form a probability distribution. There should be another baseline that uses Shapley values instead of weights.\\n\\n* It's not clear exactly what the spatial nature of the Team Patches environment adds. It is good to try another environment just to have an additional notion of generalization.\\n\\nClarity\\n---\\nOverall, the motivation could be clearer. Is the point to do work on cooperative games or to compare to Shapley values?\", \"presentation_details\": \"* The paper does not get to specific examples of agents acting in environments until about page 4. Providing a simple, brief example which leaves out some details at the beginning would go a long way toward aiding intuitions about the abstract concepts discussed. Here are some clarity issues I had that might have been helped with an example:\\n * What exactly is it about a task which requires agents to form teams? 
How necessary are those teams?\\n * What exactly is a negotiation protocol?\\n * What does it mean to distribute/share a reward across agents?\\n\\n* When talking about shapely values, fairness seems to be emphasized somewhat often, but no concrete intuition about what fairness means in this setting is provided.\\n\\n* Intro para 4: What does the human data measure? And thus how might it be useful?\\n\\n* Intro para 7: People in the ICLR community will be more familiar with this work. What is the difference between communication and team forming?\\n\\n* The section on Shapley values should provide more intuition about what they're thought about as measuring. (An agent's importance or what payoff it should expect, according to wikipedia.)\\n\\n* Instead of measuring correlation to Shapley values, the paper measures whether average reward approximates Shapley values. It seems like the two are on a different scale. Average reward is unbounded and Shapley values are in [0, 1]. How are they comparable?\\n\\n* The paper mentions how results vary over different types of boards (ones with higher and lower variance in the sampled weights). It does not show results to support this discussion. A conditional analysis of performance would be interesting and relevant, perhaps conditional versions of Fig. 3.\\n\\nOriginality\\n---\\nI do not know much about game theory and I'm only somewhat familiar with multi-agent deep RL, so I am not in a great position to judge novelty. Nonetheless, Given existing work in multi-agent RL, it is unsurprising that deep RL agents learn reasonable policies in these environments.\\n\\nAs far as I know, the comparison of average reward to shapely values has not been done before. \\n\\n\\nSignificance\\n---\\nMost work in multi-agent RL evaluates by 1) comparing to baselines or 2) measuring some environment/task-specific metric. The best thing about this work is that it evaluates by comparing actual performance to some external theory that suggests how well an agent should be able to do, falling into a 3rd category. It's not alone in this category (e.g., paper compare to theoretically optimal baselines if they can), but it is interesting to see another example of this kind of evaluation.\\n\\nThe community might possibly start to focus more on cooperative games because of this paper. A more interesting result would occur if others are inspired to implement more comparisons to how agents __should__ perform in theory.\\n\\n\\nJustification for Final Rating\\n---\\n\\nI am unsure about novelty. As described above, the paper is lacking in clarity and quality (esp. section 4.5), but I don't think these concerns would invalidate the main result. I think the contribution is significant because of the kind of evaluation, but I'm not sure it will ultimately have a large impact. 
Thus I think some of the concerns above should be addressed before publication, but I would not be very disappointed if it were published as is.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Our results are robust to learner hyper-parameters; We'll evaluate over previously unobserved boards.\", \"comment\": \"Indeed, our focus is on using multiagent reinforcement learning to train agents in negotiation, rather than using hand crafted bots tailored to a specific negotiation protocol or interaction rules.\\n\\nIndeed, there are many RL algorithms that can be used, each having multiple hyper-parameters (neural network architecture, learning rates, optimizer and loss, eligibility traces configuration etc.). Clearly, any application of deep RL requires setting / tuning such parameters. When comparing to the Bot baselines we found that the RL-agents beat the bots across many such settings (i.e. the success in negotiation is robust to the choice of learning algorithm and hyper-parameter settings); even a shallow function approximator (with a small hidden layer size) is sufficient to beat the baseline bot. Similarly, the correspondence with the Shapley value holds under many learner configurations. We will add a discussion of this to the paper, as well as an analysis showing such robustness. As discussed in the paper, we emphasize that the main advantage of our approach is achieving a reasonably strong negotiator without having to hand craft a bot. We believe that tuning hyper-parameters for an RL algorithm requires considerably less work than writing a full fledged bot for a negotiation protocol, which must take into account not only the negotiation position of the agents, but also nuances regarding the interaction protocol. \\n\\nThe number of training steps for the analysis was 500,000. The full details are in the appendix (page 14), but indeed this detail belongs in the main text - we will move it there. \\n\\nAs for potential applications of this work, consider the following motivating example: multiple providers (travel agencies / carriers) can allow a person to travel between various destinations, and a client is willing to pay a certain amount to get to a desired destination (while there is no direct flight, there are multiple routing options, using different carrier combinations). How should the carriers share the customer's payment? Similarly, consider a manufacturing scenario where multiple companies can provide subsets of components required to manufacture an end product, and where each company has only some of the components. How should they share the profit from selling the end product? Both scenarios can be captured as a cooperative game, so RL agents can be used to learn to negotiate in such domains (for similar examples, see: Chalkiadakis, G., Elkind, E., & Wooldridge, M. (2011), Computational aspects of cooperative game theory). We will add a brief discussion of these motivating examples to the paper. \\n\\nAs you point out, training the RL-agents with some bots, then testing them with other bots is likely to hinder the performance of these agents. We will carry out such an experiment, and examine the impact of \\\"out of training set\\\" bots. However, we must note that even a negotiation bot designer faces a similar problem: when designing a bot to have a good performance against a bots of type A, its performance may be sub-optimal against a bot of type B. 
As in any game theoretic setting, the outcome an agent achieves depends not only on its own policy, but also on the policy used by others. \\n\\nAs you note, our experiments are based on settings with few agents (5 agents in a game), which makes it tractable to compute Shapley values. However, computing the Shapley value is an NP-hard problem, so approaches based on computing the Shapley value directly may not scale to games with many agents (while an RL approach does scale). As you propose, we will add an experiment comparing a bot which uses the Shapley value as the target for its share under the negotiation (the weight proportional bot is a rough approximation of a Shapley bot), as this is a strong negotiation baseline. The optimal policy for an agent to use depends on the policies used by other agents, so the Shapley bot may not be optimal against all agents (for instance, it may be too stubborn).\\n\\nOur analysis is based on 20 different board configurations, but as each board has 5 agents (and thus 5 weights), so there are 100 different negotiation positions each agent may have. Given a total payoff of 10, the action space of the agent is any integral partition of 10 points to 5 agents (which is 14 choose 4 , or over 1,000 different proposal actions), resulting in a huge policy space, even for the very simple propose accept environment. This seems a reasonably large space to explore. However, we can certainly increase the number of sampled board configurations, which we'll do in the revised version. We wholeheartedly agree that is important to see how agents perform in negotiation situations they have not encountered. We will examine performance against bots and Shapley correspondence on held-out boards, and will include this in the revised version - thanks for pointing this out!\"}", "{\"title\": \"Interesting problem, more experiments would be nice\", \"review\": \"This is an emergency review, so apologies for the briefness.\\n\\nThe paper introduces an approach to learning negotiation strategies using reinforcement learning. The authors propose a new setup in which self-interested agents must cooperatively form teams to achieve a reward. They explore two ways of proposing agreements: one involving a random agent proposing an agreement symbolically, and another in which agents form teams by moving to the same location. Results show that RL-trained models outperform simple rule-based bots, and correlate with game-theoretic predictions. I think the paper is very well clearly presented, and tackles an interesting an important problem.\\n\\nOne issue I have is that as I understand it, the results are only reported for training games. Could the agents just be memorizing a good outcome for that specific environment, rather than actually learning to negotiate? Why not evaluate on held out games?\\n\\nThe experiments are pretty interesting, and I appreciated the last one showing that limitations are due to the difficulty of RL, rather than expressive power of the network. However, I think there are some other natural questions that could be explored, including: what kind of strategies are the models learning? Could we change the environment in such a way that the proposed approach is not sufficient? Is the choice of RL approach crucial, or does anything work? 
I think further experiments would strengthen the paper.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}", "{\"title\": \"Interesting exploration into RL for negotiation in coalition games\", \"review\": \"This paper develops a reinforcement learning approach for negotiating coalitions in cooperative game theory settings. The authors evaluate their approach on two games against optimal solutions given by the Shapley value.\\n\\nThe work builds upon a substantial and growing literature on reinforcement learning for multiagent competitive and cooperative games. The most novel component of the work is a focus on the process of negotiation within cooperative coalition games. The two game environments studied examine a \\\"propose-accept\\\" negotiation process and a spatial negotiation process.\\n\\nThe main contribution of the work is the introduction of a reinforcement learning approach for negotiation that can be used in cases where unlimited training simulations are available. This approach is a fairly straightforward application of RL to coalition games, but could be of interest to researchers studying negotiation or multiagent reinforcement learning, and the authors demonstrate the success of RL compared to a normative standard.\", \"my_primary_concerns_are\": [\"The authors advertise the work as requiring no assumptions about the specific negotiation protocol, but the learning algorithms used are different in the two cases studied, so the approach does require fine-tuning to particular cases.\", \"Maybe I missed it, but how many training games are required?\", \"In what real applications do we expect this learning algorithm to be useful?\", \"The experiments where the RL agents are matched against bots include training against those specific bot types. How does the trained algorithm perform when matched against agents using rules outside its training set?\", \"Since the Shapley value is easily computable in both cases studied. If the bots are all being trained together, why wouldn't the bots just use that to achieve the optimal solution?\", \"Why are only 20 game boards used, with the same boards used for training and testing? How do the algorithms perform on boards outside the training set?\", \"Overall, the paper is somewhat interesting and relatively technically sound, but the contribution seems marginal.\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}" ] }
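Since several reviews and responses in the thread above hinge on comparing learned payoffs to Shapley values, a self-contained Python sketch of the standard permutation definition for weighted voting games may be a useful reference. It is our own illustration, not code from the paper, and it scales only to the small (5-player) games discussed, since the general problem is NP-hard.

```python
from itertools import permutations
from math import factorial

def shapley_values(weights, threshold):
    # Exact Shapley values for a weighted voting game, by enumerating
    # all n! player orderings. A player's value is the fraction of
    # orderings in which it is pivotal, i.e. its weight pushes the
    # running coalition total past the threshold.
    n = len(weights)
    counts = [0] * n
    for order in permutations(range(n)):
        total = 0
        for player in order:
            if total + weights[player] >= threshold:
                counts[player] += 1  # pivotal player for this ordering
                break
            total += weights[player]
    return [c / factorial(n) for c in counts]

# Example: with threshold 4, the weight-3 player has no more power
# than the weight-2 players, so fair payoff need not track raw weight.
print(shapley_values([3, 2, 2], threshold=4))  # [1/3, 1/3, 1/3]
```

The example echoes the authors' point that an agent's negotiating power is not always proportional to its weight.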
HkxAisC9FQ
Improved robustness to adversarial examples using Lipschitz regularization of the loss
[ "Chris Finlay", "Adam M. Oberman", "Bilal Abbasi" ]
We augment adversarial training (AT) with worst case adversarial training (WCAT), which improves adversarial robustness by 11% over the current state-of-the-art result in the L2-norm on CIFAR-10. We interpret adversarial training as Total Variation Regularization, which is a fundamental tool in mathematical image processing, and WCAT as Lipschitz regularization, which appears in Image Inpainting. We obtain verifiable worst and average case robustness guarantees, based on the expected and maximum values of the norm of the gradient of the loss.
[ "Adversarial training", "adversarial examples", "deep neural networks", "regularization", "Lipschitz constant" ]
https://openreview.net/pdf?id=HkxAisC9FQ
https://openreview.net/forum?id=HkxAisC9FQ
ICLR.cc/2019/Conference
2019
{ "note_id": [ "rkx-LhumxE", "BJgh0eTL1N", "HkePeI0SyV", "HJxhfMpp0m", "rJejui5aCX", "BJgKHaxwAX", "Hye2u2xPCm", "BklZ42xPRm", "rkxOCoewAQ", "B1xzNWKERm", "BJg8q4ZuTm", "rkgx_ajThm", "H1e9KG-ch7", "H1xFcKa_hX", "HkxQ77mUiX" ], "note_type": [ "meta_review", "official_comment", "comment", "official_comment", "comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "comment", "official_review", "official_review", "official_review", "official_comment" ], "note_created": [ 1544944712541, 1544110292435, 1544050158900, 1543520788291, 1543510898532, 1543077184600, 1543076979619, 1543076905042, 1543076816179, 1542914346201, 1542096013651, 1541418343560, 1541177985865, 1541097872758, 1539875610546 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper667/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper667/Authors" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper667/Authors" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper667/Authors" ], [ "ICLR.cc/2019/Conference/Paper667/Authors" ], [ "ICLR.cc/2019/Conference/Paper667/Authors" ], [ "ICLR.cc/2019/Conference/Paper667/Authors" ], [ "ICLR.cc/2019/Conference/Paper667/Authors" ], [ "~Oleg_Trott1" ], [ "ICLR.cc/2019/Conference/Paper667/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper667/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper667/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper667/Authors" ] ], "structured_content_str": [ "{\"metareview\": \"This paper suggests augmenting adversarial training with a Lipschitz regularization of the loss, and suggests that this improves the adversarial robustness of deep neural networks. The idea of using such regularization seems novel. However, several reviewers were seriously concerned with the quality of the writing. In particular, the paper contains claims that not only are not needed but also are incorrect. Also, the Reviewer 2 in particular was also concerned with the presentation of prior work on Lipschitz regularization.\\n\\nSuch poor quality of the presentation makes it impossible to properly evaluate the actual paper contribution.\", \"confidence\": \"3: The area chair is somewhat confident\", \"recommendation\": \"Reject\", \"title\": \"The quality of the presentation makes it hard to properly assess the quality of the results\"}", "{\"title\": \"training details\", \"comment\": \"Hi,\\n\\nThanks for your interest. You're correct, a mixed derivative -- in both x (image) and theta (parameters) is computed. Our implementation was easily done in PyTorch, simply by running autograd twice - first in x to get the norm gradient, then in theta. In practice we found that networks trained with this Lipschitz penalty took no longer than four times the training time of an unregularized network. Which means that if you are doing many steps of PGD adversarial training (for example in Madry et al uses 7 steps), Lipschitz regularization can be faster. The comparison depends on how many PGD steps you take.\"}", "{\"comment\": \"This paper looks quite interesting to connect Lipschitz regularization with min-max adversarial training. If my understanding is correct, the training process will be performed on (1). Since the norm of gradient with respect to x is penalized, solving (1) requires the gradient of \\\\partial | \\\\nabla_x l(x) | / \\\\partial \\\\theta, where \\\\theta are network parameters. Is such quantity easy to compute? 
Compared to min-max adversarial training, is there a significant speed-up using Lipschitz regularization?\", \"title\": \"about training details\"}", "{\"title\": \"thanks for the pointer\", \"comment\": \"I see there is a discussion of how our results compare. Some interesting points are raised by the poster and by the authors of the other paper.\", \"we_stand_by_our_statement_in_our_paper\": \"We implemented the attacks correctly and our robustness results are stronger than those of the other paper, and than any other published results for L2 attacks.\"}", "{\"comment\": \"There is a thread discussing this paper, which reviewers may be interested in, at:\", \"https\": \"//openreview.net/forum?id=ByxGSsR9FQ&noteId=Bklz239KCX\", \"title\": \"pointer to discussion\"}", "{\"title\": \"Reply to AnonReviewer2\", \"comment\": \"Thank you for your comments. Following your suggestion, we have completely reworked our submission, with an eye towards clarity and the page limit. In fact we have reduced the page count to just over seven pages. We hope that you find the paper better organized and easier to read.\\n\\nRegarding comparison to other results, we have re-analyzed our experimental results and have now included a direct comparison with state-of-the-art results. We find that when attacks are measured in the 2-norm, our method is state-of-the-art, improving on the previous state-of-the-art by 11%. When measured in the max-norm, our results are comparable to the state-of-the-art (Madry et al (2017)); however, we use only one-step adversarial training, whereas in Madry et al seven-step adversarial training is used.\\n\\nWe have also now included a section explicitly comparing our methods with prior methods, which can be summarized as follows. Prior work has focused on controlling the estimate of the Lipschitz constant using the product of norms of weight matrices. We argue that for deep networks this estimate is inaccurate, since its error grows exponentially in the number of layers. In our work we propose an alternative method for estimating the Lipschitz constant, which is an underestimate, and is estimated from the training data. This is a novel approach.\\n\\nPlease also see the general reply to all reviewers, above.\"}", "{\"title\": \"Reply to AnonReviewer3\", \"comment\": \"Thank you for your review. We have reworked our draft, and we hope that our new version addresses your points.\\n\\nWe would like to make a comment regarding Big-O notation. In the context used in the paper, which corresponds to https://en.wikipedia.org/wiki/Big_O_notation , Big-O notation is a rigorous statement, not an experimental one. \\n\\nIn many areas of scientific computing, engineering and statistics it is accepted that results need only be shown up to Big-O of some error term. For example, in polynomial interpolation it typically suffices to show that a particular method has error epsilon^(n+1) with an n-th degree polynomial. We have shown that adversarial training is equivalent to Total Variation minimization, up to order epsilon^2, where epsilon is the size of the adversarial perturbation. This means that when epsilon is small, as is typical (our epsilon is 0.01), the two methods are nearly equivalent. By equivalent, loosely speaking, we mean that replacing one term with another should lead to results which are very close. However, the Big-O notation has a rigorous meaning in the limit as epsilon goes to zero.
\\n\\nPlease also see our general reply to all reviewers, above.\"}", "{\"title\": \"Reply to AnonReviewer1\", \"comment\": \"We have posted a new version of our paper. We have re-written the paper to be as accessible as possible to someone not directly familiar with adversarial robustness. We have also pushed the heavier math to the appendix, and reinterpreted our Lipschitz regularization as worst-case adversarial training, which is an interpretation of a more familiar idea in the area.\\n\\nWe would greatly appreciate your comments on the new draft. We hope that you will find it easier to read.\\n\\nPlease also see our general reply to all reviewers above.\"}", "{\"title\": \"General reply to reviewers\", \"comment\": \"Thank you to the reviewers for your comments. We have posted a new draft of the manuscript. Following AnonReviewer2\\u2019s comments, we have completely reworked the draft. We have attempted to communicate our ideas and results as clearly as possible. We hope that the paper is accessible to persons outside the field of adversarial attacks as well, such as AnonReviewer1.\\n\\nWe would like to highlight the merits of our results. \\n\\n1. We achieve state-of-the-art results when measuring attacks in the 2-norm, improving by 11% over the previous state-of-the-art on CIFAR-10. Following offline conversations with some of our colleagues, we have analyzed our results in the max-norm as well. In the max-norm, we are on par with the current state-of-the-art (Madry et al (2017)). However, we did not focus our efforts on the max-norm, so we believe that with a bit more effort our results could possibly be improved, for example if we had used multi-step adversarial training as in Madry et al (we only used one-step adversarial training).\\n\\n2. Our implementation of Lipschitz regularization is novel and an improvement over existing results in terms of both accuracy (by orders of magnitude) and efficiency (we can leverage the gradients already used in adversarial training). Approaches that train networks by penalizing estimates of the Lipschitz constant have in the past used the product of weight matrix norms. However, this estimate of the Lipschitz constant grows exponentially in the number of layers. As such, the estimate is intractable for deep networks. In contrast, our method provides a more accurate estimate, which we demonstrate is closer in magnitude to the true value of the Lipschitz constant.\\n\\n3. We believe that our interpretation of adversarial training will be interesting and useful to the community. We show that adversarial training is a form of Total Variation regularization, which has been used successfully outside the deep learning community in image preprocessing to denoise images. We believe that this is a useful insight that could be leveraged further in the future. \\n\\n4. We obtain novel average case and worst case robustness bounds, which we verify empirically. These bounds allow us to predict adversarial robustness based on the statistics of quantities we can read off of the trained model.\"}", "{\"title\": \"The lemma was not needed, it was redundant, so it has been removed\", \"comment\": \"Hi,\\nThanks for your comment. We had in mind a result which would be true under additional assumptions. But, in fact, we don't use the lemma anywhere - it was just to illustrate upper bounds on the Lipschitz constant coming from the architecture. \\n\\nWe are removing it from the revision.\"}", "{\"comment\": \"I don't think Lemma 3.3 is correct.
As I understood it, the Lemma claims that to calculate a particular Lipschitz constant (2,inf) of a feed-forward network with entry-wise 1-Lipschitz nonlinearities, one can ignore the nonlinearities (and of course the biases).\\n\\nPlease consider this runnable Numpy code as a counterexample. The network is defined by f. The product of the matrices w1 and w0 is 0. However, the network generates distinct outputs f(x1) and f(x2):\\n\\n\\nimport numpy as np\\nfrom numpy.linalg import norm\\n\\ndef relu(x): return np.maximum(x, 0)\\n\\nw0 = np.array([[1., -1.], [-1., 1.]])\\nw1 = np.array([[1., 1.]])\\n\\ndef f(x): return w1.dot(relu(w0.dot(x)))\\n\\ndef lip_lower_bound(x1, x2): return norm(f(x1) - f(x2), np.inf) / norm(x1 - x2, 2)\\n\\nx1 = np.array([0., 0.])\\nx2 = np.array([-1., 1.])\\n\\nprint(w1.dot(w0)) # 0\\n\\nprint(lip_lower_bound(x1, x2)) # sqrt(2)", \"title\": \"Lemma 3.3 is incorrect\"}", "{\"title\": \"Possibly a good paper but not my area of expertise at all\", \"review\": \"The authors propose a novel method of training neural networks for robustness to adversarial attacks, based on 2-norm and Lipschitz regularization. Unfortunately, I'm not at all familiar with the literature on adversarial attacks, so it is difficult for me to judge the quality and significance of this work. The theoretical results look plausible and clearly stated. The experiments show improvements over existing methods, but I can't tell whether the right baselines were used. Overall the writing is reasonably clear but not very accessible for someone not already familiar with the area.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"1: The reviewer's evaluation is an educated guess\"}", "{\"title\": \"Interesting idea -- could be significantly strengthened\", \"review\": \"Summary: this paper uses a trick to replace the adversarial loss by one in which the adversarial perturbation appears in closed form.\", \"pros\": [\"interesting idea\", \"experiments are interesting\"], \"cons\": [\"formal results are either trivial or could be improved in their statements\", \"experimental guarantees only, up to what is hidden in the Big-Oh notations of Theorems 2.2, 2.3.\"], \"details\": [\"In Theorem 2.2, you need to remove the $O(epsilon^2)$, unless you point to the Taylor theorem that guarantees it for the identity you claim before (5). The closest one I see is that the O(||a||^2) is in fact $||a|| u(||a||)$ with $\\\\lim u(x) = 0$ as $x \\\\rightarrow 0$, which does not guarantee the $O$ notation for any $a$.\", \"In Theorem 2.2, how do you pass from the solution of (5) (which is indeed a vector) to the solution of the following equation, which, without constraint, gives a dim > 1 subspace in the general case?\", \"In all cases, you do not get Theorem 2.3 in its form, as the $O$ notation just guarantees you an upper bound. You need to rephrase.\", \"Figure ?? (twice) before Section 3\", \"Define the \\u201cgroup norm\\u201d notation appearing with the max in (8) (isn\\u2019t one redundant?)\", \"Section 3.4 is interesting.
Have you looked at generalising your observation in the last identity to more losses = f-divergences (hence, proper losses modulo assumptions)?\", \"Section 4: many Figure ??\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Interesting idea but poorly written\", \"review\": \"This paper explores augmenting the training loss with an additional gradient regularization term to improve the robustness of models against adversarial examples. The authors show that this training loss can be interpreted as a form of adversarial training against optimal L2 and L_infinity adversarial perturbations. This augmented training effectively reduces the Lipschitz constant of the network, leading to improved robustness against a wide variety of attack algorithms.\\n\\nWhile I believe the results are correct and possibly significant, the paper is poorly written (especially for a 10 page submission) and comparison with prior work on reducing the Lipschitz constant of the network is lacking. The authors also made little to no effort in writing to ensure the clarity of their paper. I would like to see a completely reworked draft before being open to the idea of recommending acceptance.\", \"pros\": [\"Theoretically intuitive method for improving the model's robustness.\", \"Evaluation against a wide variety of attacks.\", \"Empirically demonstrated improvement over traditional adversarial training.\"], \"cons\": [\"Lack of comparison to prior work. The authors are aware of numerous techniques for controlling the Lipschitz constant of the network for improved robustness, but did not compare to them at all.\", \"Poorly written. The paper contains multiple missing figure references, has a duplicated table (Tables 1 and 3), and the method is not explained well. I am confused as to how the 2-Lip loss is minimized. Also, the paper organization seems very chaotic and incoherent, e.g., the introduction section contains many technical details that would better belong in the related works or methods sections.\", \"--------------------------------------------\"], \"revision\": \"I thank the authors for incorporating my suggestions and reworking the draft, and I have updated my rating in response to the revision. While I believe the organization is much cleaner and easier to follow, there is still much room for improvement. In particular, the paper does not introduce concepts in a logical order for a non-expert to follow (e.g. Reviewer 1) and leaps into the paper's core idea too quickly. I am strongly in favor of exceeding the suggested page limit of 8 pages and using that space to address these concerns.\\n\\nA more pressing concern is the evaluation of prior work. The authors added a short section (Section 5.4) comparing their method to that of (Qian and Wegman, 2018). While this is certainly a reasonable comparison and the results seem promising, the evaluation lacks an important dimension -- varying the value of epsilon and observing the change in robustness. This is an important aspect of defenses against adversarial examples, as certain defenses may be less robust but insensitive to the adversary's strength. Showing the robustness across different adversary strengths gives a more informative view of the authors' proposed method in comparison to others.
The evaluation is also lacking in breadth, ignoring other similar defenses such as (Cisse et al., 2017) and (Gouk et al., 2018).\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Results measured in L-infinity\", \"comment\": \"Several people have suggested that it would be helpful if we also reported measurements of adversarial distance in the L-infinity norm (to complement L2). Following this suggestion, we have re-generated all the tables and figures in L-infinity.\\n\\nFor example, our main results are presented in Table 1, where we report median adversarial distance and percent error at a fixed adversarial distance. Here is Table 1 with distances in L-infinity. We report percent misclassified at adversarial distance 1/16 (rather than 0.1) to more easily compare with results reported elsewhere in the literature.\\n\\nDataset defense method median distance % err at eps=1/16\\n\\nCIFAR-10 J0 (baseline) 1.02e-2 99.92\\n J1 (AT, FGSM) 2.12e-2 96.06\\n J2 (AT, L2) 3.45e-2 84.76\\n J2-Lip & tanh 6.00e-2 51.64\\n\\nCIFAR-100 J0 (baseline) 5.83e-3 99.61\\n J1 (AT, FGSM) 1.07e-2 98.46\\n J2 (AT, L2) 1.06e-2 98.03\\n J2-Lip & tanh 1.60e-2 93.73\\n\\nWe hope this updated table is useful during the review process.\"}" ] }
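Illustrative aside for this record: the "training details" exchange above describes the mixed derivative as "running autograd twice - first in x to get the norm gradient, then in theta". A minimal PyTorch sketch of that double-backprop pattern; `model`, `criterion`, and `lam` are generic placeholders, and this is a standard gradient-norm penalty rather than a claim about the authors' exact implementation:

```python
import torch

def loss_with_gradient_penalty(model, criterion, x, y, lam=0.1):
    x = x.detach().requires_grad_(True)
    loss = criterion(model(x), y)
    # First autograd pass, in x. create_graph=True keeps the result
    # differentiable so the penalty's parameter gradients exist later.
    grad_x, = torch.autograd.grad(loss, x, create_graph=True)
    penalty = grad_x.flatten(start_dim=1).norm(dim=1).mean()
    return loss + lam * penalty

# total = loss_with_gradient_penalty(model, criterion, images, labels)
# total.backward()  # second autograd pass, now in the parameters theta
```

Relative to multi-step PGD adversarial training, this costs one extra backward pass per update, which is consistent with the "no longer than four times the training time" remark in the thread above.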
HJl0jiRqtX
EDDI: Efficient Dynamic Discovery of High-Value Information with Partial VAE
[ "Chao Ma", "Sebastian Tschiatschek", "Konstantina Palla", "Jose Miguel Hernandez Lobato", "Sebastian Nowozin", "Cheng Zhang" ]
Making decisions requires information relevant to the task at hand. Many real-life decision-making situations allow acquiring further relevant information at a specific cost. For example, in assessing the health status of a patient we may decide to take additional measurements such as diagnostic tests or imaging scans before making a final assessment. More relevant information allows for better decisions, but it may be costly to acquire all of this information. How can we trade off the desire to make good decisions with the option to acquire further information at a cost? To this end, we propose a principled framework, named EDDI (Efficient Dynamic Discovery of high-value Information), based on the theory of Bayesian experimental design. In EDDI we propose a novel partial variational autoencoder (Partial VAE), to efficiently handle missing data over varying subsets of known information. EDDI combines this Partial VAE with an acquisition function that maximizes expected information gain on a set of target variables. EDDI is efficient and demonstrates that dynamic discovery of high-value information is possible; we show cost reduction at the same decision quality and improved decision quality at the same cost in benchmarks and in two health-care applications. We believe there is great potential for realizing these gains in real-world decision support systems.
[ "active variable selection", "missing data", "amortized inference" ]
https://openreview.net/pdf?id=HJl0jiRqtX
https://openreview.net/forum?id=HJl0jiRqtX
ICLR.cc/2019/Conference
2019
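Illustrative aside for this record: in the author responses below, sampling from p(x_i | x_o) is described as a two-stage chain, first z from the amortized partial inference net q(z | x_o), then x_i from the generator p(x_i | z); conditioning on a candidate x_i simply adds it to the observed set before re-encoding. A minimal sketch under those assumptions, where `encoder`, `decoder`, and the mask convention are hypothetical placeholders rather than the paper's API:

```python
import torch

def sample_unobserved(encoder, decoder, x_o, mask, n_samples=10):
    # Amortized partial inference: the encoder consumes only the observed entries.
    mu, logvar = encoder(x_o, mask)   # parameters of q(z | x_o)
    std = (0.5 * logvar).exp()
    eps = torch.randn(n_samples, *mu.shape)
    z = mu + std * eps                # z ~ q(z | x_o), via the reparameterization trick
    return decoder(z)                 # parameters of p(x | z); read off the unobserved dims

# For p(x_phi | x_i, x_o): append the candidate x_i to (x_o, mask) and call again.
```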
{ "note_id": [ "rJe6OS1Lx4", "S1xhkTNxAX", "rk-Q2zgA7", "rJlodiMx07", "rke_8oGgCm", "H1lvv_feA7", "HJxgYLy-pX", "B1lI6Wnah7", "HJgQ25LF27" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1545102709068, 1542634724492, 1542626328517, 1542626162653, 1542626127598, 1542625374822, 1541629559582, 1541419454385, 1541135018807 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper666/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper666/Authors" ], [ "ICLR.cc/2019/Conference/Paper666/Authors" ], [ "ICLR.cc/2019/Conference/Paper666/Authors" ], [ "ICLR.cc/2019/Conference/Paper666/Authors" ], [ "ICLR.cc/2019/Conference/Paper666/Authors" ], [ "ICLR.cc/2019/Conference/Paper666/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper666/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper666/AnonReviewer2" ] ], "structured_content_str": [ "{\"metareview\": \"This paper develops an active variable selection framework that couples a partial variational autoencoder capable of handling missing data with an information acquisition criteria derived from Bayesian experimental design. The paper is generally well written and the formulation appears to be natural, with a compelling real world healthcare application. The topic is relatively under-explored in deep learning and the paper appears to attempt to set a valuable baseline. However, the AC cannot recommend acceptance based on the fact that reviewer 2 has brought up concerns about the competitiveness of the approach relative to alternative methods reported in the experimental section, and all reviewers have found various parts of the paper to have room for improvement with regards to technical clarity. As such the paper would benefit from a revision and a stronger resubmission.\", \"confidence\": \"3: The area chair is somewhat confident\", \"recommendation\": \"Reject\", \"title\": \"valuable baselines though lots of room for improvement\"}", "{\"title\": \"Summary of the new revision\", \"comment\": [\"Dear all,\", \"We have revised our paper utilizing all the feedback. We thus summarize main the changes in our revised paper below for your references.\", \"Based on the comment of Reviewer 3, we have moved the introduction and discussion of recurrent PNs to the Appendix B.5. We have also revised the presentation of the Partial VAE include Figure 1. We have also updated all the experimental results with non-recurrent PN in the main paper.\", \"We have added statistical tests for model comparison in Appendix B. 2. (Reviewer 3), all the improvement is statistically significant.\", \"We have added RMSE plots in Appendix B.2.5, as requested by Reviewer 1 and show that using RMSE, the conclusion is consistent as using predictive likelihood.\", \"We have discussed a new baseline that utilizing lasso in Appendix B.2.5, as suggested by Reviewer 2;\", \"We have added a discussion on the active feature acquisition (AFA) in Section 2.2 and clarified the different of AFA and our problem setting.\", \"We have added the introduction of VAEs and amortized inference in Section 3.2, as required by Reviewer 1;\", \"We have added brief descriptions of image inpainting task in Section 4.1 (Reviewer 1).\"]}", "{\"title\": \"Thank you for your reviews. We have reivised the paper accoridingly and added the new baseline.\", \"comment\": \"We thank you for your support of our work and valuable feedback. 
We have clarified all the concerns accordingly in the paper. The original review is indented using >.\\n\\n\\n> 1. Does p(x_i | z) include parameters? How are these parameters trained?\\n\\nYes, p(x_i|z) is the generator component of our partial VAE, and is trained by optimizing the partial variational bound on already observed data (with missing data); we have clarified this in our revised version of the paper.\\n\\n> 2. Does sampling from p(x_i | x_o) proceed by sampling z from q(z|x_o)\\n> and then sampling x_i from p(x_i | z)? How does one sample from p(x_\\\\phi | x_i,\\n> x_o) in Eq (7)?\\n\\nYes, to sample from p(x_i | x_o), we first sample z from q(z|x_o), and then sample x_i from p(x_i | z). In the case of p(x_\\\\phi | x_i, x_o) in Eq (7), as indicated after Eq (8), we first sample z from q(z|x_o, x_i), then sample from p(x_\\\\phi|z), since x_\\\\phi is also an element of the set of all possible variables, x.\\n\\n\\n> 3. In Eq (9), it uses q(z_i|x_o), q(z_i | x_i, x_o), q(z_i | x_i,\\n> x_o, x_\\\\phi) while in Eq (4) it only shows how to learn q(z|x_o). Does\\n> it need to learn multiple partial inference networks for all\\n> combinations of i and \\\\phi?\\n\\nNo, it does not. This is one of the main novelties of our approach: we call our partial VAE approach an \\\"amortized\\\" partial inference method since our partial VAE parameterization of q(z|x_o) is able to handle all possible lengths of features to be conditioned on. During training (due to missingness in data), the lengths of x_o^n (where we use $n$ to indicate the index of the data point in the training set here) are different from each other. This gives q(z|x_o) the ability to generalize to q(z_i|x_o), q(z_i | x_i, x_o), and q(z_i | x_i, x_o, x_\\\\phi) on test data at test time, without the need for training multiple networks.\\n\\n> 4. The comparison with similar algorithms seems to be weak in the\\n> experiment section. RAND is random feature selection, and SING is\\n> global feature selection by using the proposed method. These\\n> comparison methods cannot provide enough information on how well the\\n> proposed method performs. There are plenty of works in the area of\\n> \\u201cactive feature acquisition\\u201d and also many works in feature selection\\n> dating back to Lasso, which should be considered as comparison targets.\\n\\nThank you for your suggestion. \\n\\nWe have added a new baseline adapted from LASSO in Appendix B.2.6 on the UCI datasets, since LASSO requires fully observed data and only works in problems with one-dimensional outputs. As LASSO is linear and non-probabilistic, for a fairer comparison we use the set of selected features returned by the LASSO to construct a variable selection strategy and use the Partial VAE to evaluate predictive likelihoods. Please refer to our revised paper (Section 4.2) for details and results.\\n\\nWe would like to point out that our framework is different from traditional feature selection such as LASSO. Traditional feature selection methods are non-sequential and require a fully observed dataset for both training and testing, which is not the case in our problem setting. Additionally, their goal is to choose a global subset of features from fully observed data to obtain the best performance, instead of selecting the most informative feature given any partial observation. \\n\\nOur problem setting also differs from active feature acquisition (AFA) methods.
As discussed in our revised paper (Section 2.2), AFA mainly studies the construction of an *optimal training set* that would result in the best classifier (model), under a limited cost budget. On the contrary, our framework studies the problem: given a pretrained model, how to identify and acquire high-value information under uncertainty, with minimal costs. Hence, AFA cannot be directly applied and compared. Also, AFA requires fully observed variables at test time, while our framework does not require this assumption. Last but not least, the realizations of these frameworks rely on various heuristics and suffer from very limited scalability. To the best of our knowledge, DRAL is the only prior work that shares the same problem setting. We have only compared DRAL to our EDDI on a single UCI dataset since DRAL is not scalable.\\n\\n> 5. In the \\u201cpersonalized\\u201d implementation of EDDI on each data\\n> instance, is the model trained independently for each data point, or\\n> does it share some parameters across different data points? If so, what are the\\n> shared parameters?\\n\\nThe Partial VAE part of EDDI is trained on the training set. In the active variable selection experiments, all test data that are used to evaluate EDDI have never been seen by the model before. All model parameters are shared across different data points. In our paper, \\\"personalized\\\" simply means we evaluate Equation (9) on each data point individually.\\n\\n\\nWe hope that we have fully addressed your concerns in the current revised version of the paper. Please let us know if you have further questions.\"}", "{\"title\": \"Comment Part 2 On significance of experimental results\", \"comment\": \"> The results in Table 2 need to be clarified and further explained. 1)\\n> what are the error bars, considering multiple runs and datasets? \\n\\nWe have revised the paper accordingly. Regarding the error bars: in Table 2, for each run, we run all active learning strategies on each data point of each dataset. Then, we rank all strategies on an individual basis, which gives us $R * (\\\\sum_j N_j)$ different rankings, where N_j is the size of the test set in the j-th dataset, and R is the number of runs. Finally, we simply compute the mean and standard error statistics based on these individual rankings. This procedure is explained in detail in Section 4.2 in our revised version. \\n\\n> 2) How can EDDI be so much better than SING when, among the individual\\n> AUICs in Tables 6-11, the only significant difference (accounting for error\\n> bars) is on the Boston data? \\n\\nThis is a good question. We have included discussions regarding this issue in Appendix B.2.2 and B.2.3. In particular, it seems that the avg. AUIC results in Tables 6-11 contradict the avg. ranking of AUIC results in Table 2 of the main text. However, this is not the case. In Tables 6-11, the AUIC numbers only provide simplified statistics of the *marginal performance* of each method. \\n\\nOn the contrary, the performance comparison problem is an example of the so-called *paired samples*, which refers to the situation in which different algorithms are evaluated on exactly the same set of test data points. This introduces correlations between the performances of different algorithms. The average AUIC ranking measure actually takes into account this *joint performance* of all methods, meaning that the ranking is a function of the performance of all methods.
With this additional information about correlations, this measure gives a more accurate evaluation of the actual performance of different methods. Notably, in the practical scenario of active variable selection, the latter setting is more sensible and fair.\\n\\nThe above conjecture is further validated by applying a nonparametric statistical test to the performance results, namely the Wilcoxon signed-rank significance test on the performance samples of different methods, as detailed in Appendix B.2.2 and B.2.3. The Wilcoxon test is a very powerful statistical test that includes the information of the joint distribution in *paired samples*. In our case, the term *paired samples* refers to the situation in which different algorithms are evaluated on exactly the same set of test data points, which introduces correlations between the performances of different algorithms. \\n\\n\\n> 3) according to Tables 6-11, PNP is only the\\n> best in 1 of 5 datasets, so how come it is the overall best by a large\\n> margin? ... \\n\\nWe believe this has been addressed in our previous reply on significance. \\n\\nAdditionally, we would like to point out that the purpose of Tables 6-11 is to provide supplementary intuitive support that our proposed methods, i.e., EDDI (+ PNP or PN), give the best result in 4 out of 6 datasets, compared with ZI-based methods that currently dominate missing data problems in generative models. Which one to choose between PN and PNP depends on the application needs.\\n\\n> In Table 4, how can PNP-EDDI be so much better than PNP-SING, when in\\n> Figure 6 error bars overlap almost everywhere?\\n\\nPlease refer to our previous reply regarding PNP-SING, the *joint performance* evaluation metric, and the Wilcoxon tests.\\n\\n> I enjoyed reading the paper... \\n\\nWe are grateful that you enjoyed reading the paper and pointed out the parts of the paper that need clarification. We hope that we have fully addressed your concerns in the current revised version of the paper. Please let us know if you have further questions.\"}
We have moved the recurrent PN to the Appendix as a possible extension and added a short discussion on whether one should use the recurrent version. \\n\\n> Note that the authors do not offer an explanation about the performance\\n> differences between PN and PNP.\\n\\nWe treat PN and PNP as two different settings of our framework. In our experiments, the PNP setting performs better than the PN setting for most of the evaluations. Additionally, we have analyzed the use of the PNP structure in Appendix C.1. In short, we have shown that the PNP parameterization actually combines ZI-VAE (which dominates the applications of VAEs on missing data) with PN-VAE. Therefore, we expect that PNP will enjoy the advantages of both PN and ZI and hence improve performance. This conjecture is confirmed in the experimental results that you have mentioned.\\n\\n\\n> In the inpainting regions section, the authors write about\\n> well-calibrated uncertainties without any context. What do they mean\\n> by calibration, well-calibrated and how can they support their claim\\n> about it?\\n\\nThanks for pointing this out. We changed \\u201cwell-calibrated uncertainties\\u201d to \\u201cbetter-estimated uncertainties\\u201d to be more technically precise in the revised version of the paper. We have also added more explanation about it. In this case, the term \\\"better-estimated uncertainty\\\" is reflected by the quality of the samples generated from p(x_U|x_O). Therefore, the quality of model uncertainty is quantitatively evaluated by the test ELBO (available in Table 1, and visualized in Figure 2) of inpainting on the partially observed MNIST dataset (averaged over the test set). This is calculated by $\\\\frac{1}{N} \\\\sum_{n=1}^{N} ELBO(n|x_O)$, where N is the size of the test set, and ELBO(n|x_O) corresponds to the conditional ELBO of the n-th data point (where the inference net q is conditioned on x_O). Please refer to the revised paper for details.\\n\\n> In Figure 3 it is not clear that PNP+Ours outperforms PNP+SING. For\\n> Boston housing it seems to be marginally better but the error bars (which\\n> I assume are standard deviations, not stated) make it difficult to\\n> ascertain whether the differences are significant. Although I\\n> understand the value of having \\\"personalized\\\" decisions, one wonders\\n> whether this personalization comes with any generalizable measurable\\n> gains given the results.\\n\\nThank you for the comment. In all figures in the revised version of the paper, error bars represent standard errors. We have also performed the significance test and reported the results in Appendix B.2.3 (explained in detail in our reply to the later comments). Our method is significantly better. \\n\\nAdditionally, if you also look at the enlarged subplot included in Fig 3 and Figure 9 in Appendix B.2.4, it is generally evident that the PNP+Ours curve is below the PNP+SING curve. The first variable selection step should be ignored when conducting such a comparison since, in theory, both methods should select exactly the same variable.\\n\\nHere we would also like to emphasize that the proposed SING-ordering method is already a very strong alternative setting of our proposed method. First, it makes use of the same Partial VAE information as our personalized method. Secondly, SING-ordering assumes that the whole test set is available *at the same time*: the objective of SING is to find the average information reward for the *whole test set* at each step, which is very unrealistic in practice.
This gave SING unfair advantages over EDDI.\"}", "{\"title\": \"Revision uploaded\", \"comment\": \"We thank reviewer 1 for appreciating the application and the positive results. We have replied to the concerns to clarify possible misunderstandings and updated the paper accordingly. The original review is indented using >.\\n\\n> Review: The paper presents an algorithm EDDI that uses a partial VAE and does active \\n> feature selection. The authors show quite a few experiments that seem to indicate the \\n> approach gives positive results. However, since this is not my main area of expertise I do \\n> not know if these tasks are standard evaluations in this area.\\n\\n> For instance, in Sections 4.3 and 4.4, why don't the authors plot accuracy as a function of \\n> the number of steps/variables observed? That would seem much more useful than log \\n> likelihood.\\n\\nApart from the existing results, we have reported test RMSE, as suggested, in Appendix B.2.5 for all UCI experiments in the revised version of the paper. Accuracy in terms of RMSE is consistent with the reported results using predictive log likelihood. Additionally, we would like to clarify that log likelihood is the common standard when evaluating the performance of generative models [1,2]. Compared with accuracy metrics such as RMSE, log likelihood also accounts for model uncertainty (of the posterior on the latent variable z), which is crucial in the practical application of active variable learning.", \"reference\": \"[1] Kingma, Diederik P., and Max Welling. \\\"Auto-encoding variational Bayes.\\\" arXiv preprint arXiv:1312.6114 (2013).\\n[2] Gregor, Karol, et al. \\\"Draw: A recurrent neural network for image generation.\\\" arXiv preprint arXiv:1502.04623 (2015).\\n[3] Kingma, Diederik P., and Prafulla Dhariwal. \\\"Glow: Generative flow with invertible 1x1 convolutions.\\\" arXiv preprint arXiv:1807.03039 (2018).\\n\\n> In general, I found the methodology in the paper to be difficult to understand and not enough background was given.\\n> I think the paper would be clearer if it was more self-contained.\\n\\n> For instance, I found much of Section 3 to not have enough background. The authors use \\n> lots of terminology around VAEs but don't give enough rigorous background so the paper \\n> doesn't feel self-contained. \\n\\n> The same is true regarding \\\"amortized inference\\\" which I also feel isn't rigorously defined \\n> anywhere but often discussed. \\n\\nThanks for your comment. We have revised the paper and added a paragraph \\u201cVAE and amortized inference\\u201d in Section 3.2, which is a brief, self-contained introduction to VAEs and amortized inference.\\n\\n> The task for Section 4.1 (image inpainting) is not quite defined.\\n\\nWe have added a short description in 4.1 to make the task clearer and well-defined.\"}", "{\"title\": \"Interesting but difficult to read\", \"review\": \"----I acknowledge that the authors have made improvements to the paper and have increased my score to 6\\n\\nThis is still definitely not my area of expertise and so I am leaving my confidence score low. \\n---\\n\\nThe paper presents an algorithm EDDI that uses a partial VAE and does active feature selection. The authors show quite a few experiments that seem to indicate the approach gives positive results.
However, since this is not my main area of expertise I do not know if these tasks are standard evaluations in this area.\\n\\nFor instance, in Sections 4.3 and 4.4, why don't the authors plot accuracy as a function of the number of steps/variables observed? That would seem much more useful than log likelihood.\\n\\nIn general, I found the methodology in the paper to be difficult to understand and not enough background was given.\\nI think the paper would be clearer if it was more self-contained.\\n\\n-For instance, I found much of Section 3 to not have enough background. The authors use lots of terminology around VAEs but don't give enough rigorous background so the paper doesn't feel self-contained. \\n\\n-The same is true regarding \\\"amortized inference\\\" which I also feel isn't rigorously defined anywhere but often discussed. \\n\\n-The task for Section 4.1 (image inpainting) is not quite defined.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}", "{\"title\": \"EDDI: EFFICIENT DYNAMIC DISCOVERY OF HIGH-VALUE INFORMATION WITH PARTIAL VAE\", \"review\": \"The authors present an information discovery approach based on (partial) variational autoencoders and an information-theoretic acquisition function that seeks to maximize the expected information gain over a set of unobserved variables. Results are presented on image inpainting, UCI datasets and health data, namely ICU and NHANES.\\n\\nIt is not clear why multiple recurrent steps improve performance. This is not conceptually justified, and empirically (see Figure 8) it is also unclear whether PNP5 significantly outperforms PNP1. Further, results seem to support that PNP is always better than PN, so why introduce the methodology around PN or even present it at all? Note that the authors do not offer an explanation about the performance differences between PN and PNP.\\n\\nIn the inpainting regions section, the authors write about well-calibrated uncertainties without any context. What do they mean by calibration, well-calibrated and how can they support their claim about it?\\n\\nIn Figure 3 it is not clear that PNP+Ours outperforms PNP+SING. For Boston housing it seems to be marginally better but the error bars (which I assume are standard deviations, not stated) make it difficult to ascertain whether the differences are significant. Although I understand the value of having \\\"personalized\\\" decisions, one wonders whether this personalization comes with any generalizable measurable gains given the results.\\n\\nThe results in Table 2 need to be clarified and further explained. 1) what are the error bars, considering multiple runs and datasets? 2) How can EDDI be so much better than SING when, among the individual AUICs in Tables 6-11, the only significant difference (accounting for error bars) is on the Boston data? 3) according to Tables 6-11, PNP is only the best in 1 of 5 datasets, so how come it is the overall best by a large margin? This being said, the results in Table 2 are at best misleading.\\n\\nIn Table 4, how can PNP-EDDI be so much better than PNP-SING, when in Figure 6 error bars overlap almost everywhere?\\n\\nI enjoyed reading the paper, the motivation is clear and the problem is important.
The approach is modestly novel compared to existing approaches and in general well explained, despite the fact that the need for multiple recurrent steps is not well justified and the differences between PN and PNP, their advantages/disadvantages and when to use each, are not described or explored in the experiments.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"A nice application model but some unclear points\", \"review\": \"The paper proposes Partial VAE to handle missing data and a variable-wise active learning method. The model combines Partial VAE with the acquisition function to design an intelligent information acquisition system. The paper nicely combines the missing value problem with an active learning strategy in an acquisition pipeline and demonstrates its effectiveness on several datasets.\\n\\nI have the following comments/questions:\\n\\n1. Does p(x_i | z) include parameters? How are these parameters trained?\\n\\n2. Does sampling from p(x_i | x_o) proceed by sampling z from q(z|x_o) and then sampling x_i from p(x_i | z)? How does one sample from p(x_\\\\phi | x_i, x_o) in Eq (7)?\\n\\n3. In Eq (9), it uses q(z_i|x_o), q(z_i | x_i, x_o), q(z_i | x_i, x_o, x_\\\\phi) while in Eq (4) it only shows how to learn q(z|x_o). Does it need to learn multiple partial inference networks for all combinations of i and \\\\phi?\\n\\n4. The comparison with similar algorithms seems to be weak in the experiment section. RAND is random feature selection, and SING is global feature selection by using the proposed method. These comparison methods cannot provide enough information on how well the proposed method performs. There are plenty of works in the area of \\u201cactive feature acquisition\\u201d and also many works in feature selection dating back to Lasso, which should be considered as comparison targets.\\n\\n5. In the \\u201cpersonalized\\u201d implementation of EDDI on each data instance, is the model trained independently for each data point, or does it share some parameters across different data points? If so, what are the shared parameters?\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
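Illustrative aside for this record: the author responses above argue that per-test-point performances of different strategies are paired samples, and apply a Wilcoxon signed-rank test to them. A minimal sketch of that paired test; the AUIC arrays below are fabricated stand-ins, not the paper's numbers:

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
# Placeholder per-test-point AUIC scores for two strategies on the SAME test points.
auic_eddi = rng.normal(loc=0.90, scale=0.10, size=200)
auic_sing = auic_eddi - rng.normal(loc=0.02, scale=0.05, size=200)

stat, p_value = wilcoxon(auic_eddi, auic_sing)  # paired, nonparametric
print(p_value)  # a small p-value indicates a significant per-point difference
```

Because the test operates on paired per-point differences, it exploits the correlation between methods evaluated on the same points, which is the "joint performance" argument made in the responses above.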
H1g6osRcFQ
Policy Transfer with Strategy Optimization
[ "Wenhao Yu", "C. Karen Liu", "Greg Turk" ]
Computer simulation provides an automatic and safe way for training robotic control policies to achieve complex tasks such as locomotion. However, a policy trained in simulation usually does not transfer directly to the real hardware due to the differences between the two environments. Transfer learning using domain randomization is a promising approach, but it usually assumes that the target environment is close to the distribution of the training environments, thus relying heavily on accurate system identification. In this paper, we present a different approach that leverages domain randomization for transferring control policies to unknown environments. The key idea is that, instead of learning a single policy in the simulation, we simultaneously learn a family of policies that exhibit different behaviors. When tested in the target environment, we directly search for the best policy in the family based on the task performance, without the need to identify the dynamic parameters. We evaluate our method on five simulated robotic control problems with different discrepancies in the training and testing environment and demonstrate that our method can overcome larger modeling errors compared to training a robust policy or an adaptive policy.
[ "transfer learning", "reinforcement learning", "modeling error", "strategy optimization" ]
https://openreview.net/pdf?id=H1g6osRcFQ
https://openreview.net/forum?id=H1g6osRcFQ
ICLR.cc/2019/Conference
2019
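Illustrative aside for this record: the abstract above describes directly searching the learned policy family for the best dynamics parameters mu in the target environment, and the reviews and responses below discuss the CMA-ES details. A minimal sketch of that outer loop using the `cma` package; `rollout_return` (average return of the mu-conditioned policy in the target environment) and all hyperparameters are hypothetical placeholders:

```python
import cma
import numpy as np

def optimize_mu(rollout_return, dim, iters=20, popsize=8):
    x0 = np.full(dim, 0.5)  # start at the center of a unit mu domain
    es = cma.CMAEvolutionStrategy(x0, 0.25, {'popsize': popsize, 'bounds': [0, 1]})
    for _ in range(iters):
        candidates = es.ask()
        # CMA-ES minimizes, so negate the episodic return of pi(a | s, mu).
        es.tell(candidates, [-rollout_return(mu) for mu in candidates])
    return es.result.xbest
```

Because this search only needs episodic returns, it also works with sparse rewards, one of the points the authors make in their responses below.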
{ "note_id": [ "BJlIqWgme4", "BJeoB_Dn07", "BkxgDQGjC7", "SJxgQshk0m", "SyeODTvoTm", "SyevB3PopX", "SkgOw4vjam", "SyefcQPoTQ", "HJlhOgWZp7", "rkel69Xq3X", "Hkg6Xry5h7" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544909198196, 1543432258822, 1543344983786, 1542601496099, 1542319456192, 1542319167064, 1542317151709, 1542316938216, 1541636212359, 1541188280311, 1541170469180 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper665/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper665/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper665/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper665/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper665/Authors" ], [ "ICLR.cc/2019/Conference/Paper665/Authors" ], [ "ICLR.cc/2019/Conference/Paper665/Authors" ], [ "ICLR.cc/2019/Conference/Paper665/Authors" ], [ "ICLR.cc/2019/Conference/Paper665/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper665/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper665/AnonReviewer1" ] ], "structured_content_str": [ "{\"metareview\": \"The paper presents quite a simple idea to transfer a policy between domains by conditioning\\nthe orginal learned policy on the physical parameter used in dynamics randomization. CMA-ES then\\nfinds the best parameters in the target domain. Importantly, it is shown to work well, \\nfor examples where the dynamics randomization parameters do not span the parameters that are\\nactually changed, i.e., as is likely common in reality-gap problems.\\n\\nA weakness is the size of the contribution beyond UPOSI (Yu et al. 2017), the closest work.\\nThe authors now explicitly benchmark against this, with (generally) positive results.\", \"ac\": \"It would be ideal to see that the method does truly help span the reality gap, by seeing working sim2real transfer.\\n\\nOverall, the reviewers and AC are in agreement that this is a good idea that is likely to have impact.\\nIts fundamental simplicity means that it can also readily be used as a benchmark in future sim2real work.\\nThe AC recommend it be considered for oral presentation based on its simplicity, the importance of\\nthe sim2real problem, and particularly if it can be demonstrated to work well on actual\\nsim2real transfer tasks (not yet shown in the current results).\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Simple idea for sim2real fine tuning; solid results; with additional actual sim2real results, could be oral\"}", "{\"title\": \"Satisfying revision. Upping my rating to 6 or 7.\", \"comment\": \"Thanks for your detailed reply and revision. I think this strengthens this paper and I'd happily kick my rating up a notch to a 6 or 7. I'm not sure if I can still change my official rating, but I'm assuming the meta-reviewer will review this.\\n\\nIn summary, I like the simplicity of this paper. This approach seems to perform on par with or better than more complicated meta-learning setups and is worthy of publication (it could at least serve as a good benchmark).\"}", "{\"title\": \"Revisions\", \"comment\": \"The revisions make the paper quite a bit stronger and more complete. 
I'm maintaining my rating of 7-Accept.\"}", "{\"title\": \"responses and revised opinions & scores, based on author's replies?\", \"comment\": \"The detailed reviews are appreciated, as are the author's detailed replies.\\nAs a next step, could the reviewers please advise as to whether the replies have influenced your evaluation \\nand your score for the paper? Thank you in advance!\", \"note\": \"to see the revision differences, select \\\"Show Revisions\\\" on the review page, and then select the check-boxes for the two versions you wish to compare.\\n\\n-- area chair\"}", "{\"title\": \"Summary of paper revisions\", \"comment\": [\"We have revised the paper based on the reviewers' comments. The main changes to the initial paper are the following:\", \"Added a comparison to Yu et al. 2017 in Experiments (Figures 2-5).\", \"Added a comparison to oracle agents, which are agents trained directly in the target environments (Figures 2-6).\", \"Re-ran SO-CMA for the single-target examples of half cheetah and quadruped to account for the initialization bias in the CMA-ES experiments (Figures 5, 6).\", \"Added a discussion section for a more detailed discussion of the results from different methods.\", \"Revised the Related Work and Conclusion sections to include the work of Tao Chen et al. 2018.\", \"Fixed typos and figure inconsistencies as pointed out by the reviewers.\"]}", "{\"title\": \"Response to Reviewer 2\", \"comment\": \"We thank the reviewer for the insightful comments! Below we discuss the questions and comments by the reviewer. We have also revised the text to address the comments.\\n\\n1. Rollout number during fine-tuning\\nDuring the fine-tuning stage, the policies interact with the target environment for 50,000 steps (corresponding to the results in Figures 2, 4(a), 5(a) and 6). In the case of fine-tuning Robust, Hist and UPOSI, we run PPO with 2,000 samples at each iteration, resulting in 50 iterations of PPO. \\n\\nIn terms of the length of each rollout or trajectory, it has a maximum of 1,000 steps, while the actual rollouts might be shorter due to early terminations. \\n\\nIn our experiments, the fine-tuning phase in general takes between 100-300 rollouts depending on the task. We have also revised the related text (Appendix B.4) to make this clearer.\\n\\n2. SO-CMA sometimes performs well without fine-tuning\\nThe reviewer\\u2019s concern about SO-CMA sometimes achieving good performance with only one iteration is well taken. Upon further investigation, we think this is partly because the initial sampling distribution for CMA is chosen to be a Gaussian with the center of the mu domain as its mean and a stdev of 0.25 (we use a mu domain of length 1 in each dimension). For the quadruped example, it turns out that the optimal solution of mu is close to the center of the mu domain, and thus even in the first iteration of CMA it might draw a sample that performs well. To validate this, we re-ran SO-CMA for the quadruped and the halfcheetah with the initial CMA distribution being a Gaussian with a randomly sampled mean and a stdev of 0.5. This results in a more reasonable performance curve (as shown in Figures 5(a) and 6(a)) where the initial guess of CMA is sub-optimal and through the iterative optimization process it finds better solutions.\\n\\n3. Performance of Robust in walker2d example\\nFor the walker2d example, fine-tuning a robust policy indeed achieved comparable performance to SO-CMA.
We hypothesize that this is because Robust was able to discover a robust bipedal running gait that works near-optimally for a large range of different dynamic parameters mu. However, when the optimal controller is more sensitive to mu, Robust policies may learn to use over-conservative strategies, leading to sub-optimal performance (e.g. in HalfCheetah) or failing to perform the task (e.g. in Hopper).\\n\\nWe do note that the fine-tuning process of the baseline methods relies on having a dense reward signal. In practice, one may only have access to a sparse reward signal in the target environment. Our method, using CMA, naturally handles sparse rewards, and thus the performance gap between our method and the baseline methods will likely grow if a sparse reward is used.\\n\\nWe have added a new section that discusses the performance of baseline methods (Section 6). We refer the reviewer to the revised text for more details.\\n\\n4. Oracle in the target environment\\nWe have trained oracle agents for our examples and added them to the results (as seen in Figures 2-5). We trained the oracles for the hopper, walker2d and halfcheetah environments for 3 random seeds with 1 million samples using PPO as in [1]. For the quadruped robot, we trained the oracle for 5 million samples as in [2]. Our method is able to achieve comparable or even better performance than the oracle agents.\\n\\n5. Comparison to Tao Chen et al.\\nWe thank the reviewer for pointing out the work by Tao Chen et al. [3], which we missed during our literature search. It is very interesting and highly relevant to ours. The most relevant part of the algorithm by Tao Chen et al. is the HCP-I policy, where a latent variable representing the variations is trained along with the neural net weights using reinforcement learning. During the transfer stage, HCP-I is fine-tuned in the target environment with another RL process. \\n\\nOur method differs from HCP-I in two aspects. First, our policy takes the dynamic parameters as input, while HCP-I learns a latent representation of them. Second, during the transfer of the policy, we search in the low-dimensional mu space using CMA, instead of fine-tuning the entire neural network. Learning a latent representation of the variations in the dynamics can be more flexible, while searching in the mu space is more sample-efficient and allows sparse rewards when methods like CMA are used. It is an interesting future direction to see whether HCP-I can overcome large dynamics discrepancies like the ones in our examples and if using CMA for identifying the latent variables in HCP-I can result in a more sample-efficient transfer algorithm. \\n\\nWe have added the HCP-I related discussions in the related work and conclusion sections.\\n\\n[1] Schulman, John, et al. \\\"Proximal policy optimization algorithms.\\\"\\n[2] Tan, Jie, et al. \\u201cSim-to-Real: Learning Agile Locomotion For Quadruped Robots.\\u201d\\n[3] Chen, Tao, et al. \\\"Hardware Conditioned Policies for Multi-Robot Transfer Learning.\\\" NIPS, 2018.\"}", "{\"title\": \"Response to Reviewer 3\", \"comment\": \"We thank the reviewer for the valuable feedback!\\n \\nWe share the reviewer\\u2019s concern that requiring explicit randomization parameters as inputs to the policy can be limiting for some applications. It is an interesting and important future direction to investigate how we can lift this limitation. One possible way is to use the method proposed by Eysenbach et al.
[1], where a diverse set of skills is learned by maximizing how well a discriminative model can distinguish between different policies. Another possibility is to use the method in the work by Chen et al [2], as pointed out by Reviewer 2. They learned a latent representation of the environment variations by optimizing a latent input to the policy during the training.\\n\\n\\n[1] Eysenbach, Benjamin, et al. \\\"Diversity is All You Need: Learning Skills without a Reward Function.\\\" arXiv preprint arXiv:1802.06070 (2018).\\n[2] Chen, Tao, et al. \\\"Hardware Conditioned Policies for Multi-Robot Transfer Learning.\\\" NIPS, 2018.\"}", "{\"title\": \"Response to Reviewer 1\", \"comment\": \"We thank the reviewer for the thoughtful comments! We have revised the paper to address the reviewer\\u2019s concerns, as detailed below.\\n\\n1. Comparison to Yu et al. 2017\\nWe have added the comparison to UPOSI for the hopper, walker and halfcheetah examples (Figure 2, 3, 4 and 5). In general UPOSI transfers better than Hist, as expected. Our proposed method was able to notably outperform UPOSI in the hopper and walker example, while the results for halfcheetah example are comparable.\\n\\n2. Discussion about baselines performances.\\nWe have added a new section (Section 6) that discusses the performance of the baseline methods for each example. Please refer to the revised text for more details. The related text is copied here for easy access:\\n\\n\\u201cWe hypothesize that the large variance in the performance of the baseline methods is due to their sensitivity to the type of task being tested. For example, if there exists a robust controller that works for a large range of different dynamic parameters mu in the task, such as a bipedal running motion in the Walker2d example, training a Robust policy may achieve good performance in transfer. However, when the optimal controller is more sensitive to mu, Robust policies may learn to use overly-conservative strategies, leading to sub-optimal performance (e.g. in HalfCheetah) or fail to perform the task (e.g. in Hopper). On the other hand, if the target environment is not significantly different from the training environments, UPOSI may achieve good performance, as in HalfCheetah. However, as the reality gap becomes larger, the system identification model in UPOSI may fail to produce good estimates and result in non-optimal actions. Furthermore, Hist did not achieve successful transfer in any of the examples, possibly due to two reasons: 1) it shares similar limitation to UPOSI when the reality gap is large and 2) it is in general more difficult to train Hist due to the larger input space, so that with a limited sample budget it is challenging to fine-tune Hist effectively.\\n\\nWe also note that although in some examples certain baseline method may achieve successful transfer, the fine-tuning process of these methods relies on having a dense reward signal. In practice, one may only have access to a sparse reward signal in the target environment, e.g. distance traveled before falling to the ground. Our method, using an evolutionary algorithm (CMA), naturally handles sparse rewards and thus the performance gap between our method (SO-CMA) and the baseline methods will likely be large if a sparse reward is used.\\u201c\\n\\n3. Experimental setup.\\nWe ran each trial with 3 random seeds and report the mean and one standard deviation in the plots. We have modified the first paragraph of the experiments section to emphasize this.\\n\\n4. 
J in eq 1 undefined.\\nThanks for spotting this! It was indeed due to a typo in the latex file that dropped J in section 3. This has been fixed in the revision.\"}", "{\"title\": \"Simple technique with few assumptions for policy transfer. Questions regarding performance and novelty.\", \"review\": [\"This paper introduces a simple technique to transfer policies between domains by learning a policy that's parametrized by domain randomization parameters. During transfer CMA-ES is used to find the best parameters for the target domain.\", \"Questions/remarks:\", \"If I understand correctly, a rollout of a policy during transfer (i.e. an episode) contains 2000 samples. Hence, 50000 samples in the target environment corresponds to 25 episodes. Is this correct? Does fine-tuning essentially consists of performing 25 rollouts in the target domain?\", \"It seems that for some tasks, there is almost no finetuning happening whereas SO-CMA still outperforms domain randomization (Robust) significantly? How can this be explained? For example, the quadruped task (Fig 6a) has no improvement for the SO-CMA method, yet it is significantly better than the domain randomization result. It seems that during the first episodes of finetuning, domain randomization and SO-CMA should be nearly equivalent (since CMA-ES will be randomly picking parameters mu). A very similar situation can be seen in Fig 5a\", \"Following up on my previous question: fig 4a does show the expected behavior (domain randomization and SO-CMA starting around the same value). However, in this case your method does not outperform domain randomization. Any idea as to why this is the case?\", \"It's difficult to understand how good/bad the performance of the various methods are without an oracle for comparison (i.e. just run PPO in the target environment).\", \"It seems that the algorithm in this work is almost identical to Hardware Conditioned Policies for Multi-Robot (Tao Chen et al. NIPS 2018), specifically section 5.2 in that paper seems very similar. Please comment.\"], \"minor_remarks\": [\"fig 5.a y-axis starts at 500 instead of 0.\", \"The reward for halfcheetah seems low, but this might be due to the custom setup.\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Novel approach for adapting domain randomization policy for transfer\", \"review\": \"This paper presents a novel approach for adapting a policy learned with domain randomization to the target domain. The parameters for domain randomization are explicitly used as input to the network learning the policy. When run in the target domain, CMA-ES is used to search over these domain parameters to find the ones that lead to the policy with the best returns in the target domain.\\n\\nThis approach is a novel one in the space of domain randomization and sim2real work. The results show that it improves over learning robust policies and over one version of doing an adaptive policy (feedforward network with history input). This approach could\\n\\nThe paper is well written, clearly explained, has clear results, and also explains and evaluates alternate design choices in the appendix.\", \"pros\": [\"Demonstrated transfer across simulated environments\", \"Outperforms basic robust and adaptive alternatives\", \"Straightforward approach\"], \"cons\": [\"Requires explicit domain randomization parameters as input to network. 
This restricts it from applying to work where the simulator is learned rather than parameterized in this way.\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Interesting work with promising evaluation. Good evaluation.\", \"review\": \"The authors propose a policy transfer scheme which in the source domain simultaneously learns a family of policies parameterised by dynamics parameters and then employs an optimisation framework to select appropriate dynamics parameters based on samples from the target domain. The approach is evaluated on a number of simulated transfer tasks (either transferring from DART to MuJoCo or by introducing deliberate model inaccuracies).\\n\\nThis is interesting work in the context of system identification for policy transfer with an elaborate experimental evaluation. The policy learning part seems largely similar to that employed by Yu et al. 2017 (as acknowledged by the authors). This makes the principal contribution, in the eyes of this reviewer, the optimisation step conducted based on rollouts in the target domain. While the notion of optimising over the space of dynamics parameters is intuitive the question arises whether this optimisation step makes for a substantive contribution over the original work. This point is not really addressed in the experimental evaluation as benchmarking is performed against a robust and an adaptive policy but not explicitly against the (arguably) most closely related work in Yu et al. It could be argued, of course, that Yu et al. essentially use adaptive policy generation but they do explicitly learn dynamics parameters based on recent history of actions and observations. An explicit comparison therefore seems appropriate (or alternatively a discussion of why it is not required).\\n\\nAnother point which would, in my view, add significant value is explicit discussion of the baseline performances observed in the various experiments. For example, in the hopper experiment (Sec 5.2) the authors state that the baseline methods were not able to adapt to the new environment. Real value could be derived here if the authors could elaborate on why this is the case. The same applies in Sec 5.3-5.6. \\n\\n(I would add here, as an aside, that I thought the notion in Sec 5.6 of framing the learning of policies for handling deformable objects as a transfer task based on rigid objects to be a nice idea. And not one this reviewer has come across before - though this could merely be a reflection of limited familiarity with the literature).\\n\\nThe experimental evaluation seems thorough with the above caveat of a seemingly missing benchmark in Yu et al. I would also encourage the authors to add more detail in the experimental section in the main text specifically with regards to number of trials run to arrive at variances in the figures as well as what metric these shaded areas actually signify.\", \"a_minor_point\": \"the J in equ 1 seems (to me at least) undefined. I suspect that it signifies the expected cumulative reward and was meant to be introduced in Sec 3 where the J may have been dropped from the latex?\\n\\nIf the above points were addressed I think this would make a valuable and interesting contribution to the ICLR community. 
As it stands I believe it is marginally below the acceptance threshold.\\n\\n[ADDENDUM: given the author feedback and addition of the benchmark experiments requested I have updated my score.]\", \"pros\": [\"\\u2014\\u2014\\u2014\", \"interesting work\", \"accessible\", \"effective\", \"thorough evaluation (though potentially missing a key benchmark)\"], \"cons\": [\"\\u2014\\u2014\\u2014\", \"potentially missing a key benchmark (and therefore seems somewhat incremental)\", \"only limited insight offered by the authors in the discussion of the experimental results\", \"some more details needed with regards to the experimental setup\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
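The SO-CMA loop debated in the exchanges above is compact enough to sketch. Below is a minimal, hypothetical rendering of the CMA-ES search over the dynamics parameters mu, written against the open-source pycma interface; episode_return is a toy stand-in for rolling out the mu-conditioned policy in the target environment, not code from the paper.

    import cma  # pycma package: pip install cma

    MU_TRUE = [0.3, 0.7, 0.5]  # hypothetical "true" dynamics of the target domain

    def episode_return(mu):
        # Toy stand-in for running the mu-conditioned policy for one episode
        # in the target environment and returning the (possibly sparse) return.
        return -sum((m - t) ** 2 for m, t in zip(mu, MU_TRUE))

    # Start CMA-ES from the centre of the normalized mu domain with stdev 0.25,
    # mirroring the initialization described in the author responses above.
    es = cma.CMAEvolutionStrategy(3 * [0.5], 0.25,
                                  {"bounds": [0.0, 1.0], "maxfevals": 300})
    while not es.stop():
        candidates = es.ask()  # sample a population of mu vectors
        # CMA-ES minimizes, so the fitness is the negated episode return.
        es.tell(candidates, [-episode_return(mu) for mu in candidates])
    best_mu = es.result.xbest  # mu used to condition the policy at deployment

Because CMA-ES only consumes the scalar episode return, a loop of this shape also copes with the sparse rewards discussed in the responses, where PPO-style fine-tuning needs a dense signal.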
B1epooR5FX
Predicted Variables in Programming
[ "Victor Carbune", "Thierry Coppey", "Alexander Daryin", "Thomas Deselaers", "Nikhil Sarda", "Jay Yagnik" ]
We present Predicted Variables, an approach to making machine learning (ML) a first class citizen in programming languages. There is a growing divide in approaches to building systems: using human experts (e.g. programming) on the one hand, and using behavior learned from data (e.g. ML) on the other hand. PVars aim to make using ML in programming easier by hybridizing the two. We leverage the existing concept of variables and create a new type, a predicted variable. PVars are akin to native variables with one important distinction: PVars determine their value using ML when evaluated. We describe PVars and their interface, how they can be used in programming, and demonstrate the feasibility of our approach on three algorithmic problems: binary search, QuickSort, and caches. We show experimentally that PVars are able to improve over the commonly used heuristics and lead to a better performance than the original algorithms. As opposed to previous work applying ML to algorithmic problems, PVars have the advantage that they can be used within the existing frameworks and do not require the existing domain knowledge to be replaced. PVars allow for a seamless integration of ML into existing systems and algorithms. Our PVars implementation currently relies on standard Reinforcement Learning (RL) methods. To learn faster, PVars use the heuristic function, which they are replacing, as an initial function. We show that PVars quickly pick up the behavior of the initial function and then improve performance beyond that without ever performing substantially worse -- allowing for a safe deployment in critical applications.
[ "predicted variables", "machine learning", "programming", "computing systems", "reinforcement learning" ]
https://openreview.net/pdf?id=B1epooR5FX
https://openreview.net/forum?id=B1epooR5FX
ICLR.cc/2019/Conference
2019
{ "note_id": [ "ryxo7v6rg4", "BkguooveAX", "Byl2ksvgAm", "BJe9P9PeAQ", "SylpG0Y52Q", "Ske788m937", "H1xn7AL8h7" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1545094946544, 1542646688464, 1542646500043, 1542646369533, 1541213717143, 1541187146557, 1540939299983 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper664/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper664/Authors" ], [ "ICLR.cc/2019/Conference/Paper664/Authors" ], [ "ICLR.cc/2019/Conference/Paper664/Authors" ], [ "ICLR.cc/2019/Conference/Paper664/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper664/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper664/AnonReviewer3" ] ], "structured_content_str": [ "{\"metareview\": \"The paper proposes a framework at the intersection of programming and machine learning, where some variables in a program are replaced by PVars - variables whose values are learned using machine learning from data. The paper presents an API that is designed to support this scenario, as well as three case studies: binary search, quick sort, and caching - all implemented with PVars.\\n\\nThe reviewers and the AC agree that the paper presents and potentially valuable new idea, and shows concrete applications in the presented case studies. They provide example code in the paper, and present a detailed analysis of the obtained results.\\n\\nThe reviewers and AC also not several potential weaknesses - the AC will focus on a subset for the present discussion. The paper is unusual in that it presents a programming API rather than e.g., a thorough empirical comparison, a novel approach, or new theoretical insights. Paper at the intersection of systems and machine learning can make valuable contributions to the ICLR community, but need to provide a clear contributions which are supported in the paper by empirical or theoretical results. The research contributions of the present paper are vague, even after the revision phase. The main contribution claimed is the introduction of the API, and that such an API / system is feasible. This is an extremely weak claim. A stronger claim would be if e.g., the present approach would advance the state of the art beyond an existing such framework, e.g., probabilistic programming, either conceptually or empirically. I want to particularly highlight probabilistic programming here, as it is mentioned by the authors - this is a well developed research area, with existing approaches and widely used tools. The authors dismiss this approach in their related work section, saying that probabilistic programming is \\\"specialized on working with distributions\\\". Many would see the latter as a benefit, so the authors should clearly motivate how their approach improves over these existing methods, and how it would enable novel uses or otherwise provide benefits. 
At the current stage, the paper is not ready for publication.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"innovative idea, contributions insufficient\"}", "{\"title\": \"Addressed some of the questions through comments and updated submission.\", \"comment\": \"We thank the reviewer for the comments and questions raised about our proposed interface.\n\n[hyperparameters become new variables]\nWe agree that hyperparameters introduce an additional search space, but we consider navigating this space to sometimes be simpler than navigating the space of complex heuristic functions built to improve a specific problem, which is what a developer is left with when machine learning cannot be used at all through an interface such as PVars. \n\n[debugging programs with predicted variables]\nAs with debugging any complex ML model, predicted variables will likely add additional challenges to debugging. However, because of their natural integration into the programming language, debugging the logic around the predicted variable should not be affected, and inspecting the values coming from a predicted variable in a debugger will be as simple as inspecting a regular variable. \n\n[relation to probabilistic programming]\nProbabilistic programming is a line of work similar to ours but focused on a specific class of models. The interface introduced by the probabilistic programming line of work directly exposes the methods required for operating with that class of models, e.g. graphical models, whereas PVars leaves that as a solution detail.\nWe added related work from the probabilistic programming literature to our paper. \n\n[interesting questions that aren't discussed much in the current draft]\nWe have updated our draft to highlight our position on some of your questions. \nWe consider this work on predicted variables a first step into an interesting field of research and we hope to be able to address more of these questions in future work.\"}", "{\"title\": \"Clarified contributions in the paper\", \"comment\": \"We thank the reviewer for their insightful comments and the very relevant question about clarifying our contributions. We have tried to clarify and itemize our contributions (see page 2).\n\n[no operational cost given] \nThe main focus of the paper is not to improve specific algorithms but to demonstrate that such improvements are easily possible, and to illustrate the claim with simple, well-known example algorithms.\n\nWe did not provide an analysis of the computational overhead of our method because we see the three algorithmic problems as tasks to demonstrate that the interface that we provide is expressive and powerful enough to bring ML into normal software development. In many other applications where predicted variables can be applied, speed is not a relevant metric, e.g. user modelling, optimizing UI components, predicting user preference, systems optimization, or content recommendations. 
We acknowledge that our current implementation is probably slower than the original variant - but as we describe above, we don't consider actual runtime to be the relevant metric here.\nFurther - we strongly believe that specialized hardware such as GPUs or TPUs is continuously improving the runtime of ML models, which will eventually make our proposed implementation practical even for speed-sensitive applications (compare also Kraska et al., 2017).\n\n[\"commands of use\"] \nWe do agree with R2 that the main contribution of this paper is in the novel API that we propose. As we describe in the paper, the experiments are performed to demonstrate that such an API is actually feasible and to indicate how well the state of the art in machine learning supports such an API at this point. \nThe experiments performed serve as examples of how to apply predicted variables and demonstrate that they are a viable solution to enable software developers to add ML models into their regular development workflow at a low engineering cost. \nArguably, the current state of machine learning does not yet make \"ML as easy as if statements\", which is why we removed that claim from our paper.\"}", "{\"title\": \"Added reproducibility data and incorporated feedback in paper\", \"comment\": \"We thank the reviewer for relevant and insightful comments. We provide responses and, when applicable, pointers to the changes we\u2019ve made in the paper aiming to address some of the problems related to the technique we introduced.\n\n- computation overhead\nWe did not provide an analysis of the computational overhead of our method because we see the three algorithmic problems as tasks to demonstrate that the interface that we provide is expressive and powerful enough to bring ML into normal software development. In many other applications where predicted variables can be applied, speed is not a relevant metric, e.g. user modelling, optimizing UI components, predicting user preference, systems optimization, or content recommendations. We acknowledge that our current implementation is probably slower than the original variant - but as we describe above, we don't consider actual runtime to be the relevant metric here.\nFurther - we strongly believe that specialized hardware such as GPUs or TPUs is continuously improving the runtime of ML models, which will eventually make our proposed implementation practical even for speed-sensitive applications (compare also Kraska et al., 2017).\n\n- reproducibility\nWe acknowledge that the paper does not provide sufficient data related to reproducibility, and we present additional reproducibility experiments in the appendix. Similar to other RL work, there are some problems with reproducibility. However, for binary search we obtain positive results (negative cumulative regret) with a reproducibility of 85% (Quicksort: 94%).\n\n- applicability\nWe assume throughout our work that the developer -- algorithm and problem expert -- has domain-specific knowledge that is relevant for the problem being solved. Therefore our interface enables the developer to make use of their expert knowledge without requiring deep machine learning expertise. The developer decides which contextual signals are most important and what metric to optimize for - the API naturally translates these into observations and rewards for the RL methods applied.\n\n- initial function\nWe thank the reviewer for pointing out the lack of more detailed explanations. 
The initial function does not only serve for initialization; it plays two other important roles: \n(1) it generates safe experience trajectories from which the off-policy RL algorithm learns, and \n(2) it can be reused as a safety net, should the model performance degrade. \nWe have updated our draft to express this more clearly.\n\n- performance/episodes\nWe are not 100% sure what the reviewer means by the comment about \"performance\" - we try to respond to this comment as well as we can.\nAs we describe in the paper, we measure cumulative regret as our main performance metric. A negative cumulative regret indicates that the user benefits from using a predicted variable compared to the baseline. While initially the predicted variable might perform a bit worse than the baseline, the goal is to outperform the baseline as quickly as possible. Note also that the use of the initial function in our setup provides a certain safety net in the beginning, which helps the method never perform terribly badly.\n\n- citations, related work\nThank you for the reference; we have updated our draft to point out work related specifically to approximate computing, as well as to probabilistic programming.\"}", "{\"title\": \"Potentially interesting idea, not well explained and justified\", \"review\": \"This paper proposes using predicted variables (PVars) - variables that learn\ntheir values through reinforcement learning (using observed values and\nrewards provided explicitly by the programmer). PVars are meant to replace\nvariables that are computed using heuristics.\", \"pros\": \"* Interesting/intriguing idea\n* Applicability discussed through 3 different examples\", \"cons\": \"* Gaps in explanation\n* Exaggerated claims\n* Problems inherent to the proposed technique are not properly addressed, brushed off as if unimportant\n\nThe idea of PVars is potentially interesting and worth exploring; that\nbeing said, the paper in its current form is not ready for\npublication.\n\nSome criticism/suggestions for improvement:\n\nWhile the idea may be appealing and worth studying, the paper does not address several problems inherent to the technique, such as:\n\n- overheads (computational cost for inference, not only in\n prediction/inference time but also all resources necessary to run\n the RL algorithm; what is the memory footprint of running the RL?)\n\n- reproducibility\n\n- programming overhead: I personally do not buy that this technique -\n at least as presented in this paper - is as easy as \"if statements\"\n (as stated in the paper) or will help ML become mainstream in\n programming. I think the programmer needs to understand the\n underpinnings of the PVars to be able to meaningfully provide\n observations and rewards, in addition to the domain-specific\n knowledge. In fact, as the paper describes, there is a strong\n interplay between the problem setting/domain and how the rewards should be\n designed.\n\n- applicability: when and where such a technique makes sense\n\nThe interface for PVars is not entirely clear, in particular the\nmeanings of \"observations\" and \"rewards\" do not come naturally to\nprogrammers unless they have been exposed to an RL setting. Section 2 could\nprovide more details such that it would read as a tutorial on\nPVars. If regular programmers read that section, not sure they\nunderstand right away how to use PVars. 
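To make concrete the level of hand-holding I have in mind, here is my own guess at what a PVar call site for the binary search example might look like - a toy sketch with a stub in place of the learning machinery, not code from the paper:

    class PVar:
        # Toy stand-in for a predicted variable: it simply applies the
        # developer-supplied initial function. A real implementation would
        # consult an RL model fed with the observation and improve that
        # model from the rewards passed to feedback().
        def __init__(self, initial):
            self.initial = initial

        def value(self, observation):
            return self.initial(observation)  # placeholder: no learning here

        def feedback(self, reward):
            pass  # placeholder: the reward would update the underlying model

    pivot = PVar(initial=lambda obs: (obs[0] + obs[1]) // 2)  # the old heuristic

    def binary_search(arr, target):
        lo, hi = 0, len(arr) - 1
        while lo <= hi:
            mid = pivot.value(observation=(lo, hi))  # ML-chosen probe position
            if arr[mid] == target:
                pivot.feedback(reward=1.0)   # success
                return mid
            pivot.feedback(reward=-0.1)      # each additional probe costs reward
            if arr[mid] < target:
                lo = mid + 1
            else:
                hi = mid - 1
        return -1

Walking the reader through something of this shape - what goes into the observation, when feedback must be called, how the initial function enters - is what I would expect from a tutorial-style Section 2.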
The intent behind PVars\nbecomes clearer throughout the examples that follow.\n\nIt was not always clear when PVars use the \"initialization function\"\nas a backup solution. In fact, not sure \"initialization\" is the right\nterm; it behaves almost like an \"alternative\" prediction/safety net.\n\nThe examples would benefit from showing the initialization of the PVars.\n\nThe paper would improve if the claims were toned down, the\nlimitations properly addressed and discussed and the implications of\nthe technique honestly described. I also think the\napplicability of the technique beyond the 3 examples presented needs\nto be conveyed, especially given the \"performance\" of the technique\n(several episodes are needed to achieve good performance).\n\nWhile not equivalent, I think papers from approximate computing (and\nperhaps even probabilistic programming) could be cited in the related\nwork. In fact, for an example of how \"non-mainstream\" ideas can be\nproposed for programming languages (and explained in a scientific\npublication), see the work of Adrian Sampson:\nhttps://www.cs.cornell.edu/~asampson/research.html\nIn particular, the EnerJ paper (PLDI 2011) and Probabilistic Assertions (PLDI 2014).\n\nUpdate: I maintain my scores after the rebuttal discussion.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Interesting proposal without clear contributions\", \"review\": \"This paper proposes the use of RL as a set of commands to be included as programming instructions in common programming languages. In this aspect, the authors propose to add simple instructions to employ the power of machine learning in general, and reinforcement learning in particular, in common programming tasks.\n\nIn this aspect, the authors show with three different examples how the use of RL can speed up the performance of common tasks: binary search, sorting and caches.\n\nThe paper is easy to read and follow. \n\nIn my opinion, the main problem of the paper is that the contributions are not clear. The authors claim that they introduce a new hybrid approach between common programming and ML; however, I do not see many differences between calling APIs and the current proposal. The paper seems to be a wrapper of API calls. Here, the authors should comment on existing approaches based on ML and APIs.\n\nThe authors introduce the examples to show the advantages of using predictive variables. Many of the advantages are based on increasing the performance of the algorithms using these predictive variables; however, the results do not include the computational costs of learning the models. \n\nTherefore, in my opinion the paper should be more focused on detailing the commands of use of predictive variables and emphasising the advantages with respect to existing methods. Currently, the paper gives too much relevance to the performance of the experiments, which is not where the novel contributions lie.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Interesting idea but replaces constants with other constants\", \"review\": \"The paper proposes to include, within regular programs, learned parameters that are then tuned in an online manner whenever the program is invoked. 
Thus learning is continuous, and integration with the ML backend is seamless. The idea is very interesting; however, it seems to me that while we can replace native variables with learned parameters, the hyperparameters involved in the learning become new native variables (e.g. the value of feedback). Perhaps with some effort we can replace the hyperparameters with predicted variables too. Other concerns of mine stem from the programmer in me. I think of a program as something deterministic and predictable. With continuous online self-tuning, these properties are gone. How do the authors propose to assuage folks with my kind of mindset? Is debugging programs with predicted variables an issue? Consider a situation where the program showed some behavior with a certain setting of q which has since been tuned to another value and thus the same behavior doesn't show up. I find these to be very interesting questions but don't see much of a discussion in the current draft. Also, how does this work relate to probabilistic programming?\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}" ] }
rJg6ssC5Y7
DeepOBS: A Deep Learning Optimizer Benchmark Suite
[ "Frank Schneider", "Lukas Balles", "Philipp Hennig" ]
Because the choice and tuning of the optimizer affects the speed, and ultimately the performance of deep learning, there is significant past and recent research in this area. Yet, perhaps surprisingly, there is no generally agreed-upon protocol for the quantitative and reproducible evaluation of optimization strategies for deep learning. We suggest routines and benchmarks for stochastic optimization, with special focus on the unique aspects of deep learning, such as stochasticity, tunability and generalization. As the primary contribution, we present DeepOBS, a Python package of deep learning optimization benchmarks. The package addresses key challenges in the quantitative assessment of stochastic optimizers, and automates most steps of benchmarking. The library includes a wide and extensible set of ready-to-use realistic optimization problems, such as training Residual Networks for image classification on ImageNet or character-level language prediction models, as well as popular classics like MNIST and CIFAR-10. The package also provides realistic baseline results for the most popular optimizers on these test problems, ensuring a fair comparison to the competition when benchmarking new optimizers, and without having to run costly experiments. It comes with output back-ends that directly produce LaTeX code for inclusion in academic publications. It supports TensorFlow and is available open source.
[ "deep learning", "optimization" ]
https://openreview.net/pdf?id=rJg6ssC5Y7
https://openreview.net/forum?id=rJg6ssC5Y7
ICLR.cc/2019/Conference
2019
{ "note_id": [ "ryllGpBiyV", "rkeqB9k30Q", "H1lmbe0tC7", "B1g4zLh-C7", "SkgW52MsaQ", "HylEM34GT7", "Byxn8_ae6Q", "Syx2vZqgTQ", "S1xprW9lp7", "Bye00eclTQ", "B1xd12Y037", "r1eC9mIc2X", "r1l6IbIc2X" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544408328207, 1543400002503, 1543262202534, 1542731275733, 1542298761426, 1541717003909, 1541621844311, 1541607779960, 1541607748590, 1541607638079, 1541475295895, 1541198741599, 1541198165268 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper663/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper663/Authors" ], [ "ICLR.cc/2019/Conference/Paper663/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper663/Authors" ], [ "ICLR.cc/2019/Conference/Paper663/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper663/Authors" ], [ "ICLR.cc/2019/Conference/Paper663/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper663/Authors" ], [ "ICLR.cc/2019/Conference/Paper663/Authors" ], [ "ICLR.cc/2019/Conference/Paper663/Authors" ], [ "ICLR.cc/2019/Conference/Paper663/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper663/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper663/AnonReviewer1" ] ], "structured_content_str": [ "{\"metareview\": \"The field of deep learning optimization suffers from a lack of standard benchmarks, and every paper reports results on a different set of models and architectures, likely with different protocols for tuning the baselines. This paper takes the useful step of providing a single benchmark suite for neural net optimizers.\\n\\nThe set of benchmarks seems well-designed, and covers the range of baselines with a variety of representative architectures. It seems like a useful contribution that will improve the rigor of neural net optimizer evaluation. \\n\\nOne reviewer had a long back-and-forth with the authors about whether to provide a standard protocol for hyperparameter tuning. I side with the authors on this one: it seems like a bad idea to force a one-size-fits-all protocol here. \\n\\nAs a lesser point, I'm a little concerned about the strength of some of the baselines. As reviewers point out, some of the baseline results are weaker than typical implementations of those methods. One explanation might be the lack of learning rate schedules, something that's critical to get reasonable performance on some of these tasks. I get that using a fixed learning rate simplifies the grid search protocol, but I'm worried it will hurt the baselines enough that effective learning rate schedules and normalization issues come to dominate the comparisons.\\n\\nStill, the benchmark suite seems well constructed on the whole, and will probably be useful for evaluation of neural net optimizers. 
I recommend acceptance.\", \"confidence\": \"3: The area chair is somewhat confident\", \"recommendation\": \"Accept (Poster)\", \"title\": \"a useful benchmark for deep learning optimizers, but limited research contribution\"}", "{\"title\": \"Fourth Response\", \"comment\": \"We do think that the protocol we suggest is sound and provides insightful analyses, even though we agree that we should strive for even better practices in the future.\", \"regarding_the_hyperparameter_tuning\": \"During the last revision we added the following comment to Chapter 4 (which discusses the baseline results): \\\"While we are using a log grid search, researchers are free to use any other hyperparameter tuning method, however this would require re-running the baselines as well.\\\"\\n\\nUnfortunately, the ICLR revision period has ended, but should this paper be accepted, we would like to put more emphasis on this comment in the camera-ready version. We would also like to add some clarifying comments to Section 2.4 (which discusses hyperparameter tuning) to\\n- explain our choice for a log grid search as the baseline hyperparameter tuning procedure,\\n- point out and cite other hyperparameter tuning methods,\\n- and emphasize that using another hyperparameter tuning method requires re-running the baselines with that protocol.\\n\\nWe thank you for your comments and suggestions and are looking forward to the final decision.\"}", "{\"title\": \"Response to the comments of authors\", \"comment\": \"I agree not to get tied up in details. The discussion about the runtime analysis got too much into details indeed.\\n\\nI do not agree that my criticisms are assimilable to an ideal solution. The goal of this framework is to provide \\\"protocol[s] for the quantitative and reproducible evaluation of optimization strategies for deep learning\\\". A strict minimum for this is that the protocols in question should be sound and provide insightful analyses. To constrain ourselves to current bad practices does not satisfy the strict minimum in my opinion, even though it is done for the sake of simplicity and usability.\\n\\nThe importance of using exactly the same method as the one used for the benchmark to execute hyper-parameter search should be stressed out very clearly in the paper, explaining why and how to do it. This would indeed partly solve one of my main criticism for this paper.\"}", "{\"title\": \"Third Response\", \"comment\": \"Thanks for your continuing interest in the conversation! We answer to your points briefly below, but at this point we think it is important to return to the core premise of the paper and not get tied up in details. We understand that you would like to have the ideal solution. But at this point practice in the field is very far away from what you are asking for. We wanted to provide a benchmark tool that is practical, easy to use, and does not impose unrealistic or counterproductive constraints on researchers. We continue to believe that our work significantly improves the current standards in benchmarking for deep learning.\", \"briefly_on_your_points\": \"1. You asked for \\\"one set of baselines optimized with a specific [hyperparameter optimization] framework defined by you\\\". We are providing that, using the most basic framework possible: a (log) grid search with a fixed budget. You are arguing that this is a bad method and we wholeheartedly agree. But we are sure that you would agree that it might well be the most common method in practice. 
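To make the baseline protocol fully concrete: it amounts to nothing more than the following loop (an illustrative stand-in with a toy objective, not the actual DeepOBS code):

    import numpy as np

    def train_and_evaluate(learning_rate):
        # Stand-in for training an optimizer on one test problem and returning
        # its final test loss; toy objective with its optimum near 10**-2.5.
        return (np.log10(learning_rate) + 2.5) ** 2

    grid = np.logspace(-5, 1, num=36)  # fixed budget: 36 log-spaced settings
    losses = [train_and_evaluate(lr) for lr in grid]
    best_lr = grid[int(np.argmin(losses))]
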
As you point out yourself, researchers are free to use any other hyperparameter optimization method, given that they re-tune the hyperparameters of the baseline methods (SGD, etc.) using the same method (we will add a paragraph to the paper to point this out more clearly). The baselines provided by us are purely for convenience and that's why we think it makes sense to keep the overhead needed to compare a new method to them as small as possible by choosing a simplistic tuning protocol.\\n\\n2. We acknowledge that a runtime evaluation across hyper-parameter settings would be ideal (many things are ideal, but not all are feasible). We do not enforce this because for many methods (in particular first-order ones) runtime per step is independent of the hyper-parameters, and it seems silly to force people to show this over and over again. Note, though, that anyone can use the provided tools to provide exactly the sweep you suggested. We include a remark in the updated paper, suggesting to estimate runtime as a function of hyper-parameter settings where necessary.\\n\\n3. We plan to include SGD (as well as Momentum and Adam) with learning rate decay schedules as a second, more challenging baseline in the future.\"}", "{\"title\": \"Response to the comments of authors\", \"comment\": \"There seems to be a misconception that hyper-parameter optimization methods increase the computational cost. One of the reasons grid-search is not recommended is precisely because it is an inefficient way of doing hyper-parameter optimization, leading to wasted computations. Otherwise, I certainly agree that the exhaustiveness of the benchmark should be limited so that the computational cost is affordable to most researchers and that the usability should be simple enough to avoid scaring researchers away. I do not believe my comments are in contradiction with this. Our disagreement seems to be about how expensive the hyper-parameter optimization methods are and how simple or complex the runtime evaluation is.\\n\\n1. There is no need to force researchers to use a specific hyper-parameter optimization method. What is needed is a clear procedure about how to optimize them so that baselines are comparable. That means, at least one set of baselines optimized with a specific framework defined by you, and clear guidelines about how baselines should be optimized if researchers want to use a different hyper-parameter optimization method. Although there is no widely accepted framework for the optimization of hyper-parameters, there is agreement that grid-search is the worst method. For the specific case with limited number of hyper-parameters such as step and batch sizes, simple Bayesian optimization methods have proved to work well.\\n\\n2. A better tradeoff would be to reuse the available information about hyper-parameter optimization. Given that hyper-parameter optimization is being executed, we should already have plenty of results. The problem for runtime evaluation is that we need to run the baselines again so that all runtime evaluations are done on the same hardware. If the results of the baseline's hyper-parameter optimization was provided, the runtime evaluation sweep could be easily automated and given that each execution is fairly short (only two epochs), the total execution time would be less than a single training.\\n\\n3. It would be surprising indeed that fine-tuning of the learning rate alone would have a dramatic effect. The main problem is probably the lack of learning rate decay schedule. 
As suggested in [1], it is better to compare SGD with adaptive gradient methods by using a learning rate decay schedule.\n\nI do understand the usefulness of the train eval set. To expand my previous comment, I am wondering why you do not also use a validation set, because standard practice is to use it to select the best model (hyper-parameters) and then make final comparisons on the test set. What is done in the current work is both selection and final comparison on the test set, which is not recommended.\n\n[1] Wilson, Ashia C., Rebecca Roelofs, Mitchell Stern, Nati Srebro, and Benjamin Recht. \"The marginal value of adaptive gradient methods in machine learning.\" In Advances in Neural Information Processing Systems, pp. 4148-4158. 2017.\"}", "{\"title\": \"Second Response\", \"comment\": \"A general remark regarding all your comments. There is a very important trade-off between exhaustive benchmarking (all the information and plots that we want to see) and ease of use/computational cost. When designing DeepOBS we tried to not increase the (computational) effort for the researchers, while substantially increasing the quality of the benchmark.\n\n1. We take your point about hyperparameter fitting. But let\u2019s be clear: There is no widely accepted framework for the adaptation of hyperparameters such as step and batch sizes, etc. Which framework would you recommend we impose on our community to provide a level playing field for *all* ongoing research in deep learning optimization? Hypergradients, Learning to learn, Probabilistic Line searches, or Barzilai-Borwein? If we decided to pick any of these and force people to use it, would that convince you to accept this paper? And do you think it would increase the user base, or rather restrict it?\n\n2. What you are describing is definitely something that is desirable. However, a runtime sensitivity analysis would increase the effort to run such a benchmark drastically, especially in the case of many hyperparameters, which is why it is not common in deep learning optimization papers. One possible trade-off would be to estimate the runtime of the optimal hyperparameter setting of each test problem and report it in Table 2 along with the iterations as a relative factor. This would require at most 8 runtime estimations, while still providing some insights into what runtimes are to be expected in practice.\n\n3. We are looking into our setup of this test problem to see why it produces worse results than reported in the paper. We don't believe that drastically better results can be obtained by simply tuning the learning rate more. When looking at Figure 2, we would argue that it seems very unlikely that a drastically improved performance can be found from a learning rate we didn't test. If sampling the hyperparameters even more (say on a loggrid with 100 points) would convince you, we can update our baselines accordingly.\n\nWe realize now that calling it a \"train eval *set*\" might be confusing. It is not a separate data set (it is just in many ways implemented like one). What we call \"train eval set\" is just the evaluation of the training data set in the same fashion you would evaluate the test set (same size, not using dropout, etc.). This is a better estimator for training performance than the regular mini-batch train loss and train accuracy we get while training. 
Since we want to assess training performance, it would not make sense to use a validation set.\"}", "{\"title\": \"Response to the comments of authors\", \"comment\": \"Although I agree that the improvement of DeepOBS in its current state is better than the status quo, I do not believe this is a reason substantial enough to accept the current version of the paper as a conference paper.\n\n1. Leaving it to the researcher to do the hyper-parameter search without standard procedures is dangerous. Researchers could easily report results overfitting the test set, although hopefully the use of different seeds could alleviate this problem. What constitutes a realistic comparison should be defined more precisely. In my mind, realistic would mean that the difficulty of the problems is similar to those tackled in research, that the complexity of the architectures is similar and that the computational budget is one available to most researchers. Bad practices like hyper-parameter optimisation on the test set or unequal hyper-parameter optimisation on different benchmarks should not be included as what is realistic, even though this is unfortunately something common in our community. My point on unequal optimization on different benchmarks is the reason why I do not think we can divide the critique into two aspects. Optimizers should be compared on an equal footing, which means the procedure used to compute the results of the baselines should be the same as the one used for the new optimizers of interest.\n\n2. The proposed solution would not scale for many sets of hyper-parameters. There should be a measure of runtime sensitivity across many different hyper-parameter values weighted by the corresponding generalization performances. This is also related to my criticism on the limited interpretability of Figure 2, in the sense that such runtime sensitivity could be measured using tools such as those presented in [2].\n\n3. I agree that state-of-the-art results are not necessarily a prerequisite for such a benchmark. However, if a given setup normally yields better results than what is reported as a baseline, how confident can we be that this baseline is reliable? Could it be that improvements of a new optimizer over the baselines could also be observed on SGD itself given a better hyper-parameter tuning? I would like to reiterate my criticism on interpretability here, which was not addressed in the comment. I strongly believe that the introduction of such a benchmark should be accompanied by improved methods of analysis; otherwise the conclusions that one can draw from the benchmark are likely to be brittle.\n\n4. I agree that the variety of problems presented is a good first step, and would be sufficient as the first step.\n\nThank you for the clarification about the \"train eval set\". I wonder, however, why you do not use a validation set.\n\nI truly appreciate the nature of your work and I sincerely hope my comments encourage you to progress further. Although I acknowledge the importance of this kind of work, I believe the lack of standardized hyper-parameter optimization procedures and analysis methods is a serious issue for such a benchmark.\"}", "{\"title\": \"Response to the Comments of Reviewer 2\", \"comment\": \"Dear Reviewer 2,\n\nthank you very much for your constructive review.\nWe are happy that you agree with us that a benchmarking suite would be an important step. 
While we acknowledge that the presented solution is not optimal, we would argue that it significantly improves on the status quo. Just like Reviewer 1, we worry \"that people will still find minor quibbles with particular choices or tasks in this suite, and therefore continue to use bespoke comparisons\". We believe that the improvement of DeepOBS compared to the status quo (which often is to just use MNIST and CIFAR10 and compare to SGD or Adam) is larger than the step from DeepOBS to where we hope to be.\n\nWe also want to address the shortcomings you mentioned in your review.\n\n1. We believe that we can split this critique into two aspects. Firstly, the hyper-parameter optimization that we do for our baselines. While we agree that grid search is not at all an optimal approach, we would argue that it is the method most common in practice (for example [1, 2]). The main goal of our baselines is to be a realistic comparison. We plan to include more sophisticated baselines in the future, for example ones that include learning rate decay schedules. We could tune these schedules with more complex methods than grid search, to provide a more challenging competition.\nThe second part is that we don't provide a hyperparameter tuning method for the user. We did this on purpose. A hypothetical user of DeepOBS might want to highlight that their new optimization method gets good results using default hyperparameter values, while also showing that tuning those parameters a little bit can give you even better results. Therefore, we believe that the choice of hyperparameter tuning method should be left to the user. As long as they document this tuning process, and report the final hyperparameters on each test problem, the results are still comparable even when different tuning methods are used.\n2. It is an interesting point you raised here. Indeed we only estimate the runtime for a single set of hyperparameters. However, the hyperparameters used for this estimation are flexible. In the scenario that you describe, the best option for the user of DeepOBS would be to do the estimation step twice for both settings and report both numbers.\n3. We will indeed double-check the results of the Wide ResNet on SVHN. In contrast to the original paper, we do not use Nesterov momentum, nor a learning rate decay schedule. We also train for fewer epochs. The point of the test problems is not to provide state of the art results, but to compare the performances of optimization methods. Nevertheless, we will check our SVHN results and are currently running new experiments. Thanks for pointing this out.\n4. While we agree that the set of test problems is a bit biased towards image classification, we also believe that this set is much more exhaustive than what is currently used in practice (which is often just MNIST and CIFAR10). If there is a specific test problem that you would like us to add, we would gladly do so. We see this set of test problems as a starting point and DeepOBS can be continuously improved and extended.\n\nWe also tried to address the notes on clarity and changed the figures accordingly.\n\nIn section 2.2 we mention a \"train eval set\", which is not a standard validation set. We use this train eval set whenever we want to evaluate our training performance. We distinguish between using the training data to train, and using the training data to evaluate the performance on it. 
During this \"training evaluation phase\", we evaluate on a set that is as large as the test set and also use the neural network architecture in \"evaluation mode\" (for example we do not use dropout). This allows for a fairer comparison between test loss and train loss as both are computed in the same way.\n\n\nWe hope that by addressing your points we were able to alleviate some of your concerns. You agree with us and the other reviewers that a benchmarking suite for deep learning optimizers would be a significant step and a useful tool for the field and that currently no such tool exists. We kindly ask you to reconsider your evaluation of the paper in light of this response.\n\n\n[1] Diederik Kingma and Jimmy Ba. \"Adam: A Method for Stochastic Optimization.\" Proceedings of the 3rd\nInternational Conference on Learning Representations (ICLR), 2015.\n\n[2] Tao Lin, Sebastian Stich, and Martin Jaggi. \"Don't Use Large Mini-Batches, Use Local SGD.\" arXiv, 2018.\"}", "{\"title\": \"Response to the Comments of Reviewer 3\", \"comment\": \"Dear Reviewer 3,\n\nthank you for your positive review. We are happy that you agree with us that benchmarking stochastic optimization methods is a relevant project.\n\nWe also want to address some of the points you have raised.\", \"cons\": \"1) We agree that it might take a large effort to convince others to use and contribute to DeepOBS. We designed DeepOBS to make adding new optimization methods as easy as possible. As long as you can implement your new optimizer in TensorFlow, you can add it to DeepOBS by sending us a pull request. We will invest time to run new optimization methods ourselves, and, provided they give state-of-the-art performance, add them to the baselines.\nAdditionally, benchmarking new optimization methods can take a lot of time, from setting up realistic test problems to computing fair baselines. With DeepOBS, this is unnecessary and researchers can spend more time on developing their optimization methods and less time on thinking about the benchmarking aspect. We hope that this is incentive enough to use DeepOBS.\n2) We agree that offering DeepOBS in other frameworks could be beneficial. However, we chose TensorFlow as it is arguably the most popular framework at the moment, and we had to start somewhere. We want to note that the actual software implementation is only a part of this paper.\n3) If you can point us to some examples of bad writing in the paper, we would be very happy to address and re-write them and improve or clarify the sections.\", \"we_also_addressed_the_minor_points_you_mentioned\": \"We changed the names in Figure 1 to be more consistent. We hope that the picture is now more informative.\nIn the current version, Figures 2 and 3 are now switched.\nWe fixed the \"?\" in Table 1. It was the result of a typo in a citation. Thanks for noting this.\nWe added an explanation for Table 2.\n\nPlease note that by making these changes the paper is now longer than 8 pages. We will work to reduce it to 8 pages again for the final version.\"}", "{\"title\": \"Response to the Comments of Reviewer 1\", \"comment\": \"Dear Reviewer 1,\n\nthank you very much for your positive review.\n\nWe want to address the minor comments you raised.\n- We added a remark in section 2.3 regarding the automated estimation of per-iteration cost in DeepOBS.\n- With the current setup, computing the baseline performances on all 26 test problems would require more than 3500 runs. 
As these test problems also include the ImageNet data set, this could take quite a while. We therefore doubt whether we could finish this in time for ICLR. However, we will add these results to the DeepOBS package as soon as they are finished so that the software package has baseline performances for all test problems.\n- Thank you for the reference. We will look into performance profiles to see how we can use them.\"}", "{\"title\": \"An important _first_ step towards standardized procedures for benchmarking optimizers in deep learning.\", \"review\": \"This paper presents a new benchmark suite to compare optimizers on deep neural networks. It provides a pipeline to help streamline the analysis of new optimizers, which would favor easily reproducible results and fair comparisons.\n\nQuality\n\nThe paper covers well the problems underlying the construction of such a benchmark, discussing the problem and model selection, runtime estimation, hyper-parameter selection and visualizations. It falls short however in some cases:\n\n1. Hyper-parameter optimization\n While they mention the importance of hyper-parameter tuning for the benchmark, they leave it to the user to tune them without providing any standard procedure. Furthermore, they use grid search to build the baselines while this is known to be a poor optimizer [1].\n\n2. Estimated runtime\n Runtime is estimated for a single set of hyper-parameters of the optimizer, but some optimizers may have similar or roughly similar results for a large set of hyper-parameters that widely affect the runtime. The effect of the hyper-parameters should be taken into account for this part of the benchmark.\n\n3. Interpretation\n Such a benchmark should make the interpretation of results easier, as the authors suggest. However, the paper does not convey much interpretation in section 4, besides the fact that results are not conclusive for any baseline. Results of the paper seem low, but they are difficult to verify since the plots are not very precise. For instance Wide ResNet-18-8 reports 1.54% test error on SVHN [6] while this paper reports ~ 15% for the Wide ResNet 18-4 version. Figure 2 is a good attempt at making interpretations of sensitivity of optimizers' hyper-parameters but has limited interpretability compared to what can be found in the literature [2].\n\n4. Problems\n There is an effort to provide varied types of problems, including classical optimization functions, image classification, image generation and language modeling. The set of problems however consists mostly of image classification and is very limited for image generation and language modeling.\n\nClarity\n\nThe paper is well written and easy to understand in general. \n\nOn a minor note, most figures are difficult to read. Side notes on figure 1 do not divide clearly without any capital letters or punctuation at the ends of sentences. Figure 2 should be self-contained with its own legend. Figure 3 is useful for a visual impression of the speed of convergence but a histogram would be necessary for a better visual comparison of the different performances.\n\nSection 2.2 has a confusing terminology for the \"train valid set\". Is it a standard validation set? \n\nOriginality\n\nThere are virtually no benchmarks for optimizers available for the community. I believe a standardized procedure for comparing optimizers can be viewed as an original contribution. 
\\n\\nSignificance\\n\\nReproducibility is a problem in machine learning [3, 4] and the effect of optimizers on the generalization performance of deep neural networks is still not very well understood [5]. Therefore, there is a strong need for a benchmark for sound comparisons and to favor better reproducibility.\\n\\nConclusion\\n\\nThe benchmark presented in this paper would be an important contribution to the community but lacks a few important features in my opinion, in particular, a sound hyper-parameter optimization procedure and sound interpretation tools. On a skeptical note, I doubt the benchmark will be used extensively if the results it provides yield no conclusive interpretation, as reported for the baselines. As I feel there is more work needed to support the goals of the paper, I would suggest this paper for a workshop. Nevertheless, I would not be upset if it were accepted because of the importance of the subject and the originality of this work.\\n\\n[1] Bergstra, James, and Yoshua Bengio. \\"Random search for hyper-parameter optimization.\\" Journal of Machine Learning Research 13, no. Feb (2012): 281-305.\\n[2] Biedenkapp, Andre, Joshua Marben, Marius Lindauer and Frank Hutter. \\u201cCAVE: Configuration Assessment, Visualization and Evaluation.\\u201d In International Conference on Learning and Intelligent Optimization (2018).\\n[3] Lucic, Mario, Karol Kurach, Marcin Michalski, Sylvain Gelly, and Olivier Bousquet. \\u201cAre GANs Created Equal? A Large-Scale Study.\\u201d arXiv preprint arXiv:1711.10337 (2017).\\n[4] Melis, G\\u00e1bor, Chris Dyer, and Phil Blunsom. \\u201cOn the state of the art of evaluation in neural language models.\\u201d arXiv preprint arXiv:1707.05589 (2017).\\n[5] Wilson, Ashia C., Rebecca Roelofs, Mitchell Stern, Nati Srebro, and Benjamin Recht. \\"The marginal value of adaptive gradient methods in machine learning.\\" In Advances in Neural Information Processing Systems, pp. 4148-4158. 2017.\\n[6] Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. arXiv preprint arXiv:1605.07146, 2016.\\n\\n-----------\\nRevision\\n-----------\\n\\nIn light of the discussion with the authors, the revision made to chapter 4 and in particular the proposed modifications to section 2.4 for a camera-ready paper, I revise my score to 6.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Good initiative at its beginning stage\", \"review\": \"As the paper claims, there is no commonly accepted system for benchmarking deep learning optimizers. It is also hard to repeat others' results. The paper describes a benchmarking framework for deep learning optimizers. It proposes three performance indicators, and includes 20 test problems and a core set of benchmarks.\", \"pro\": \"1) It is a very relevant project. There is a need for a unified benchmarking framework. In the traditional optimization field, benchmarking is well studied and well architected. See an example at http://plato.asu.edu/bench.html\\n2) The system is at its early stage, but its design seems complete.\\n3) The paper shows some performance of vanilla SGD, momentum, and Adam.\", \"con\": \"1) It will take tremendous effort to convince others to join the party and contribute.\\n2) It only supports TensorFlow right now.\\n3) The writing can be better.\\n\\nIn Figure 1, make sure the names of components are consistent: either all start with nouns or verbs. The whole picture is not too illustrative. 
\\n\\n\\n\\nCan you switch the order of Figure 2 and Figure 3?\\n\\nIn Table 1, the description of ALL-CNN-C has a '?'. Is it intended?\\n\\nWhy not explain Table 2?\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"An important and useful tool for the field.\", \"review\": \"The authors propose a benchmark for optimization algorithms specific to deep learning called DeepOBS. They provide code to evaluate an optimizer against a suite of standard tasks in deep learning, and provide well-tuned baselines for a comparison. The authors discuss important considerations when comparing optimizers, including how to measure speed and tunability of an optimizer, what metric(s) to compare against, and how to deal with stochasticity.\\n\\nA clear, standardized optimization benchmark suite would be very valuable for the field. As the authors clearly state in the introduction, there have been many proposed optimization algorithms, but it is hard to compare many of these due to differences in how the optimizers were evaluated in the original papers. In general, people have different requirements for what they expect from an optimizer. However, this paper does a good job of discussing most of the factors that people should consider when choosing or comparing optimizers. Providing a set of well-tuned baselines would save people a lot of time in making comparisons with a new optimizer, as well as providing a canonical set of tasks to evaluate against. I particularly appreciated the breadth and diversity of the included tasks.\\n\\nI am a little worried that people will still find minor quibbles with particular choices or tasks in this suite, and therefore continue to use bespoke comparisons, but I think this benchmark would be a valuable resource for the community.\", \"some_minor_comments\": [\"In section 2.3, there is a recommendation for how to estimate per-iteration cost. I would mention in this section that this procedure is automated and part of the benchmark suite.\", \"I wanted to see how the baselines performed on all of the tasks in the suite (not just on the 8 tasks in the benchmark sets). Perhaps those figures could be included in an appendix.\", \"The authors might want to consider including an automated way of generating performance profiles (https://arxiv.org/abs/cs/0102001) across tasks as part of DeepOBS, as a way of getting a sense of how optimizers performed generally across all tasks.\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
H1e6ij0cKQ
Efficient Sequence Labeling with Actor-Critic Training
[ "Saeed Najafi", "Colin Cherry", "Greg Kondrak" ]
Neural approaches to sequence labeling often use a Conditional Random Field (CRF) to model their output dependencies, while Recurrent Neural Networks (RNN) are used for the same purpose in other tasks. We set out to establish RNNs as an attractive alternative to CRFs for sequence labeling. To do so, we address one of the RNN’s most prominent shortcomings, the fact that it is not exposed to its own errors under maximum-likelihood training. We frame the prediction of the output sequence as a sequential decision-making process, where we train the network with an adjusted actor-critic algorithm (AC-RNN). We comprehensively compare this strategy with maximum-likelihood training for both RNNs and CRFs on three structured-output tasks. The proposed AC-RNN efficiently matches the performance of the CRF on NER and CCG tagging, and outperforms it on Machine Transliteration. We also show that our training strategy is significantly better than other techniques for addressing RNN’s exposure bias, such as Scheduled Sampling and Self-Critical policy training.
[ "Structured Prediction", "Reinforcement Learning", "NLP" ]
https://openreview.net/pdf?id=H1e6ij0cKQ
https://openreview.net/forum?id=H1e6ij0cKQ
ICLR.cc/2019/Conference
2019
{ "note_id": [ "SkgBH-9lxE", "ByeZJ0IuC7", "SygQt92dpX", "rJxxaEYAn7", "rygnd-YRn7", "BkgkqtB5nQ", "H1e59mIwnm", "HJxbDunEnm" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544753469212, 1543167449105, 1542142586899, 1541473463698, 1541472627766, 1541196166660, 1541002130135, 1540831321508 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper662/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper662/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper662/Authors" ], [ "ICLR.cc/2019/Conference/Paper662/Authors" ], [ "ICLR.cc/2019/Conference/Paper662/Authors" ], [ "ICLR.cc/2019/Conference/Paper662/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper662/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper662/AnonReviewer1" ] ], "structured_content_str": [ "{\"metareview\": \"This is an interesting approach to use reinforcement learning to replace CRF for sequence tagging, which would potentially be beneficial when the tag set is gigantic. Unfortunately the conducted experiments do not really show this, which makes it difficult to see whether the proposed approach is indeed a viable alternative to CRF for sequence tagging with a large tag set. This sentiment was shared by all the reviewers, and R1 especially pointed out major and minor issues with the submission and was not convinced by the authors' response.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"mismatched goals, evaluation and comparison\"}", "{\"title\": \"Clarity\", \"comment\": \"\\\"- Our main goal is to replace CRF in tagging, especially for tasks with a large number of labels.\\\"\\n\\nBut then you should look at tagging tasks, which is what (neural) CRF approaches are good for; transliteration, as pointed out in the introduction of the paper, is not such a task. \\n\\n- \\\"Combining MLE and RL requires two forward computations in the decoder RNN, one conditioning on the gold standard, another on model-generated tokens.\\\"\\n\\nAnd why is this a problem? Assuming it is the extra computation, then there should be a careful comparison of the processing time needed between the methods. \\n\\n\\\"- Reward reshaping can be used to convert non-decomposable loss functions such as BLEU into step-wise rewards. Reward(t) = BLEU(t) - BLEU(t-1)\\\"\\n\\nBut this is an approximation, so whether it will work depends on many factors. In any case, this sounds more like an argument about BLEU's flexibility, not of the proposed method.\\n\\n- \\\"There is no preprocess alignment done for CRF in transliteration.\\\"\", The paper states\": \"To do so, we pad both sequences with extra end symbols up to a\\nfixed maximum length, and let CRF decode until the end of the padded source sequence.\\\"\\nAs CRF is a tagging method, somehow there is an alignment of input to output characters, which is why you need the padding.\"}", "{\"title\": \"Clarity\", \"comment\": \"Thank you so much for your insightful comments.\\n\\n- Our main goal is to replace CRF in tagging, especially for tasks with a large number of labels.
\", \"MIXER uses REINFORCE; Table 1 illustrates that the REINFORCE family fails compared to actor-critic.\", \"We both have a regressor as critic, but MIXER doesn't bootstrap its estimates in the computed returns.\", \"Combining MLE and RL requires two forward computations in the decoder RNN, one conditioning on the gold standard, another on model-generated tokens.\", \"The prediction is correct, but the advantage given by the critic is wrong, so we skip the update on the actor.\", \"Averaged over 20 runs, at epoch 13, the adjusted actor-critic models are better than the actor-critic and REINFORCE models.\", \"Reward reshaping can be used to convert non-decomposable loss functions such as BLEU into step-wise rewards.\", \"Reward(t) = BLEU(t) - BLEU(t-1)\", \"End-to-end training time and the required GPU memory for one training epoch on CCG supertagging.\", \"We tried the open source code for AC of Bahdanau on transliteration, but it completely failed. We only obtained the dev and test splits from Leblond et al. (2018) on the spelling correction dataset.\", \"We will review the mentioned paper.\", \"Scheduled Sampling is inspired by DAgger.\", \"Character embeddings are fine-tuned during training.\", \"There is no preprocess alignment done for CRF in transliteration.\"]}", "{\"title\": \"Clarity\", \"comment\": \"Our main goal is to replace CRF in tagging, especially for tasks with a large number of labels.\\nWe would appreciate being referred to an existing paper comparing Seq2Seq with encoder RNN + CRF decoding.\"}", "{\"title\": \"Clarity\", \"comment\": \"1- The concern is well understood, though related works have already defined it clearly. The bias originates from the method of training, not the RNN itself.\", \"2_throughout_the_paper\": \"\", \"crf\": \"LSTM encoder + CRF decoding with MLE training\", \"rnn\": \"LSTM encoder + LSTM decoder with MLE training\", \"ac_rnn\": \"LSTM encoder + LSTM decoder with MLE & Actor-Critic training\"}", "{\"title\": \"Actor critic for sequence labeling; not very novel but good results on transliteration; inadequate comparison\", \"review\": \"The authors propose an actor-critic method for sequence labeling and show that it performs better than (and is more stable than) other RL approaches and also outperforms other techniques for countering exposure bias like scheduled sampling.\\n\\nThe results show a very small improvement in tagging tasks like NER and CCG supertagging compared to other approaches; but they show good improvement in the transliteration task, which is more of a transduction task than a tagging task. \\n\\nThe authors also discuss the adjusted training procedure which accounts for bad performance of the critic model in the initial stages of training. The approach is not very novel because actor-critic for more general sequence-to-sequence models (arguably more complex than tagging) has already been explored in the literature (Bahdanau et al., cited by the authors). The major difference in the proposed approach is the use of a stepwise Hamming-loss-based reward and it is unclear whether this is a major contribution which sets it apart from the previous work on AC for sequence modeling. 
For example, a good comparison would be to do tagging in seq2seq style and use the approach proposed in the existing AC work to show the value of the approach proposed here.\\n\\nAlso, minor claims about thoroughness of comparison with CRF are ill-founded, as previously published work on tagging has indeed compared CRF, independent, and LSTM/RNN-based models.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"The paper presents reinforcement learning algorithms for dealing with the \\\"exposure bias\\\" problem of RNNs in sequence labeling tasks. The paper suffers from clarity issues - for example it was hard to understand the exposure bias problem. I also miss an important comparison in the experimental section - to the LSTM-CRF model.\", \"review\": \"The paper presents reinforcement learning algorithms for dealing with the \\\"exposure bias\\\" problem of RNNs in sequence labeling problems. While I admire the thoroughness of both the algorithmic work and experimental setup, I am afraid the paper suffers from two major problems:\\n\\n1. The paper suffers from serious clarity issues. Particularly, the main problem the paper deals with - exposure bias - is not well explained. I admit that while I am working with RNNs on a regular basis, I was not familiar with this problem. Unfortunately, I was also not able to understand it from the paper. This may be a very basic concept, but a paper must be self-contained. Unfortunately, after reading the paper, cover to cover, I cannot tell what problem the authors are trying to solve (except, of course, from providing a better training algorithm for RNNs).\\n\\n2. As the authors say already in the abstract, one of the best performing models on structured NLP tasks is LSTM-CRF, which combines the power of both the neural and the structured prediction frameworks. However, the authors do not compare their solution to LSTM-CRF, but only to LSTM and to CRF. This is a very important baseline, and without a proper comparison it is hard to evaluate the contribution of this paper.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Unclear focus of the paper: tagging or sequence generation? Comparisons not informative\", \"review\": [\"I found the paper difficult to follow. The method proposed is not well motivated, and the literature review does not explain the novelty well. Here are some questions/points for discussion:\", \"The token-level MLE training is not what causes the exposure bias: one can train with MLE and still avoid it by generating appropriate sequences using the RNN, as in scheduled sampling. The problem with MLE (or cross entropy) is that the labels to be predicted might not be the correct ones. See the paper by Ranzato et al. (ICLR2016) for a good discussion of the issue: https://arxiv.org/pdf/1511.06732.pdf\", \"The criticism against previous works for not comparing against CRFs seems odd: CRFs are given the number of labels, words, etc. to predict, typically the same as the number of words to be tagged. If one has this, as well as binary rewards for each decision, then there is little benefit for RL/IL based approaches to be used. The point for them is the use of non-decomposable loss functions such as BLEU, which are not common in tagging, but in tasks like MT, where CRFs can't be used. 
In fact, for the transliteration experiments in the paper, the CRF approach is padded to perform the task, which highlights that it is not the right comparison.\", \"The approach proposed seems very similar to MIXER, which also learns a regressor to predict the reward for each action. A direct comparison, both in terms of how the approaches operate and empirically, is needed.\", \"Why is it a problem that previous works by Ranzato, Bahdanau and Paulus combine MLE and RL? You are using the same supervision, i.e., the labeled corpus.\", \"The adjusted training seems to essentially not reward correct predictions (top branch in the equation). Why is this a good idea?\", \"In Figure 1 it is not clear at all that the proposed approach works; depending on the epoch, the ranking among the three variants differs.\", \"What does it mean for one method to surpass the other in flexibility? If anything, the requirement for immediate rewards after every action restricts flexibility, as one can't use non-decomposable loss functions such as BLEU, which are pretty common in NLP.\", \"How is the training efficiency measured in the paper?\", \"Why not compare against MIXER, as well as more recent work by Leblond et al. (2018): https://arxiv.org/abs/1706.04499? I don't see why the Rennie et al. 2017 method is picked for comparison.\", \"It is not true that in IL one needs a gold-standard policy; one can learn with sub-optimal policies, see Sun et al. (2018): https://arxiv.org/pdf/1703.01030.pdf\", \"It is odd to say that an approach proposed earlier (DAgger) reduces to a variant of a later proposed one (Scheduled Sampling); the reduction should be the other way around.\", \"Are the randomly initialized character embeddings for transliteration tuned during training?\", \"How were the alignments for training the CRF obtained?\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}" ] }
SkgToo0qFm
Transferrable End-to-End Learning for Protein Interface Prediction
[ "Raphael J. L. Townshend", "Rishi Bedi", "Ron O. Dror" ]
While there has been an explosion in the number of experimentally determined, atomically detailed structures of proteins, how to represent these structures in a machine learning context remains an open research question. In this work we demonstrate that representations learned from raw atomic coordinates can outperform hand-engineered structural features while displaying a much higher degree of transferrability. To do so, we focus on a central problem in biology: predicting how proteins interact with one another—that is, which surfaces of one protein bind to which surfaces of another protein. We present Siamese Atomic Surfacelet Network (SASNet), the first end-to-end learning method for protein interface prediction. Despite using only spatial coordinates and identities of atoms as inputs, SASNet outperforms state-of-the-art methods that rely on hand-engineered, high-level features. These results are particularly striking because we train the method entirely on a significantly biased data set that does not account for the fact that proteins deform when binding to one another. Demonstrating the first successful application of transfer learning to atomic-level data, our network maintains high performance, without retraining, when tested on real cases in which proteins do deform.
[ "transfer learning", "protein interface prediction", "deep learning", "structural biology" ]
https://openreview.net/pdf?id=SkgToo0qFm
https://openreview.net/forum?id=SkgToo0qFm
ICLR.cc/2019/Conference
2019
{ "note_id": [ "HyxfwCuZxE", "H1lU_ZlqaX", "BklQEZg5TX", "H1etg-g5pX", "B1gm0gxqp7", "S1gJdPVn3m", "Skl6siujnm", "SkeItzcq3m" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544814170404, 1542222189613, 1542222122854, 1542222064969, 1542222026839, 1541322599411, 1541274532535, 1541214846295 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper661/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper661/Authors" ], [ "ICLR.cc/2019/Conference/Paper661/Authors" ], [ "ICLR.cc/2019/Conference/Paper661/Authors" ], [ "ICLR.cc/2019/Conference/Paper661/Authors" ], [ "ICLR.cc/2019/Conference/Paper661/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper661/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper661/AnonReviewer1" ] ], "structured_content_str": [ "{\"metareview\": \"Two out of three reviews for this paper were provided in detail, but all three reviewers agreed unanimously that this paper is below the acceptance bar for ICLR. The reviewers admired the clarity of writing, and appreciated the importance of the application, but none recommended the paper for acceptance due largely to concerns about the experimental setup.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"No strong reviewer support\"}", "{\"title\": \"Response to Reviewer 1\", \"comment\": \"Response: We thank the reviewer for their comments. We would like to clarify one of the central points of this paper, as the cons presented are built upon a misunderstanding of this point. We are not proposing a new transfer learning model -- we are demonstrating the transferrability of the atomic features we have learned. We train our structural features on C_r and show that with no re-training they can achieve state-of-the-art results on C_p. Applying a classical transfer learning algorithm might improve performance even further, as then we could fine-tune results on C_p. Though this is an interesting direction, it is outside the scope of the work we present here, which concerns itself with the learned representations themselves. Thus, instead of comparing transfer learning methods, we evaluate the transferrability of both our own structural features as well as those of competitors.\"}", "{\"title\": \"Response to Reviewer 2\", \"comment\": \"We thank the reviewer for their comments. We address their comments individually below.\\n\\n> The work is more suitable for a bioinformatics audience though, as the bigger contribution is on the particular application, rather than the model / method itself. The main contribution of this paper is the representation of the protein interaction data in the input layer of the CNN\", \"response\": \"We can demonstrate consistent performance at different ratios of positive:negative examples. Running tests on C_p^{test} at 1:3, 1:5, and 1:10 ratios demonstrates no significant impact on performance (0.889 [0.882 +/- 0.012], 0.889 [0.882 +/- 0.011], and 0.895 [0.886 +/- 0.015], respectively). The AUROC metric we use is insensitive to class imbalance, and thus is a good measure to use when evaluating on datasets with varying amounts of imbalance.\"}", "{\"title\": \"Response to Reviewer 3 [2/2]\", \"comment\": \"> Moreover, it is the prediction performance that matters for such a task, but the authors remove the non-structure features from the compared methods. 
Results and discussion about how the previous methods with full features perform compared to SASNet, and also how we can include those features into SASNet should complete the paper.\", \"response\": \"We have removed the SASNet ensemble from the paper, as it was based on C_p^{val} and confuses the point we are making about minimally relying on C_p for training and validation. We could definitely investigate further why this mild ensembling yields a small performance increase, but we see this as tangential to the overarching points of the paper.\"}", "{\"title\": \"Response to Reviewer 3 [1/2]\", \"comment\": \"We thank the reviewer for their comments. We address their comments individually below.\\n\\n> My overall concern is that the experiment results don\\u2019t really fully support the claim in two aspects: 1) the SASNet takes the enriched dataset as input to the neural net but it also uses the complex (validation set) to train the optimal parameters, so strictly speaking it doesn\\u2019t really fit in the \\u201ctransfer\\u201d learning scenario. 
Moreover, it is the prediction performance that matters to such task, but the authors remove the non-structure features from the compared methods. Results and discussion about how the previous methods with full features perform compared to SASNet, and also how we can include those features into SASNet should complete the paper.\\n\\nOverall the paper is well written, and I do think the paper could be much stronger the issues above are addressed.\", \"some_minor_issues\": \"1)\\ton page 4, Section 3, the first paragraph, shouldn\\u2019t \\u201cC_p^{val} of 55\\u201d be \\u201cC_p^{test} of 55\\u201d?\\n\\n2)\\tIt is not clear what the \\u201creplicates\\u201d refer to in the experiments.\\n\\n3)\\tSome discussion on why the \\u201cSASNet ensemble\\u201d would yield better performance would be good; could it be overfitting?\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Decent application paper and setup for siamese networks\", \"review\": \"Summary:\\nThis paper uses siamese networks to define a discriminative function for predicting protein-protein interaction interfaces. They show improvements in predictive performance over some other recent deep learning methods. \\nThe work is more suitable for a bioinformatics audience though, as the bigger contribution is on the particular application, rather than the model / method itself.\", \"novelty\": \"The main contribution of this paper is the representation of the protein interaction data in the input layer of the CNN\", \"clarity\": [\"The paper is well written, with ample background into the problem.\"], \"significance\": [\"Their method improves over prior deep learning approaches to this problem. However, the results are a bit misleading in their reporting of the std error. They should try different train/test splits and report the performance.\", \"This is an interesting application paper and would be of interest to computational biologists and potentially some other members of the ICLR community\", \"Protein conformation information is not required by their method\"], \"comments\": [\"The authors should include citations and motivation for some of their choices (what sequence identity is used, what cut-offs are used etc)\", \"The authors should compare to at least some popular previous approaches that use a feature engineering based methodology such as - IntPred\", \"The authors use a balanced ratio of positive and negative examples. The true distribution of interacting residues is not balanced -- there are several orders of magnitude more non-interacting residues than interacting ones. Can they show performance at various ratios of positive:negative examples? In case there is a consistent improvement over prior methods, then this would be a clear winner\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"good idea but unclear model\", \"review\": \"This manuscript applies transfer learning for protein surface prediction. The problem is important and the idea is novel and interesting. However, the transfer learning model is unclear.\", \"pros\": \"interesting and novel idea\", \"cons\": \"unclear transfer learning model, insufficient experiments.\", \"detail\": \"section 4 describes the transfer learning model used in the work, but the description is unclear. 
It is unknown the used model is a new model or existing model. Besides, in the experiments, the proposed method is not compared to other transfer learning methods. Thus, the evidence of the experiments is not enough.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
SJe2so0qF7
Learning data-derived privacy preserving representations from information metrics
[ "Martin Bertran", "Natalia Martinez", "Afroditi Papadaki", "Qiang Qiu", "Miguel Rodrigues", "Guillermo Sapiro" ]
It is clear that users should own and control their data and privacy. Utility providers are also becoming more interested in guaranteeing data privacy. Therefore, users and providers can and should collaborate in privacy protecting challenges, and this paper addresses this new paradigm. We propose a framework where the user controls what characteristics of the data they want to share (utility) and what they want to keep private (secret), without necessarily asking the utility provider to change its existing machine learning algorithms. We first analyze the space of privacy-preserving representations and derive natural information-theoretic bounds on the utility-privacy trade-off when disclosing a sanitized version of the data X. We present explicit learning architectures to learn privacy-preserving representations that approach this bound in a data-driven fashion. We describe important use-case scenarios where the utility providers are willing to collaborate with the sanitization process. We study space-preserving transformations where the utility provider can use the same algorithm on original and sanitized data, a critical and novel attribute to help service providers accommodate varying privacy requirements with a single set of utility algorithms. We illustrate this framework through the implementation of three use cases; subject-within-subject, where we tackle the problem of having a face identity detector that works only on a consenting subset of users, an important application, for example, for mobile devices activated by face recognition; gender-and-subject, where we preserve facial verification while hiding the gender attribute for users who choose to do so; and emotion-and-gender, where we hide independent variables, as is the case of hiding gender while preserving emotion detection.
[ "Machine learning", "privacy", "adversarial training", "information theory", "data-driven privacy" ]
https://openreview.net/pdf?id=SJe2so0qF7
https://openreview.net/forum?id=SJe2so0qF7
ICLR.cc/2019/Conference
2019
{ "note_id": [ "H1e98F5Nx4", "Skg-3CDukE", "SJlJvdaSCm", "rJlYm_aSA7", "BylZF46rCm", "Hyl2M7THCX", "H1xAaMTSR7", "HJlxjQO6nQ", "BJgbp4wT2Q", "Syxle0BY3Q" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1545017682025, 1544220328886, 1542998103160, 1542998048934, 1542997113049, 1542996756063, 1542996677978, 1541403544249, 1541399736770, 1541131752173 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper660/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper660/Authors" ], [ "ICLR.cc/2019/Conference/Paper660/Authors" ], [ "ICLR.cc/2019/Conference/Paper660/Authors" ], [ "ICLR.cc/2019/Conference/Paper660/Authors" ], [ "ICLR.cc/2019/Conference/Paper660/Authors" ], [ "ICLR.cc/2019/Conference/Paper660/Authors" ], [ "ICLR.cc/2019/Conference/Paper660/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper660/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper660/AnonReviewer2" ] ], "structured_content_str": [ "{\"metareview\": \"This paper addresses data sanitization, using a KL-divergence-based notion of privacy. While an interesting goal, the use of average-case as opposed to worst-case privacy misses the point of privacy guarantees, which must protect all individuals. (Otherwise, individuals with truly anomalous private values may be the only ones who opt for the highest levels of privacy, yet this situation will itself leak some information about their private values).\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Misses the point of privacy\"}", "{\"title\": \"Additional response\", \"comment\": \"We would like to thank the reviewer for their comments on the updated manuscript.\\n\\nWe addressed the typo in Equation 20.\", \"regarding_the_comments_on_stronger_privacy_guarantees\": \"Indeed we would like to get tighter and stronger guarantees, this is a starting point. We do feel that the type of privacy concerns that can be addressed in a way similar to the one presented in this paper can be of use in real-world scenarios. We will continue working in exactly that direction, that of providing formal guarantees and bounds on privacy provided by these types of approaches.\"}", "{\"title\": \"Response to Reviewer continued\", \"comment\": \"#REVIEWER QUOTE:\\n[Con #2] I had trouble understanding the motivation for the Subject within Subject case study in Section 5.1. The authors describe the problem as follows: \\\"Imagine a subset of users wish to unlock their phone using facial identification, while others opt instead to verify their right to access the phone using other methods; in this setting, we would wish the face identification service to work only on the consenting subset of users, but to respect the privacy of the remaining users.\\\" The proposed solution (Figure 3) applies minor perturbations to the pictures of consenting subjects while editing the photos of the non-consenting users to leave only their silhouettes. A simple baseline would be to remove the photos of the non-consenting users from the dataset entirely. The case study would greatly benefit from a discussion of why the baseline is insufficient. 
It's also perfectly reasonable to say that the section is meant as a way to check whether the objective function from Section 4 can lead to reasonable behavior in practice, but if so, the intent should be clarified.\\n#\\n\\n*RESPONSE:\\nWe modified paragraph 1 of the Experiments and results section to better motivate the scenarios shown, and explain how the same framework and architecture can achieve very different but consistent results when faced with different privacy tasks. \\n\\nParagraph 3 on page 7 was also modified to better motivate the subject-within-subject example in particular. The goal of this task is to essentially make the phone incapable of collecting data on non-consenting users after the privacy filter is deployed.\\n\\n#REVIEWER QUOTE:\\n[Con #3] As far as I can tell, the practical experiments in Section 5 assume that the party who perturbs the dataset knows exactly what algorithm an attacker will use to infer secret information. They also seem to assume that the attacker cannot switch to a different algorithm -- or even retrain an existing machine-learned model -- to try and counter the perturbation heuristics. From the beginning of Section 5: \\\"Initially, we assume that the secret algorithm is not specifically tailored to attack the proposed privatization, but instead is a robust commonly used algorithm trained on raw data to infer the secret.\\\" Unless I missed something, it seems like this assumption is used throughout the experimental evaluation.\\nTo the authors' credit, the submission states this assumption explicitly in Section 5. From a security perspective, however, this seems like a dangerous assumption to rely on, as it leaves \\\"sanitized\\\" data vulnerable to attacks. For example, an attacker with knowledge of the perturbation algorithm can retrain the model they use to extract sensitive information, using perturbed images in place of the original images in their training dataset.\\nMy main practical concern is that the security guarantees provided by the submission seem fragile. It may be much easier to build a perturbation algorithm that is resistant to a single (known) attack than to remove the sensitive information from the dataset entirely. Right now, the empirical results in the submission seem to focus on the former.\\n#\\n\\n*RESPONSE:\\nWe apologize for the confusion; all experiments except the subject-vs-gender experiment were done while adversarially training the secret, exactly for the reasons you stated above. The phrase you highlighted was modified in Paragraph 1 in Experiments and Results (page 6) since the original one was clearly confusing. (The original comment was alluding to the initialization of the networks before starting the sanitization learning algorithm.) Paragraph 1 on page 7 was added to clarify this is the only experiment shown with fixed secret inference. We hope this clarifies the issue.\\n\\nTo conclude, we have addressed all the reviewer's comments, in particular, the 3 mentioned Cons, and we hope he/she will now support accepting this paper. While as with most papers there is still significant work to be done, the paper proposes a new important framework for privacy with new results and theory (the reviewer him/herself points to the importance of this work). 
The reviewer clearly states he/she likes the ideas of the paper, and the cons mentioned (all very constructive, thanks) have all been carefully addressed in the revision.\"}", "{\"title\": \"Response to Reviewer\", \"comment\": \"We are very excited about the very positive and enthusiastic support of all the reviewers, and their outstanding feedback. We thank the reviewers for the very constructive comments. We have taken all the reviewers' points into consideration and have modified the paper accordingly, see blue text in the revised document for the major changes (all related to clarifications, following the ICLR policy). A detailed point-by-point answer is provided below. Overall, the paper was revised for clarity, adding additional intuition to the theory and applicability of the framework. We also clarified the purpose of some of the experiments, and added material in the Supplementary section to show additional examples on controlled situations (to further explain the bounds and their relevance), as well as extensive implementation details.\\n\\n#REVIEWER QUOTE:\\n'[Key Comments] I'm of two minds about this paper. On the whole, I found the problem statement compelling. However, I had serious reservations about the implementation. First: I had trouble understanding the experimental setup based on the limited information provided in Section 5, and the results seem difficult to reproduce from the information in the paper. Second and more seriously: the security guarantees provided in practice seem very weak. At the very least, the authors should check whether their perturbations are robust against an adversary who retrains their model from scratch on perturbed data. This experiment would significantly strengthen the submission, but would still leave open the possibility that a clever adversary could extract more sensitive information than expected from the perturbed data.'\\n#\\n\\n*RESPONSE:\\nWe appreciate your comments. First, we added a detailed implementation section to make explicitly clear how anyone can implement and reproduce the results shown in the paper; this is now shown in Section 7.3.\\n\\nAs for the second point, the secret inference algorithm is indeed retrained from scratch on perturbed data in all but one of the presented experiments; we apologize for not making this clearer in the submission. Paragraph 1 in Experiments and Results (page 6) was modified accordingly, since the original passage was confusing (the original comment was alluding to the initialization of the networks before starting the sanitization learning algorithm). 
Paragraph 1 on page 7 was added to clarify this is the only experiment shown with fixed secret inference.\\n\\nNote that we perform very different experiments and utility/privacy cases with the same proposed framework. We expect this and the multiple clarifications and additions in the revised version (see next) address all these constructive comments.\\n\\n#REVIEWER QUOTE:\\n'[Pro #1] The idea of perturbing an input \\u2026'\\n'[Pro #2] The idea of perturbing a dataset \\u2026'\\n'[Pro #3] The paper combines theoretical results with \\u2026'\\n#\\n\\n*RESPONSE:\\nWe thank the reviewer for pointing out these pros, which we believe (in particular now that all the cons have been clarified/addressed) significantly outpace the cons below.\\n\\n#REVIEWER QUOTE:\\n'[Con #1] Few details are provided about the experimental setup used in Section 5, and it was difficult for me to understand how the theoretical results in Section 4 were actually being applied. There's typically a lot of work that goes into turning a theoretical objective function (e.g., Equation 10 in Section 4.2) into a practical experimental setup. This could be a major contribution of the paper. But right now, I feel like there aren't enough details about the implementation for me to reproduce the experiments.\\n#\\n\\n*RESPONSE:\\nWe agree with the sentiment that the experimental implementation was not sufficiently explained; to that effect, we added Section 7.3 in the supplementary material to show exactly how the loss is converted into an experimental setup. We apologize for not providing details in the original version. Code will also be released with the paper publication.\"}", "{\"title\": \"Response to reviewer\", \"comment\": \"We are very excited about the very positive and enthusiastic support of all the reviewers, and their outstanding feedback. We thank the reviewers for the very constructive comments. We have taken all the reviewers' points into consideration and have modified the paper accordingly, see blue text in the revised document for the major changes (all related to clarifications, following the ICLR policy). A detailed point-by-point answer is provided below. Overall, the paper was revised for clarity, adding additional intuition to the theory and applicability of the framework. We also clarified the purpose of some of the experiments and added material in the Supplementary section to show additional examples on controlled situations (to further explain the bounds and their relevance), as well as extensive implementation details.\\n\\n# REVIEWER QUOTE:\\n\\u2018This paper studies the problem of representing data records with potentially sensitive information about individuals in a privacy-preserving fashion such that they can be later used for training learning models. Informally, it is expected from the transformed output of data record, one should be able to learn about a desired hidden variable, but should not be able to learn anything about a sensitive hidden variable. To that end, the paper proposes a KL divergence based privacy notion, and an algorithmic approach to learn a representation while balancing the utility privacy trade-off.\\n\\nI am excited about the choice of the problem, but I have reservations about the treatment of privacy in the paper. First, KL divergence is a very weak (average case) notion of privacy that can be easily broken. 
Second, the algorithm that is outlined in the paper gives an empirical way to compute the representation while balancing the utility-privacy trade-off (Eq. 6). However, there is no formal privacy guarantee for the algorithm. It is important to remember that unlike the utility, privacy is a worst-case notion and should formally hold on all occasions.\\u2019\\n#\\n\\n*RESPONSE:\\nSince this work deals with data-driven privacy, it is not possible to know beforehand the exact model used to generate the observed data; this is a common occurrence in real scenarios, which, in our opinion, makes it an interesting problem (we nevertheless added results on data with known distributions in the revised version). Under those constraints, it is challenging to provide guarantees similar to the ones made by differential privacy; this work is an initial step in that direction and something we want to pursue in the future.\\n\\nThe reviewer correctly observes that this framework provides privacy in expectation; we are currently investigating how the variance in this type of privacy can be measured and bounded.\\n\\nFinally, we also agree that privacy is a worst-case notion, which is why the secret inference algorithm is trained adversarially, to test whether any possible attacker can still learn anything meaningful from the sanitized representation.\\n\\nThese comments have now been incorporated into the revised manuscript. Clarifications on the adversarial issues were added in paragraph 1 in Experiments and Results (page 6), experiments on known distributions are shown in Supplementary Material (Section 7.2), and comments were added to Concluding remarks reflecting the comments above.\\n\\nTo conclude, we have addressed all the reviewer's comments and we hope he/she will now support accepting this paper, reflecting the statement \\u201cI am excited about the choice of the problem.\\u201d While as with most papers there is still significant work to be done, the paper proposes a new important framework for privacy with new results and theory, as stated by the reviewer as well.\"}", "{\"title\": \"Response to reviewer continued\", \"comment\": \"#REVIEWER QUOTE:\\n'Can different users have different secrets?'\\n#\\n\\n*RESPONSE:\\nYes, different users may define different secrets; a key concept of this work is to have users in control of their privacy desires and needs. This is made scalable by ensuring the sanitized images work on the existing utility-providing networks (a many-to-one relationship between secrecy preferences and utility); this was clarified in the abstract and paragraphs 2 and 6 in the Introduction.\\n\\n#REVIEWER QUOTE:\\n'In the experiment, it might be better to try different models/algorithms for the utility and secrecy inferring algorithm, to demonstrate how the privatizer protects secrecy under different scenarios.'\\n#\\n\\n*RESPONSE:\\nWe agree that the results could be strengthened by testing against various utility-inference algorithms. We did, however, train against secrecy adversaries that were constantly adapting to the sanitization strategy; this provides some guarantee that a yet-unobserved secrecy-inferring algorithm cannot violate privacy, especially since the secrecy adversaries came from a sufficiently rich parametric family (DNN).\\n\\n#REVIEWER QUOTE:\\n'I think there might be some related work on the field of fairness and transparency where we sometimes want the machine learning models to learn without looking at some sensitive features. 
It would be nice to add more related work on that side.'\\n#\\n\\n*RESPONSE:\\nWe are looking into that as a future research direction. Indeed we believe these concepts are related (though note that a system doesn\\u2019t need to be private to be fair). We also believe that a unified theory of privacy, fairness, and transparency will be a superb contribution to the community. We have added a comment on this in the revised version.\\n\\n#REVIEWER QUOTE:\\n'It\\u2019s better to give more intuition and explanation than formulas in Section 3.'\\n#\\n\\n*RESPONSE:\\nParagraphs 1 and 2 in Section 4 were modified to give a better summary of the main ideas behind the loss functions. Note also that we added additional experiments with known data distributions to further stress the loss functions and the bounds.\\n\\n#REVIEWER QUOTE:\\n'There are a few typos (e.g. Page 2, 3rd paragraph, last sentence: \\u201cout\\u201d-> \\u201cour\\u201d; Equation (4), I(S, Q) should be I(S; Q)?; Page 8, 2nd paragraph, 1st line \\u201cFigures in 5\\u201d -> \\u201cFigure 5\\u201d) that need to be addressed. Text in some figures, like Figures 2 and 3, might be enlarged.'\\n#\\n\\n*RESPONSE:\\nTypos have been addressed. Thanks.\\n\\nTo conclude, we have addressed all the reviewer's comments and we hope he/she will further support accepting this paper. While as with most papers there is still significant work to be done, the paper proposes a new important framework for privacy with new results and theory.\"}", "{\"title\": \"Response to Reviewer\", \"comment\": \"We are very excited about the positive and enthusiastic support of all the reviewers, and their outstanding feedback. We thank the reviewers for the constructive comments. We have taken all the reviewers' points into consideration and have modified the paper accordingly, see blue text in the revised document for the major changes (all related to clarifications, following the ICLR policy). A detailed point-by-point answer is provided below. Overall, the paper was revised for clarity, adding additional intuition to the theory and applicability of the framework. We also clarified the purpose of some of the experiments and added material in the Supplementary section to show additional examples on controlled situations (to further explain the bounds and their relevance), as well as extensive implementation details.\\n\\nWe modified the first paragraph of the Experiments and results section to better motivate the scenarios shown, the third paragraph on page 7 was also modified to better motivate the subject-within-subject example (a critical application for mobile devices and smart speakers for example). The first two paragraphs of section 4 on page 5 were modified to better motivate the proposed loss. Section 7.2 in Supplementary material added experiments under controlled conditions to better analyze the performance of the algorithm in a dataset where the underlying distributions are known. Section 7.3 (see supplementary) shows a detailed overview of the training algorithms and architectures used throughout the paper. All these additions resulted from the excellent feedback from the reviewers.\\n\\n#REVIEWER QUOTE:\\n\\u2018More detailed comments:\\nDo the user and privatizer need to know what the machine learning task is when doing the sanitization? Is it ok for the privatizer to define utility in a different way than the machine learning task? For example, as a user, I may want to hide my emotion, but I'm ok with publishing my gender and age. 
In this case, can I use a privatizer which defines secret as gender and utility as (gender, age)? And will the synthetic data generated by such a privatizer be equally useful for a gender classifier (or an age classifier)? It would be good if it is, as we don't need to generate task-specific synthetic data then.\\u2019\\n#\\n\\n*RESPONSE:\\nThe abstract and paragraphs 2 and 6 in the Introduction were modified to clarify this. The privatizer does not necessarily need to know the utility inference mechanism when performing sanitization. It is, however, extremely advantageous to know the algorithm used; this allows the privatizer and service provider to collaborate, which is what enables scalability in the way you describe (multiple privacy tasks served by a single utility algorithm).\\n\\n#REVIEWER QUOTE:\\n'I think it might be interesting to see the effect of the privatizer when utility and secrecy are correlated (with a potentially different level of correlation). '\\n#\\n\\n*RESPONSE:\\nSection 7.3 was added to show this behavior on data with known properties and distributions. Importantly, it shows that the privacy learning mechanism effectively approaches the bounds regardless of correlation levels on this data. We should also add that the examples introduced here are challenging in the sense that privacy is often a significantly easier task than the utility (e.g., gender detection is easier than person identification).\\n\\n#REVIEWER QUOTE:\\n'It\\u2019s not clear to me where the privatizer comes into the picture in the subject-within-subject example. It seems like users here are people whose faces appear in front of the mobile device, so they probably won\\u2019t be able to privatize their face image, yet the device won\\u2019t be able to tell if users are in the consenting group without looking at their faces. I think it\\u2019s better if more clarification on how each of the three scenarios fits into the proposed framework is provided.'\\n#\\n\\n*RESPONSE:\\nThe subject-within-subject example was meant to illustrate that a filter such as this, applied as close as possible to the sensor level, can essentially provide assurances that non-consenting subjects who stand close to the phone would have their privacy preserved. This would be a two-stage process where the image is first sanitized in a trusted environment (a closed box that can be certified to not disclose anything other than the sanitized image), and then the sanitized image is disclosed to the utility provider, in this case the phone-unlocking app. Paragraph 3 on page 7 was modified to reflect this. Here \\u201cprivacy\\u201d is not an attribute of a subject but the subject him/herself. Same for utility. Also, note that this example illustrates how the same theoretical and computational framework can address very different problems.\"}", "{\"title\": \"Nice idea. Need more clarification.\", \"review\": \"This paper proposes a privacy framework where a privatizer, according to the utility and secret specified by users, provides a sanitized version of the user data which lies in the same space as the original data, such that a utility provider can run the exact algorithm it uses for unsanitized data on the sanitized data to provide utility without sacrificing user privacy. The paper shows an information theoretic bound on the privacy loss and derives a loss function for the privatizer to use. 
It then proposes an algorithm for the privatizer and evaluates its performance on three scenarios.\\n\\nThe paper investigates an interesting problem and proposes a nice solution for synthetic data generation. However, I think the proposed framework and how the example scenarios fit into the framework need to be described more clearly. And more experimental evaluations would also help make the result more solid.\", \"more_detailed_comments\": [\"Do the user and privatizer need to know what the machine learning task is when doing the sanitization? Is it ok for the privatizer to define utility in a different way than the machine learning task? For example, as a user, I may want to hide my emotion, but I\\u2019m ok with publishing my gender and age. In this case, can I use a privatizer which defines secret as gender and utility as (gender, age)? And will the synthetic data generated by such a privatizer be equally useful for a gender classifier (or an age classifier)? It would be good if it is, as we don\\u2019t need to generate task-specific synthetic data then.\", \"I think it might be interesting to see the effect of the privatizer when utility and secrecy are correlated (with a potentially different level of correlation).\", \"It\\u2019s not clear to me where the privatizer comes into the picture in the subject-within-subject example. It seems like users here are people whose faces appear in front of the mobile device, so they probably won\\u2019t be able to privatize their face image, yet the device won\\u2019t be able to tell if users are in the consenting group without looking at their faces. I think it\\u2019s better if more clarification on how each of the three scenarios fits into the proposed framework is provided.\", \"Can different users have different secrets?\", \"In the experiment, it might be better to try different models/algorithms for the utility and secrecy inferring algorithm, to demonstrate how the privatizer protects secrecy under different scenarios.\", \"I think there might be some related work in the field of fairness and transparency where we sometimes want the machine learning models to learn without looking at some sensitive features. It would be nice to add more related work on that side.\", \"It\\u2019s better to give more intuition and explanation than formulas in Section 3.\", \"There are a few typos (e.g. Page2, 3rd paragraph, last sentence: \\u201cout\\u201d-> \\u201cour\\u201d; Equation (4), I(S, Q) should be I(S; Q)?; Page 8, 2nd paragraph, 1st line \\u201cFigures in 5\\u201d -> \\u201cFigure 5\\u201d) that need to be addressed. Texts in some figures, like Figure 2 and 3, might be enlarged.\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Privacy preserving data representation\", \"review\": \"This paper studies the problem of representing data records with potentially sensitive information about individuals in a privacy-preserving fashion such that they can later be used for training learning models. Informally, it is expected that, from the transformed output of a data record, one should be able to learn about a desired hidden variable, but should not be able to learn anything about a sensitive hidden variable.
To that end, the paper proposes a KL divergence based privacy notion, and an algorithmic approach to learn a representation while balancing the utility privacy trade-off.\\n\\nI am excited about the choice of the problem, but I have reservations about the treatment of privacy in the paper. First, KL divergence is a very weak (average case) notion privacy that can be easily broken. Second, the algorithm that is outlined in the paper gives an empirical way to compute the representation while balancing the utility-privacy trade-off (Eq. 6). However, there is no formal privacy guarantee for the algorithm. It is important to remember that unlike the utility, privacy is a worst-case notion and should formally hold in all occasions.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Interesting ideas, some major practical limitations\", \"review\": \"[Second update] I'd like to thank the authors for their detailed response. The authors have made changes that I believe improve the overall quality of the submission. I now lean towards accepting the paper, and have increased my rating from a 5 to a 6.\", \"most_notably\": \"(i) they clarified that their secret-detection model was retrained on sanitized data in their experiments, (ii) they added details about their experimental setup and the algorithms used for their experimental evaluation, and (iii) they added experiments to the appendix of the submission that evaluated their framework on synthetic data. I do, however, still have some concerns about how well the privacy guarantees of the proposed algorithm would hold up in practice against a motivated adversary (since formal privacy guarantees appear to be relatively weak right now).\\n\\nAs a minor comment, there may be a typo in Equation 20 of Section 7.2: the case (u, s) = (1, 0) is handled twice, whereas the case (u, s) = (0, 0) is never handled at all.\\n\\n[First update] I find the authors' problem statement appealing, but share concerns with Reviewer 1 about the privacy guarantees offered by the proposed method, and with Reviewer 3 about need to clarify the experimental evaluation. No author response was provided; I've left my score for the paper unchanged. (Note: this update was posted a few days before the end of the rebuttal period; the submission was subsequently updated.)\\n\\n[Summary] The authors consider a problem related to de-identification, where the goal is to perturb a dataset X in a way that makes it possible to infer some useful information U about each example in the dataset while obscuring some sensitive information S. For example, the authors consider the problem of perturbing pictures of people's faces to obfuscate the subjects' emotions while making it possible to infer their genders. The concrete approach explored in the paper's experimental evaluation ensures that an existing model trained model on the original dataset will continue to work when applied to the perturbed data.\\n\\nOn the theory side, the authors derive information-theoretic lower bounds on the extent to which one can disclose useful information about a dataset without leaking sensitive information, and propose concrete minimization problems that can perturb the data to trade off between the two objectives. On the practical side, the authors evaluate the minimization setup on three different problems.\\n\\n[Key Comments] I'm of two minds about this paper. 
On the whole, I found the problem statement compelling. However, I had serious reservations about the implementation. First: I had trouble understanding the experimental setup based on the limited information provided in Section 5, and the results seem difficult to reproduce from the information in the paper. Second and more seriously: the security guarantees provided in practice seem very weak. At the very least, the authors should check whether their perturbations are robust against an adversary who retrains their model from scratch on perturbed data. This experiment would significantly strengthen the submission, but would still leave open the possibility that a clever adversary could extract more sensitive information than expected from the perturbed data.\\n\\n[Details]\\n[Pro #1] The idea of perturbing an input in order to optimize bounds on how much \\\"useful\\\" versus \\\"secret\\\" information is disclosed by the output seems intuitively appealing. In that context, the theory from Sections 2 and 3 seems well-motivated. Section 3.2 (\\\"defining a trainable loss metric\\\") is especially well-motivated. It provides a concrete objective function which, when minimized, can obfuscate data in a way that trades off between utility and secrecy.\\n\\n[Pro #2] The idea of perturbing a dataset in a way that allows existing useful algorithms to continue working without modifications seems like an interesting and novel contribution. I found the following excerpt from the introduction especially compelling: \\\"it is important to design collaborative systems where each user shares a sanitized version of their data with the service provider in such a way that user-defined non-sensitive tasks can be performed but user-defined sensitive ones cannot, without the service provider requiring to change any data processing pipeline otherwise.\\\"\\n\\n[Pro #3] The paper combines theoretical results with empirical case studies on three different problems. Based on visual inspection, the outputs of the perturbation heuristics shown in Section 5 / Figure 3 and Figure 4 seem reasonable.\\n\\n[Con #1] Few details are provided about the experimental setup used in Section 5, and it was difficult for me to understand how the theoretical results in Section 4 were actually being applied. There's typically a lot of work that goes into turning a theoretical objective function (e.g., Equation 10 in Section 4.2) into a practical experimental setup. This could be a major contribution of the paper. But right now, I feel like there aren't enough details about the implementation for me to reproduce the experiments.\\n\\n[Con #2] I had trouble understanding the motivation for the Subject within Subject case study in Section 5.1. The authors describe the problem as follows: \\\"Imagine a subset of users wish to unlock their phone using facial identification, while others opt instead to verify their right to access the phone using other methods; in this setting, we would wish the face identification service to work only on the consenting subset of users, but to respect the privacy of the remaining users.\\\" The proposed solution (Figure 3) applies minor perturbations to the pictures of consenting subjects while editing the photos of the non-consenting users to leave only their silhouettes. A simple baseline would be to remove the photos of the non-consenting users from the dataset entirely. The case study would greatly benefit from a discussion of why the baseline is insufficient. 
It's also perfectly reasonable to say that the section is meant as a way to check whether the objective function from Section 4 can lead to reasonable behavior in practice, but if so, the intent should be clarified.\\n\\n[Con #3] As far as I can tell, the practical experiments in Section 5 assume that the party who perturbs the dataset knows exactly what algorithm an attacker will use to infer secret information. They also seem to assume that the attacker cannot switch to a different algorithm -- or even retrain an existing machine-learned model -- to try and counter the perturbation heuristics. From the beginning of Section 5: \\\"Initially, we assume that the secret algorithm is not specifically tailored to attack the proposed privatization, but instead is a robust commonly used algorithm trained on raw data to infer the secret.\\\" Unless I missed something, it seems like this assumption is used throughout the experimental evaluation.\\n\\nTo the authors' credit, the submission states this assumption explicitly in Section 5. From a security perspective, however, this seems like a dangerous assumption to rely on, as it leaves \\\"sanitized\\\" data vulnerable to attacks. For example, an attacker with knowledge of the perturbation algorithm can retrain the model they use to extract sensitive information, using perturbed images in place of the original images in their training dataset.\\n\\nMy main practical concern is that the security guarantees provided by the submission seem fragile. It may be much easier to build a perturbation algorithm that is resistant to a single (known) attack than to remove the sensitive information from the dataset entirely. Right now, the empirical results in the submission seem to focus on the former.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}" ] }
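The thread above keeps circling one object: a trainable privatizer whose loss rewards utility inference on sanitized data while driving a secret classifier toward chance. As an illustration only, here is a minimal PyTorch-style sketch of such a combined objective; every name is hypothetical, and the uniform-posterior KL term is one plausible stand-in for the paper's KL-based loss (its Eq. 6/10 are not reproduced here).

```python
# Hypothetical sketch of a privatizer objective in the spirit of the reviews:
# keep a fixed utility classifier working on sanitized data, push a secret
# classifier toward chance level. Names and the exact secrecy term are
# assumptions, not the paper's implementation.
import torch
import torch.nn.functional as F

def privatizer_loss(privatizer, utility_net, secret_net, x, u, n_secret, lam=1.0):
    x_tilde = privatizer(x)                       # sanitized data, same space as x
    utility_loss = F.cross_entropy(utility_net(x_tilde), u)
    # KL(p_secret || uniform) = sum_k p_k * (log p_k + log K); zero iff the
    # secret posterior is uniform, i.e. the secret is unrecoverable.
    log_p = F.log_softmax(secret_net(x_tilde), dim=1)
    log_k = torch.log(torch.tensor(float(n_secret)))
    secrecy_loss = (log_p.exp() * (log_p + log_k)).sum(dim=1).mean()
    return utility_loss + lam * secrecy_loss
```

Echoing the [Con #3] discussion above, a serious evaluation would also retrain the secret classifier on sanitized outputs in an alternating loop, rather than fixing a model trained on raw data.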
B1lnjo05Km
Graph Spectral Regularization For Neural Network Interpretability
[ "Alexander Tong", "David van Dijk", "Jay Stanley", "Guy Wolf", "Smita Krishnaswamy" ]
Deep neural networks can learn meaningful representations of data. However, these representations are hard to interpret. For example, visualizing a latent layer is generally only possible for at most three dimensions. Neural networks are able to learn and benefit from much higher dimensional representations but these are not visually interpretable because nodes have arbitrary ordering within a layer. Here, we utilize the ability of the human observer to identify patterns in structured representations to visualize higher dimensions. To do so, we propose a class of regularizations we call \textit{Graph Spectral Regularizations} that impose graph-structure on latent layers. This is achieved by treating activations as signals on a predefined graph and constraining those activations using graph filters, such as low pass and wavelet-like filters. This framework allows for any kind of graph as well as filter to achieve a wide range of structured regularizations depending on the inference needs of the data. First, we show on a synthetic example that the graph-structured layer can reveal topological features of the data. Next, we show that a smoothing regularization can impose semantically consistent ordering of nodes when applied to capsule nets. Further, we show that the graph-structured layer, using wavelet-like spatially localized filters, can form localized receptive fields for improved image and biomedical data interpretation. In other words, the mapping between latent-layer neurons and the output space becomes clear due to the localization of the activations. Finally, we show that when structured as a grid, the representations create coherent images that allow for image-processing techniques such as convolutions.
[ "autoencoder", "interpretable", "graph signal processing", "graph spectrum", "graph filter", "capsule" ]
https://openreview.net/pdf?id=B1lnjo05Km
https://openreview.net/forum?id=B1lnjo05Km
ICLR.cc/2019/Conference
2019
{ "note_id": [ "r1lb5fiByE", "rJlPu3wi2m", "SJgJVDb5nm", "HkgL68343m" ], "note_type": [ "meta_review", "official_review", "official_review", "official_review" ], "note_created": [ 1544037001296, 1541270638836, 1541179175103, 1540830909941 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper659/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper659/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper659/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper659/AnonReviewer1" ] ], "structured_content_str": [ "{\"metareview\": \"The work presents a method of imposing harmonic structural regularizations to layers of a neural network. While the idea is interesting, the reviewers point out multiple issues.\", \"pros\": [\"Interesting method\", \"Hidden layer coherence tends to improve\"], \"cons\": [\"Deficient comparisons to baselines or context with other works.\", \"Insufficient assessment of impact to model performance.\", \"Lack of strategy to select regularizers\", \"Lack of evaluation on more realistic datasets\"], \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Structural regularizations imposed on layers.\"}", "{\"title\": \"Interesting technique, Lack of Related work\", \"review\": \"Authors present a novel regularizer to impose graph structure upon hidden layers of a neural Network. The intuition is that Neural Networks has typically symmetric computation among different channels in one layer. Due to the lack of order, visually inspecting the hidden representation is not feasible. By adding edges one can impose a structure upon nodes in one layer and add for example a Laplacian regularizer rather than simple L2 norm regularizer to force the activations to follow the imposed structure.\", \"pros\": \"Interesting idea for bringing some benefits of graphical models into Neural Networks using a regularizer.\\n\\nExperiments verify that one can successfully improve the intrepretability of hidden representations. Also, they provide examples of use cases for such technique like aligning the capsule dimmensions.\", \"cons\": \"The major flaw is the lack of comparison with ``any'' of the related work on interpretability or the prior work on imposing structure upon hidden representations. Also, the manuscripts lacks a clear discussion of where does this work stands in the literature like structured VAEs, graphical models, sum product nets + factor graphs. \\n\\nAlso, in none of the experiments authors mention how the added regularizer affects the model performance. Whether imposing the grid structure on CNN (last experiment) drops the CNN accuracy or has no effect? Same for the CapsNet.\\n\\nFurthermore, the feasibility of calculating the Laplacian for larger scale hidden layers or approximating it is not addressed.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Latent structure through spectral regularization.\", \"review\": \"The paper introduces a spectral regularization with the aim of obtaining representations\\nthat are easier to interpret.\\n\\nSome sentences are often confusing and, in general, clarity needs to be improved.\\n\\nThe motivation of the work is not very strong in my opinion, in particular by adding such\\na prior the space of possible solutions greatly shrinks and I am afraid\\nthat interesting solutions will be lost. 
I think one should focus on properties rather than visual inspection. Also, if we can clearly see the pattern, isn't it possible that the pattern is linear and easily discoverable by simpler models as well?\\n\\nMore importantly, it seems that all experiments are performed on tasks where the underlying structure is known; however, this is almost never the case in practice. Assuming one uses the proposed spectral regularization, how would one interpret it in such cases?\\n\\nIn Section 2, please clarify the paragraph on the bounded Lp norm.\\n\\nI am sorry, but why isn't there a relation, for convolutional nets, between neurons in different channels? Each element in the feature map represents the input surrounding that location in a k-dimensional space.\\n\\nThe authors state that the usual bottleneck for autoencoders is composed of 2/3 neurons; this is simply not true. There has been extensive work on overcomplete representations that shows it is better to have many more dimensions but only a few degrees of freedom.\\n\\nThe spectral bottleneck should cite VQVAE, as the approach is very similar, and the authors should compare to it.\\n\\nFor the topological inference experiment it is assumed that one knows the structure, but how would one address the more general problem? More practically, the regularization enforces smoothing (if few eigenfunctions are used, which is never explained in the paper) between connected nodes; did the authors try a simple L2 penalty instead, e.g., minimizing the difference between activations in the group?\\n\\nRegarding the capsule network example, when you write that without regularization each digit responds differently to perturbation of the same dimension, isn't that possibly true only up to an unknown permutation of the neurons?\\n\\nTo summarize, while the idea sounds interesting, I fail to find the promised easy interpretability of results, and the overall motivation sounds a bit weak. More importantly, the selection of W, crucial for defining structure, is not discussed at all in the paper. Experiments are performed on toy examples only, whereas, given that we can possibly interpret the results, I would have liked something more involved to better show that this kind of interpretability is needed.\", \"missing_cites\": \"[1] van den Oord et al, Neural Discrete Representation Learning.\\n[2] Koutnik et al, Evolving neural networks in compressed weight space.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"The usefulness of graph spectral regularizer is shown, but the key points in practice are not considered.\", \"review\": \"The authors highlight the contribution of the graph spectral regularizer to the interpretability of neural networks. Specifically, the authors consider the Laplacian smoothing regularizer to enhance the local consistency/smoothness between a neuron and its neighbors. Furthermore, by extending the graph Fourier transformation to an overcomplete dictionary representation, the authors further propose a spectral bottleneck regularizer. Experimental results show that when suitable structural information and corresponding regularizers are imposed, the interpretability of the intermediate layers is improved.\\n\\nMy main concern is that the power of graph-based regularizers has been well known in the ML community for a long time.
It is not surprising that adding such regularizers to the training process of neural networks can help to obtain more structured activations. The key points are:\\n\\n1) How to define the Laplacian graph for the neurons? For the simple case shown in Figures 1 and 2, the topology of the neurons has been predefined and their functionality is implicitly predefined. For more challenging cases, how can the Laplacian graph be built reasonably? \\n\\n2) How to add the regularizers with good scalability? The complexity of the proposed regularizers is O(N^2), where N is the number of neurons. When a layer contains thousands of neurons or more, how can the regularizers be added efficiently?\\n\\n3) Which regularizer should be selected? The authors propose a class of graph spectral regularizers, and their performance differs across tasks. Is there any strategy to help us select suitable regularizers for specific tasks?\\n\\nUnfortunately, the authors provide little analysis of these key points.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}" ] }
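All three reviews presuppose the basic Laplacian smoothness penalty, so a compact sketch may help fix ideas. The snippet below is the generic regularizer the reviews describe, with illustrative names; it is not the paper's implementation, and the wavelet-filter and spectral-bottleneck variants are omitted.

```python
# Minimal sketch of a graph (Laplacian) smoothness penalty: activations of
# one hidden layer are treated as a signal on a predefined graph W, and
# x^T L x penalizes differences between connected neurons. Illustrative only.
import torch

def laplacian_penalty(activations, W):
    # activations: (batch, n_neurons); W: (n_neurons, n_neurons) adjacency.
    L = torch.diag(W.sum(dim=1)) - W      # combinatorial graph Laplacian
    # x^T L x per example, equal to 0.5 * sum_ij W_ij (x_i - x_j)^2 for
    # symmetric W; averaged over the batch.
    return torch.einsum('bi,ij,bj->b', activations, L, activations).mean()

# Used as: total_loss = task_loss + alpha * laplacian_penalty(hidden, W).
# Regarding the O(N^2) concern above: for a sparse W the same quantity can
# be accumulated edge by edge in O(|E|) instead of forming L densely.
```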
BJxhijAcY7
signSGD with Majority Vote is Communication Efficient and Fault Tolerant
[ "Jeremy Bernstein", "Jiawei Zhao", "Kamyar Azizzadenesheli", "Anima Anandkumar" ]
Training neural networks on large datasets can be accelerated by distributing the workload over a network of machines. As datasets grow ever larger, networks of hundreds or thousands of machines become economically viable. The time cost of communicating gradients limits the effectiveness of using such large machine counts, as may the increased chance of network faults. We explore a particularly simple algorithm for robust, communication-efficient learning---signSGD. Workers transmit only the sign of their gradient vector to a server, and the overall update is decided by a majority vote. This algorithm uses 32x less communication per iteration than full-precision, distributed SGD. Under natural conditions verified by experiment, we prove that signSGD converges in the large and mini-batch settings, establishing convergence for a parameter regime of Adam as a byproduct. Aggregating sign gradients by majority vote means that no individual worker has too much power. We prove that unlike SGD, majority vote is robust when up to 50% of workers behave adversarially. The class of adversaries we consider includes as special cases those that invert or randomise their gradient estimate. On the practical side, we built our distributed training system in Pytorch. Benchmarking against the state of the art collective communications library (NCCL), our framework---with the parameter server housed entirely on one machine---led to a 25% reduction in time for training resnet50 on Imagenet when using 15 AWS p3.2xlarge machines.
[ "large-scale learning", "distributed systems", "communication efficiency", "convergence rate analysis", "robust optimisation" ]
https://openreview.net/pdf?id=BJxhijAcY7
https://openreview.net/forum?id=BJxhijAcY7
ICLR.cc/2019/Conference
2019
{ "note_id": [ "SJe3qk6YlN", "Bylc8ckyxN", "SkgZHKcaJE", "SJg-hh75C7", "SJg5q27qAX", "r1g4OhQqRQ", "Byx4HczqC7", "r1lxIcJxpm", "S1eaXqJgpX", "r1lMfYylTQ", "SygCGdygTQ", "B1eUDnU037", "HkgS6925hX", "r1eEYTEq2m", "BJezmXgjt7" ], "note_type": [ "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review", "official_comment" ], "note_created": [ 1545355156307, 1544645201990, 1544558904944, 1543285928623, 1543285905528, 1543285867928, 1543281212077, 1541564999764, 1541564964704, 1541564682439, 1541564438504, 1541463133785, 1541225149155, 1541193084051, 1538093850502 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper658/Authors" ], [ "ICLR.cc/2019/Conference/Paper658/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper658/Authors" ], [ "ICLR.cc/2019/Conference/Paper658/Authors" ], [ "ICLR.cc/2019/Conference/Paper658/Authors" ], [ "ICLR.cc/2019/Conference/Paper658/Authors" ], [ "ICLR.cc/2019/Conference/Paper658/Authors" ], [ "ICLR.cc/2019/Conference/Paper658/Authors" ], [ "ICLR.cc/2019/Conference/Paper658/Authors" ], [ "ICLR.cc/2019/Conference/Paper658/Authors" ], [ "ICLR.cc/2019/Conference/Paper658/Authors" ], [ "ICLR.cc/2019/Conference/Paper658/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper658/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper658/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper658/Authors" ] ], "structured_content_str": [ "{\"title\": \"We tested higher precision QSGD\", \"comment\": \"Dear AC and AnonReviewer1,\\n\\nWe have finished running 2, 4 and 8 bit QSGD. Per iteration, on our CIFAR-10 benchmark, we see:\\n\\n- max QSGD shows a tiny (insignificant) improvement at higher precision.\\n- L2 QSGD shows larger improvement but is still roughly 2x slower than Majority Vote even at 8-bit precision.\\n\\nTherefore the claims in the paper still stand. We will add these results to the paper.\"}", "{\"metareview\": \"The Reviewers noticed that the paper undergone many editions and raise concern about the content. They encourage improving experimental section further and strengthening the message of the paper.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Good paper but requires revisions\"}", "{\"title\": \"Message to AC and AnonReviewer1\", \"comment\": \"Dear AC and AnonReviewer1,\\n\\nThe reviewers\\u2019 scores show a consensus to accept. Still, AnonReviewer1 raises important points that we want to address here.\\n\\n1. QSGD precision. We agree, thanks for pointing it out. We are running experiments on 2 and 4bit QSGD and will add these to the paper.\\n\\n2. Bulyan. We disagree. We believe this comparison is unnecessary for the following reasons:\\n\\u2014\\u2014\\u2014(A) our comparison with Krum is \\u201cgood\\u201d\\u2014Krum successfully detects and eliminates the adversaries in our experiments. The only drawback of Krum is that is has a requirement for total num workers \\u201cn\\u201d to exceed num adversaries \\u201cf\\u201d by n > 2f + 2, therefore for n=7, Krum already breaks down at num adversaries f=3, whereas majority vote still works at f=3.\\n\\u2014\\u2014\\u2014(B) Bulyan, on the other hand, only tolerates up to 25% adversaries, requiring n > 4f + 3. For our case of 7 workers this means it only tolerates 1 adversary (f=1). 
Clearly Bulyan will perform worse than Krum on these experiments.\\n\\u2014\\u2014\\u2014(TL;DR) Krum already \\u201caces\\u201d our experiments, except for the fact that it has max security level f=2; therefore we didn\\u2019t see the need to compare to Bulyan, which only serves to lower the max security level to f=1.\\n\\u2014\\u2014\\u2014(extra) There is another drawback of Krum and Bulyan, in that they throw away workers even when there are no adversaries\\u2014they have a \\u201cparanoid\\u201d regime. Majority vote does not do this. But this effect was not visible in our experiments (probably the batch size was too large to see it).\\n\\nWe therefore see no reason why the paper should not be accepted for this round of submission. In particular we think presenting the small batch theory (Theorem 1) would be an important and timely contribution to the understanding of adaptive gradient methods like Adam, which closely relate to signSGD. The paper may also spur further research into the combination of gradient compression and fault tolerance, which seems like a natural mix for large scale distributed learning.\\n\\nFinally, we want to thank all the reviewers for their thorough, critical and constructive reviews.\"}", "{\"title\": \"Revised the paper\", \"comment\": \"Dear AnonReviewer3,\\n\\nWe have updated your individualised response, and also summarised our revisions to the paper in a post above.\\n\\nBest wishes,\\nAnonAuthors\"}", "{\"title\": \"Revised the paper\", \"comment\": \"Dear AnonReviewer2,\\n\\nWe have updated your individualised response, and also summarised our revisions to the paper in a post above.\\n\\nBest wishes,\\nAnonAuthors\"}", "{\"title\": \"Revised the paper\", \"comment\": \"Dear AnonReviewer1,\\n\\nWe have updated your individualised response, and also summarised our revisions to the paper in a post above.\\n\\nBest wishes,\\nAnonAuthors\"}", "{\"title\": \"Summary of revisions\", \"comment\": \"Dear AnonReviewers and AC,\\n\\nWe have updated the paper. The new version includes the following additions:\\n\\n1. Added comparison to the Multi-Krum [1] Byzantine fault tolerant method (p9)\\n2. Added comparison to the QSGD [2] gradient compression method (p9)\\n3. Added natural language task benchmark (QRNN [3] model on the Wikitext-103 dataset) (p8)\\n4. Extended the robustness theorem to an entire class of adversaries that we term \\\"blind multiplicative adversaries\\\" (p7) \\n\\nWe are grateful to Rev1 and Rev3 for encouraging us to run the additional experiments, and to Rev2 for encouraging us to extend the robustness theorem.\", \"we_will_now_go_into_more_detail\": \"1. Multi-Krum experiment. Multi-Krum is a Byzantine fault tolerant method that defines a security level f, and always removes f workers from the gradient aggregation (even when there are no adversaries present). Majority Vote, in contrast, always keeps all workers. We found that when the number of adversaries exceeds f, Multi-Krum deteriorates dramatically, whereas Majority Vote deteriorates more gracefully.\\n\\n2. QSGD experiment. For a resnet-18 model on Cifar-10, we found that majority vote converges much faster than the \\\"theory version\\\" [2, p5] of the QSGD algorithm, but it converges at a similar rate to the \\\"experiment version\\\" [2, p7] where the QSGD authors normalise by the max instead of the L2 norm.
We found the max-norm version of QSGD had about 5x higher compression than the 32x compression of signSGD for this problem, but this gain represents a diminishing return since the cost of backpropagation has already started to dominate at that compression level. \\n\\nTo be explicit, for this network with SGD and NCCL, one epoch costs \\n=========> 6 sec computing + 12 sec communicating = 18 sec\\nFor signSGD, a very efficient implementation should reduce communication by 32x; therefore we expect one epoch to cost\\n=========> 6 sec computing + 12/32 sec communicating = 6.375 sec\\nFor QSGD, a very efficient implementation should reduce communication by (32x5)x; therefore one epoch should cost\\n=========> 6 sec computing + 12/(32x5) sec communicating = 6.075 sec\\nAnd we see the marginal gain of QSGD is small, whilst the algorithm is much more complicated.\\n\\n3. Natural language experiment. We found that using signSGD with majority vote to train QRNN led to a 3x speedup per epoch over Adam with NCCL. That said, there was a deterioration in the converged solution. This meant that overall the performance after 2 hours of training was very similar.\\n\\n4. Extended the robustness theorem. We show that Majority vote is robust to an entire class of adversaries that we call \\\"blind multiplicative adversaries\\\". This class includes adversaries that invert or randomise their gradient estimate as special cases. We are particularly interested in randomised attacks as a model of network faults. This class of adversaries is more rigorous than the class of \\\"non-cooperative\\\" adversaries that we discussed previously.\\n\\n[1] https://papers.nips.cc/paper/6617-machine-learning-with-adversaries-byzantine-tolerant-gradient-descent\\n[2] https://papers.nips.cc/paper/6768-qsgd-communication-efficient-sgd-via-gradient-quantization-and-encoding\\n[3] https://openreview.net/forum?id=H1zJ-v5xl\"}", "{\"title\": \"Response to AnonReviewer2\", \"comment\": \"Dear AnonReviewer2,\\n\\nThank you for your clear and thorough review. We appreciate your comment that the paper is a \\u201cnice addition to our understanding of signSGD\\u201d. \\n\\nWe will first contest the criticism about the significance of the work. We will then respond to the other comments in detail.\\n\\n>>>>>>>>>>>>>>>>>>>>>>>>>>>>\\n>>>> On matters of significance >>>\\n>>>>>>>>>>>>>>>>>>>>>>>>>>>>\\n\\n> \\u201cit heavily restricts what an adversary worker machine can do\\u201d\\nWe have now formulated an entire class of adversaries that our algorithm is robust to. Please see our revisions above.\\n\\n> \\u201cTheorem 1 is a minor refinement\\u201d.\\nWhilst \\\"algebraically\\\" the result is a minor refinement, conceptually it is a larger shift. It brings the signSGD work in line with modern machine learning practice. And we expect that it has ramifications for other active areas of ML research. For example:\\n\\nReddi et al. (2018) showed how bimodal noise distributions can lead to divergence of Adam. This leaves a major outstanding question in the community: if Adam generally diverges, why does it work so well in practice? Theorem 1 shows how signSGD---a special limit of Adam---may be guaranteed to converge in natural settings such as Gaussian noise distributions.
It suggests that we may be able to prove convergence of Adam for Gaussian noise distributions.\\n\\n>>>>>>>>>>>>>>>>>>>>>>>>>>>>\\n>>>>>>> Minor comments >>>>>>>\\n>>>>>>>>>>>>>>>>>>>>>>>>>>>>\\n\\n> \\u201csignSGD converges to a critical point of the objective\\u201d\\nTo clarify, we mean convergence in the sense that the gradient norm goes to zero as N increases, which is exactly what Theorem 1 tells us. Points with zero gradient norm are critical points. The mixed norm on the left hand side is unusual, but by inspection it is clear that the mixed norm shrinking to zero implies that the L2-norm shrinks to zero. We will clarify this in the paper.\\n\\n> \\u201care you assuming g_i > 0 here\\u201d\\nThanks for mentioning this. We did not signpost it, but we assumed, without loss of generality, that g_i > 0. (The case that g_i < 0 follows by totally analogous reasoning.)\\n\\n> The claim \\u201csignSGD cannot converge for these noise distributions\\u201d is only \\u201cbased on intuitive arguments\\u201d. \\nThank you for pointing this out; we decided to simplify the discussion by just giving a simple example.\\n\\n> \\u201cThe theoretical results are about signSGD while the experiments are about sigNUM\\u201d\\nSee [1, Appendix, Figure A.4] for experiments across a range of momentum values. [1] also discusses the theoretical relation between Signum and signSGD. In general we suggest practitioners use Signum instead of signSGD in practice since it is only fair to give our algorithm as many hyperparameters as momentum SGD.\\n\\n[1] signSGD: compressed optimisation for non-convex problems, https://arxiv.org/abs/1802.04434.\"}", "{\"title\": \"Response to AnonReviewer3\", \"comment\": \"Dear AnonReviewer3,\\n\\nThank you for your positive review. We really appreciate the remarks that our \\u201cexperiments are extensive\\u201d and our paper is \\u201csolid and interesting\\u201d.\\n\\n>>>>>>>>>>>>>>>>>>>>>>>>>>>>\\n>>>>>>> More experiments >>>>>>\\n>>>>>>>>>>>>>>>>>>>>>>>>>>>>\\n\\n> \\u201cMore experiments on different tasks and DNN architectures could be performed\\u201d\\n\\nThanks for the suggestion; we have added experiments training the QRNN language model on the Wikitext-103 dataset. Please see the revisions above.\\n\\n>>>>>>>>>>>>>>>>>>>>>>>>>>>>\\n>>>>>>>> Further thoughts >>>>>>>\\n>>>>>>>>>>>>>>>>>>>>>>>>>>>>\\n\\n> \\u201csome workers might be lost during one iteration\\u201d\\nIntuitively, dropping workers will slow down convergence but not prevent it. You can see this immediately since a dropped worker is strictly better for convergence than an adversarial worker. This is one of the reasons we are excited about our Byzantine fault tolerance results.\\n\\n> what \\u201cregularization technique would be suitable for signed update kind of method\\u201d?\\nWe are particularly excited about this question for future work; thanks for suggesting it.\"}", "{\"title\": \"Response to AnonReviewer1\", \"comment\": \"Dear AnonReviewer1,\\n\\nThank you for your clear and precise review. We appreciate the comment that our work \\u201ccould be a great paper\\u201d if we add some comparisons during the rebuttal. We want to contest your take on the weakness of our adversarial model, yet wholeheartedly agree with the need for adequate experimental comparisons to other techniques.\\n\\n>>>>>>>>>>>>>>>>>>>>>>>>>>>>\\n>>>>>>> Comparison expts >>>>>>\\n>>>>>>>>>>>>>>>>>>>>>>>>>>>>\\n\\nWe have added comparisons to QSGD (compression) and Multi-Krum (Byzantine fault tolerance).
Please see the revisions in the post above.\\n\\n>>>>>>>>>>>>>>>>>>>>>>>>>>>>\\n>>>>>>> Adversarial model >>>>>>\\n>>>>>>>>>>>>>>>>>>>>>>>>>>>>\\n\\n> the adversary is \\u201cvery weak\\u201d since it \\u201conly sends back the opposite sign of the local stochastic gradient\\u201d\\nWe have formulated an entire class of adversaries that our algorithm is robust to. Please see our revisions above.\\n\\nThank you for pointing us to the paper [b1] saying that \\u201cconvergence is not enough\\u201d since, for example, a powerful adversary can steer convergence to bad local minimisers. This is a great point. For this reason we do not recommend using our algorithm to protect against \\u201comniscient\\u201d adversaries. But for \\u201cmere mortal\\u201d adversaries, our results are interesting. An example of a \\u201cmere mortal\\u201d adversary could be a broken machine that sends random bits or stale gradients.\"}", "{\"title\": \"Summary of reviews\", \"comment\": \"Dear AnonReviewers,\\n\\nThank you for your thoughtful and thorough reviews. We will summarise the content of the reviews here.\", \"first_some_high_notes\": \"Rev3 says our \\u201cexperiments are extensive\\u201d and our paper is \\u201csolid and interesting\\u201d. Rev2 says the paper is a \\u201cnice addition to our understanding of signSGD\\u201d. Rev1 says our work \\u201ccould be a great paper\\u201d if we add sufficient comparisons during the rebuttal.\\n\\nThe reviewers\\u2019 main concerns:\\n\\n1. Rev1 and Rev2 question the strength of the adversarial model;\\n2. Rev1 asks for comparison experiments for communication and/or Byzantine property;\\n3. Rev3 would like to see additional datasets and network architectures.\"}", "{\"title\": \"A distributed implementation of signSGD with majority vote as aggregation. An interesting idea, that however is lacking comparisons with state of the art.\", \"review\": \"The authors present a distributed implementation of signSGD with majority vote as aggregation. The result is a communication efficient and byzantine robust distributed training method. This is an interesting and relevant problem. There are two parts in this paper: first the authors prove a convergence guarantee for signSGD, and then they prove that under a weak adversary attack signSGD will be robust to a constant fraction of adversarial nodes. The authors conclude with some limited experiments.\\n\\nOverall, the idea of combining low-communication methods with byzantine resilience is quite interesting. That is, by limiting the domain of the gradients one expects that the power of an adversary would be limited too. The application of the majority vote on the gradients is an intuitive technique that can resolve weak adversarial attacks. Overall, I found the premise quite interesting.\\n\\nThere are several issues that if fixed this could be a great paper, however I am not sure if there is enough time between rebuttals to achieve this for this round of submissions. I will summarize these key issues below.\\n\\n\\n1) Although the authors claim that this is a communication efficient technique, signSGD (on its communication merit) is not compared with any state of the art communication efficient training algorithm, for example:\\n- 1Bit SGD [1]\\n- QSD [2]\\n- TernGrad [3]\\n- Deep Gradient compression [4]\\nI think it is important to include at least one of those algorithms in a comparison. 
Due to the lack of comparisons with state of the art it is hard to argue on the relative performance of signSGD.\\n\\n2) Although the authors claim byzantine resilience, this is against a very weak type of adversary, eg one that only sends back the opposite sign of the local stochastic gradient. An omniscient adversary can craft attacks that are significantly more sophisticated, for which a simple majority vote would not work. Please see the results in [b1].\\n\\n3) The authors although reference some limited literature on byzantine ML, they do not compare with other byzantine tolerant ML methods. For example check [eg, b1-b4] below. Again, due to the lack of comparisons with state of the art it is hard to argue on the relative performance of signSGD.\\n\\nOverall, although the presented ideas are promising, a substantial revision is needed before this paper is accepted for publication. I think it is extremely important that an extensive comparison is carried out with respect to both communication efficient algorithms, and/or byzantine tolerant algorithms, since signSGD aims to be competitive with both of these lines of work. This is a paper that has potential, but is currently limited by its lack of appropriate comparisons.\\n\\n\\n\\n[1] https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/IS140694.pdf\\n[2] https://papers.nips.cc/paper/6768-qsgd-communication-efficient-sgd-via-gradient-quantization-and-encoding.pdf\\n[3] https://papers.nips.cc/paper/6749-terngrad-ternary-gradients-to-reduce-communication-in-distributed-deep-learning.pdf\\n[4] https://arxiv.org/pdf/1712.01887.pdf\\n\\n[b1] https://arxiv.org/pdf/1802.07927.pdf\\n[b2] https://arxiv.org/pdf/1803.01498.pdf\\n[b3] https://dl.acm.org/citation.cfm?id=2933105\\n[b4] https://arxiv.org/pdf/1804.10140.pdf\\n[b5] https://arxiv.org/pdf/1802.10116.pdf\\n\\n########################\\n\\nI would like to commend the authors for making a significant effort in revising their manuscript. Specifically, I think adding the experiments for QSGD and Krum are an important addition. However, I still have a few major that in my opinion are significant:\\n\\n- The experiments for QSGD are only carried for the 1-bit version of the algorithm. It has been well observed that this is by far the least well performing variant of QSGD. That is, 4 or 8 bit QSGD seems to be significantly more accurate for a given time budget. I think the goal of the experiments should not be to compare against other 1-bit algorithms (though to be precise, 1-bit QSGD is a ternary algorithm) , but against the fastest low-communication algorithm. As such, although the authors made an effort in adding more experiments, I am still not convinced that signSGD will be faster than 4 or 8 bit QSGD. I want to also acknowledge in this comment the fact that these experiments do take time, and are not easy to run, so I commend them again for this effort.\\n\\n- My second comment relates to comparisons with state of the art algorithms in byzantine ML. The authors indeed did compare against Krum, however, as noted in my original review there are many works following Blanchard et al.\", \"for_example_as_i_noted_https\": \"//arxiv.org/pdf/1802.07927.pdf (the Bulyan algorithm) shows that there exist significantly stronger defense mechanisms for byzantine attacks. I think it would have been a much stronger comparison to compare with Bulyan.\\n\\nOverall, I think the paper has good content, and the authors significantly revised their paper according to the reviews. 
However, several more experiments are needed for convincing a potential reader of the main claims of the paper, i.e., that signSGD is a state of the art communication efficient and byzantine tolerant algorithm. \\n\\nI will increase my score from 5 to 6, and I will not oppose the paper being rejected or accepted. My personal opinion is that a resubmission for a future venue would yield a much stronger and more convincing paper assuming more extensive and thorough comparisons are added.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"good work but can be improved\", \"review\": \"This paper continues the study of the signSGD algorithm due to (Balles & Hennig, Bernstein et al), where only the sign of a stochastic gradient is used for updating. There are two main results: (1) a slightly refined analysis of two results in Bernstein et al. The authors proved that signSGD continues to converge at the 1/sqrt(T) rate even with minibatch size 1 (instead of T as in Bernstein et al), if the gradient noise is symmetric and unimodal; (2) a similar convergence rate is obtained even when half of the worker machines flip the sign of their stochastic gradients. These results appear to be relatively straightforward extensions of those in Bernstein et al.\", \"clarity\": \"The paper is mostly nicely written, with some occasionally imprecise claims.\\n\\nPage 5, right before Remark 1: it is wrongly claimed that signSGD converges to a critical point of the objective. This cannot be inferred from Theorem 1. (If the authors disagree, please give the complete details on how the random sequence x_t converges to some critical point x^*. or perhaps you are using the word \\\"convergence\\\" differently from its usual meaning?)\\n\\nPage 6, after Lemma 1. The authors claimed that \\\"the bound is elegant since ... even at low SNR we still have ... <= 1/2.\\\" In my opinion, this is not elegant at all. This is just your symmetric assumption on the noise, nothing more...\\n\\nEq (1): are you assuming g_i > 0 here? this inequality is false as you need to discuss the two cases. \\n\\n\\\"Therefore signSGD cannot converge for these noise distributions, ..... point in the wrong direction.\\\" This is a claim based on intuitive arguments but not a proven fact. Please refrain from using definitive sentences like this.\", \"footnote_1\": \"where is the discussion?\", \"originality\": \"Compared to the existing work of Bernstein et al, the novelty of the current submission is moderate. The main results appear to be relatively straightforward refinements of those in Bernstein. The observation that majority voting is Byzantine fault tolerant is perhaps not very surprising but it is certainly nice to have a formal justification.\", \"quality\": \"At times this submission feels like half-baked:\\n-- The theoretical results are about signSGD while the experiments are about sigNUM\\n-- The adversaries must send the negation of the sign? 
why can't they send an arbitrary bit vector?\\n-- From the authors' discussion \\\" we will include this feature in our open source code release\\\", \\\"plan to run more extensive experiments in the immediate future and will update the paper...\\\", and \\\"should be possible to extend the result to the mini-batch setting by combining ...\\\"\", \"significance\": \"This paper is certainly a nice addition to our understanding of signSGD. However, the currently obtained results are not very significant compared to the existing results: Theorem 1 is a minor refinement of the two results in Bernstein et al, while Theorem 2 in its current form is not very interesting, as it heavily restricts what an adversary worker machine can do. It would be more realistic if the adversaries can send random bits (still non-cooperating, though).\\n\\n\\n\\n##### added after author response #####\\nI appreciate the authors' efforts in trying to improve the draft by incorporating the reviewers' comments. While I do like the authors' continued study of signSGD, the submission has gone through some significant revision (more complete experiments + stronger adversary).\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"interesting distributed optimization algorithm based on signSGD\", \"review\": \"The paper proposes a distributed optimization method based on signSGD. Majority vote is used when aggregating the updates from different workers. The method itself is naturally communication efficient. Convergence analysis is provided under certain assumptions on the gradient. It also theoretically shows that it is robust even when up to half of the workers behave adversarially. Experiments are carried out in a parameter server environment and are shown to be effective in speeding up training. \\n\\nI find the paper to be solid and interesting. The idea of using signSGD for distributed optimization makes it attractive as it is naturally communication efficient. The work provides theoretical convergence analysis under the small batch setting by further assuming the gradient is unimodal and symmetric, which is the main theoretical contribution. Another main theoretical contribution is showing it is Byzantine fault tolerant. The experiments are extensive, demonstrating running-time speed-ups compared to normal SGD. \\n\\nIt is interesting to see a test set gap in the experiments. It remains to be further investigated whether the method itself inherently suffers from generalization problems or whether this is a result of imperfect parameter tuning. \\n\\nOne thing that would be interesting to explore further is to see how asynchronous updates of signSGD affect the convergence both in theory and practice. For example, some workers might be lost during one iteration; how will this affect the overall convergence?\\nAlso, it would be interesting to see a comparison of the proposed method with SGD + batch normalization, especially regarding their generalization performance. It might be interesting to explore what kind of regularization technique would be suitable for signed update kind of method. \\n\\nOverall, I think the paper proposes a novel distributed optimization algorithm that has both theoretical and experimental contributions.
The presentation of the paper is clear and easy to follow.\", \"suggestions\": \"I feel the experiments part could still be improved, as also mentioned in the paper, to achieve competitive results. More experiments on different tasks and DNN architectures could be performed.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Jupyter notebook\", \"comment\": \"Dear anonReviewers,\\n\\nHere's a Jupyter notebook in case you'd like to play with the algorithm: https://colab.research.google.com/drive/1PlD2jXoXr2a8e57aIDINCw1-7RIttRTt\\n\\nIt can be run in the browser, or you can just download it and run on your machine.\\n\\nBest wishes,\\nanonAuthors\"}" ] }
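Because the whole signSGD thread turns on one aggregation rule, a compact simulation makes the discussion concrete. The sketch below mimics, on a single machine, the majority-vote update the abstract describes; names are illustrative, and a real deployment would ship the 1-bit signs over the network instead of full tensors.

```python
# Minimal simulation of signSGD with majority vote: each worker contributes
# only sign(gradient), the server takes the elementwise majority (the sign of
# the sum of signs), and every worker applies the same signed step.
import torch

def majority_vote_step(params, worker_grads, lr):
    # worker_grads: list of gradient tensors, one stochastic estimate per worker
    signs = torch.stack([g.sign() for g in worker_grads])  # 1 bit per coordinate
    vote = signs.sum(dim=0).sign()                         # elementwise majority
    params.data.add_(vote, alpha=-lr)                      # shared signed update
```

With an even number of workers the elementwise vote can tie; `sign` then returns 0 and that coordinate is simply left unchanged for the step.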
Hylnis0qKX
Task-GAN for Improved GAN based Image Restoration
[ "Jiahong Ouyang", "Guanhua Wang", "Enhao Gong", "Kevin Chen", "John Pauly and Greg Zaharchuk" ]
Deep Learning (DL) algorithms based on Generative Adversarial Network (GAN) have demonstrated great potential in computer vision tasks such as image restoration. Despite the rapid development of image restoration algorithms using DL and GANs, image restoration for specific scenarios, such as medical image enhancement and super-resolved identity recognition, still faces challenges. How to ensure visually realistic restoration while avoiding hallucination or mode-collapse? How to make sure the visually plausible results do not contain hallucinated features jeopardizing downstream tasks such as pathology identification and subject identification? Here we propose to resolve these challenges by coupling the GAN based image restoration framework with another task-specific network. With medical imaging restoration as an example, the proposed model conducts an additional pathology recognition/classification task to ensure the preservation of detailed structures that are important to this task. Validated on multiple medical datasets, we demonstrate the proposed method leads to improved deep learning based image restoration while preserving the detailed structure and diagnostic features. Additionally, the trained task network shows potential to achieve super-human level performance in identifying pathology and diagnosis. Further validation on super-resolved identity recognition tasks also shows that the proposed method can be generalized for diverse image restoration tasks.
[ "Task-GAN: Improving Generative Adversarial Network for Image Restoration" ]
https://openreview.net/pdf?id=Hylnis0qKX
https://openreview.net/forum?id=Hylnis0qKX
ICLR.cc/2019/Conference
2019
{ "note_id": [ "SJeimXVvkE", "SyeiqXvxTm", "Hkgnx5Gtn7", "rJl0F0-r2Q" ], "note_type": [ "meta_review", "official_review", "official_review", "official_review" ], "note_created": [ 1544139554590, 1541596051355, 1541118451827, 1540853382382 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper657/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper657/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper657/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper657/AnonReviewer1" ] ], "structured_content_str": [ "{\"metareview\": \"This work presents a reconstruction GAN with an additional classification task in the objective loss function. Evaluations are carried out on medical and non-medical datasets.\", \"reviewers_raise_multiple_concerns_around_the_following\": [\"Novelty (all reviewers)\", \"Inadequate comparison baselines (all reviewers)\", \"Inadequate citations. (R2 & R3)\", \"Authors have not offered a rebuttal. Recommendation is reject. Work may be more suitable as an application paper for a medical conference or journal.\"], \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Reconstruction GAN with additional classification task, but lacking novelty, evaluation, and references.\"}", "{\"title\": \"An interesting paper but incremental technical contribution\", \"review\": \"In this paper, the authors propose a novel method of Task-GAN of image coupling by coupling GAN and a task-specific network, which alleviates to avoid hallucination or mode collapse. In general, the paper is addressing an important problem but I still have several concerns as follows:\\n1. The technical contribution is rather incremental since there exist numerous works on introducing another discriminator to GAN, such as Triple-GAN. \\n\\n2. Actually, as the authors mentioned, GAN is not an appropriate model for image restoration when accurate image completion is required. The authors are expected to make comparison with methods not based on GAN framework. \\n\\n3. The authors should clarify the details on the Task network since it is non-trivial to model a task.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Novelty is limited and explanation is not clear\", \"review\": \"Authors propose to augment GAN-based image restoration with another task-specific branch such as classification tasks for further improvement.\\n\\nHowever, the novelty is limited and not well explained.\\n1. The idea of adding a task-specific branch has been proposed in Huang et al\\u2019s work.\\nRui Huang, Shu Zhang, Tianyu Li, Ran He, Beyond Face Rotation: Global and Local Perception GAN for Photorealistic and Identity Preserving Frontal View Synthesis, ICCV 2017.\\n\\n2. It is not clear why for task-specific loss authors use mse loss instead of cross-entropy loss.\\n3. It is not clear how much data is used to train the super-resolution model and whether there is overlap between training data for super-resolution task and test data for recognition task.\\n4. The proposed method is not compared with other super-resolution methods.\\n5. There are typos with citations. 
There should be parenthesis around citations.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Interesting applications but limited novelty and poorly selected baseline methods.\", \"review\": \"This paper proposed a new method for image restoration based a task-discriminator in addition to the GAN network. It shows superior performance than the baseline methods without such task-discriminator on medical image restoration and image super-resolution. While the results are better, the idea seems straightforward and has limited novelty. Please see the following comments:\\n\\n1. Adding an task-discriminator in a GAN network seems straightforward to improve the specific task. And this idea has already used in existing papers, e.g. Cycada. \\n\\nHoffman, J., Tzeng, E., Park, T., Zhu, J.Y., Isola, P., Saenko, K., Efros, A.A. and Darrell, T., 2017. Cycada: Cycle-consistent adversarial domain adaptation. ICML, 2018\\n\\n2. On the application side, the results are not very convincing because the baseline methods were not selected properly. For medical image reconstruction and image super-resolution, the proposed method was not compared with any of the state-of-the-art methods, but only with the same method without a task-discriminator as a baseline. For those tasks, there are many traditional methods and deep nets with different losses. For example, a simple L1/L2 or perceptual loss probably leads to better PSNR than the GAN loss, which is not compared at all. See the attached references. \\n\\n\\nLedig, C., Theis, L., Husz\\u00e1r, F., Caballero, J., Cunningham, A., Acosta, A., Aitken, A.P., Tejani, A., Totz, J., Wang, Z. and Shi, W., Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network. In CVPR 2017.\\n\\nJohnson, J., Alahi, A. and Fei-Fei, L., Perceptual losses for real-time style transfer and super-resolution. In ECCV 2016.\\n\\nKim, J., Kwon Lee, J. and Mu Lee, K., Accurate image super-resolution using very deep convolutional networks. In CVPR 2016.\\n\\n3. Some questions about medical image datasets. For the low-dose PET dataset, the input was randomly undersampled by a factor of 100. What is the random pattern? Is it uniform? In addition, why not acquire real low-dose data and show the quality results using the proposed model? For the multi-constast MRI data, how is the input generated and what is the ground-truth?\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}" ] }
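As a rough illustration of the coupling described in the Task-GAN record above — a restoration generator trained against both an image discriminator and a task-specific network — the following minimal PyTorch sketch combines the three loss terms. It is not the authors' implementation: the module names (G, D, T) and loss weights are hypothetical, and the MSE task term follows the reviewers' note that the paper uses an MSE loss on labels.

```python
import torch
import torch.nn.functional as F

def task_gan_generator_loss(G, D, T, degraded, clean, labels,
                            w_adv=1.0, w_pix=10.0, w_task=1.0):
    """One generator step: restored images should fool the discriminator D,
    stay close to the ground-truth image, and remain recognizable by the
    task network T (e.g. a pathology classifier)."""
    restored = G(degraded)
    d_out = D(restored)
    l_adv = F.binary_cross_entropy_with_logits(d_out, torch.ones_like(d_out))
    l_pix = F.l1_loss(restored, clean)            # pixel fidelity term
    l_task = F.mse_loss(T(restored), labels)      # preserve diagnostic features
    return w_adv * l_adv + w_pix * l_pix + w_task * l_task
```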
Hkgnii09Ym
Set Transformer
[ "Juho Lee", "Yoonho Lee", "Jungtaek Kim", "Adam R. Kosiorek", "Seungjin Choi", "Yee Whye Teh" ]
Many machine learning tasks such as multiple instance learning, 3D shape recognition and few-shot image classification are defined on sets of instances. Since solutions to such problems do not depend on the permutation of elements of the set, models used to address them should be permutation invariant. We present an attention-based neural network module, the Set Transformer, specifically designed to model interactions among elements in the input set. The model consists of an encoder and a decoder, both of which rely on attention mechanisms. In an effort to reduce computational complexity, we introduce an attention scheme inspired by inducing point methods from sparse Gaussian process literature. It reduces computation time of self-attention from quadratic to linear in the number of elements in the set. We show that our model is theoretically attractive and we evaluate it on a range of tasks, demonstrating increased performance compared to recent methods for set-structured data.
[ "attention", "meta-learning", "set-input neural networks", "permutation invariant modeling" ]
https://openreview.net/pdf?id=Hkgnii09Ym
https://openreview.net/forum?id=Hkgnii09Ym
ICLR.cc/2019/Conference
2019
{ "note_id": [ "rJgXZuvxl4", "HkepSRl1JN", "ryl1nBJ0pQ", "SklNCMkRp7", "rkxjwGJRTQ", "B1lmlfyRa7", "BJec0UPRhm", "B1eH_an927", "rkx3ROgchX" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544742907378, 1543601732670, 1542481318931, 1542480587639, 1542480482934, 1542480363373, 1541465809754, 1541225836691, 1541175508469 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper656/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper656/Authors" ], [ "ICLR.cc/2019/Conference/Paper656/Authors" ], [ "ICLR.cc/2019/Conference/Paper656/Authors" ], [ "ICLR.cc/2019/Conference/Paper656/Authors" ], [ "ICLR.cc/2019/Conference/Paper656/Authors" ], [ "ICLR.cc/2019/Conference/Paper656/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper656/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper656/AnonReviewer2" ] ], "structured_content_str": [ "{\"metareview\": \"This paper introduces set transformer for set inputs. The idea is built upon the transformer and introduces the attention mechanism. Major concerns on novelty were raised by the reviewers.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Novelty is limited.\"}", "{\"title\": \"Response after edits\", \"comment\": \"Thank you very much for raising the score. Sorry for not updating Table 1, we were aware of it but forgot to update it when we uploaded our revision. We will correct it upon our acceptance. We will also try to discuss more about permutation equivariant layers as you suggested.\"}", "{\"title\": \"Revision updated\", \"comment\": \"Dear reviewers,\\n\\nThanks for your comments. According to your opinion, we added three baselines to all experiments (mean pooling based permutation equivariant deep set , max pooling based permutation equivariant deep set (Zaheer et al, 2017), dot product attention based pooling (Yang et al., 2018, Ilse et al., 2018)). We've also added some extra experiments to see the scalability of the set transformer on large scale clustering experiments. Right now we are running the point cloud experiments with 5,000 pts, and the results will be updated as soon as it is completed. \\n\\nThere has been common concern about the novelty of our work. We want to emphasize again that our architecture is not a simple combination of existing works or naive adaptation of attention mechanism. Please refer to our comment to Reviewer 3 regarding the originality. Thanks.\"}", "{\"title\": \"Added permutation equivariant baselines\", \"comment\": \"Thanks for your constructive comments.\\ni) Consider permutation equivariant mappings (Zaheer et al).\\nThanks for pointing this out. We added permutation equivariant architectures with both mean pooling and max pooling (rFFp-mean and rFFp-max) as baselines for all experiments, and have updated the paper. Our overall observation is that these permutation equivariant baselines do help, but the performance gain was not as significant as the gains achieved by SAB, ISAB and PMA.\\n\\nii) Cite and consider Muandet et al. and Oliva et al.\\nThanks for mentioning the related works. Muandet et al. was cited and mentioned in the introduction in the submitted version of our paper. We have revised to include Oliva et al.\\n\\niii) Add modelnet w/5000; will code be available?\\nWe had no time to conduct experiments with 5,000 pts during our first submission. 
\\nRight now we are running experiments with 5,000 pts and they are going to be added to the appendix as soon as it is completed. The code will definitely be available open source.\"}", "{\"title\": \"About ablation studies\", \"comment\": \"Thanks for your constructive comments.\\nIn our experiments, we compare (rFF+Pooling, SAB+Pooling, ISAB+Pooling, rFF+PMA, SAB+PMA, ISAB+PMA).\\nEach of those variants are the Set Transformer with some (or no) components removed, so the experiments do report ablation results. We also added extra baselines (rFFp_mean + Pooling, rFFp_max + Pooling, rFF + Dotprod), and comparison to these methods supports our claim on the importance of having self-attention mechanism.\"}", "{\"title\": \"Clarification for the novelty and additional experiments\", \"comment\": \"Thanks for your constructive comments.\\n\\ni) Clarify originality\\nOur method is not a simple combination of [1,2,3]. [1,3] uses dot product attention, where the transformed features are fed into a FF layer to produce softmax weights to be used to pool the features via weighted average. Hence, these methods do not take into account pairwise/higher-order interactions between elements in sets. We added dot-product attention based pooling as another baseline for all experiments. As we reviewed in the related works section, there are works using transformer-type self attention mechanism in encoder part of the model [2,4], but none of them were presented in context of permutation invariant set-taking neural nets. We summarize the novelty of our model below.\\n\\n- We adapted transformer based self-attention mechanism for *both* encoder and decoder part of permutation invariant set networks. \\n\\n- We introduce ISAB, which allows us to implement self-attention mechanism with reduced runtime complexity. This is an original contribution that was not present in previous works.\\n\\n- We introduce PMA, which differs from the dot-product attention-based pooling schemes presented in previous works. Especially, having multiple seed vectors and applying self-attention among them is a novel idea that we found to be very effective, especially for clustering-like problems, where modeling of output interactions (such as explaining away) is important.\\n\\n[1] Yang et al. 2018, Attentional aggregation of deep feature sets for multi-view 3d reconstruction.\\n[2] Mishra et al. 2018, A simple neural attentive meta-learner.\\n[3] Ilse et al. 2018, Attention-based deep multiple instance learning.\\n[4] Ma et al. 2018, Attend and interact: higher-order object interactions for video understanding.\\n\\nii) Runtime concerns; can Set Transformer scale up?\\nISABs should be able to scale up since they require O(n) memory and time, where n is the number of points in a set. In fact this is precisely why we introduced ISAB. We have added additional experiments to demonstrate actual running time of ISAB and SAB, and the tradeoff between accuracy and running time with respect to the different number of inducing points: see Appendix C.1 and Figure 5 in the revised paper.\\n\\niii) Is attention useful when the set size is large and the embedding is expressive?\\nFirst of all, please note that ISAB + Pooling is also our contribution, which performed the best in Table 6. We presume that the reason why the set transformer was not as effective as ISAB + Pooling in Table 6 was due to the nature of the problem. 
In point-cloud classification, once we encode interactions between elements via the self-attention mechanism, decoding them into label vectors does not require complex architectures like PMA. To verify this, we conducted extra experiments on clustering, where we used up to 5,000 data points per set. See Appendix B.3.2 and Table 12. In this experiment, where the PMA plays an important role, set transformer works extremely well with as few as 32 inducing points.\"}", "{\"title\": \"A good paper but need some clarifications and improvements\", \"review\": \"This paper presented an attention-based neural network, namely set transformer, a new neural model\\nbased on original transformer designed for set inputs. The basic idea is to introduce the attention\\nmechanism in both learning the feature embeddings of the set inputs during \\u201cencoding\\u201d and aggregating \\nthese embeddings during \\u201cdecoding\\u201d. The paper is written clearly and well motivated. The extensive \\nset of experiments were conducted to demonstrate the effectiveness of the proposed method. In general, \\nI like reading this paper but there are some limitations or unclear parts I need authors to clarify\\nand explain. \\n\\ni) The proposed architecture is mainly adopted from the original transformer but it is highly related\\nto the baselines used in the experiments. For instance, it seems like that the current set \\ntransformer is a simple combination of Yang et al.(2018) and Mishra et al.(2018) (using Stack of\\nSABs) in encoder side and of Ilse et al.(2018) (using PMA and stack of SABs) in the decoder side. \\nThis simple combination makes the novelty of this paper unclear. I would like authors to clarify \\nmore on the originality w.r.t. these previous works. \\n\\nii) Although authors proposed a variant of SABs - ISABs using landmark points to accelerate the \\ncomputation, there are no any runtime comparisons between SABs and ISABs by fixing other components. \\nIt would be interesting to see that ISABs can approach the performance SABs and how it approaches it. \\nFor instance, shall we expect that ISABs approach the performance of SABs when increasing the number\\nof landmark points (inducing points)? Since in practice most of datasets are relatedly large, I think\\nunderstanding the behavior of ISABs is a more interesting problem. \\n\\niii) After seeing the results in table 6, I have quite concerned about the practical performance of\\nset transformer on relatively large datasets (like 1000 points each class in the settings.) It looks\\nto me that not only set transformer may have computational issues to scale up, but more importantly\\nthat when encoder learned really expressive embeddings with a relatively large number of the set \\ninputs it might be little need to leverage attention in pooling anymore. I would like authors to \\nconduct some other experiments on relatively large datasets to verify this hypothesis, which is \\nimportant for the practical applications of the proposed model.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Interesting paper that uses attention for set inputs but needs more ablation study\", \"review\": \"The paper proposes several variants of attention-based algorithms for set inputs. 
Compared with previous approach that processes each instance separately and then pooling, the proposed algorithm models the interactions among the instances within the set and performs better on tasks where such properties are important.\\n\\nThe experiments seem promising. The paper compares SAB and ISAB to rFF + pooling over multiple different tasks and SAB and ISAB outperform rFF + pooling in many tasks.\\n\\nOne drawback of the paper which limits its significance is that there are seemingly too many components and it is not clear which components are most important and which are not unnecessary. The authors can conduct some ablation study by removing some components and compare the performance to understand which parts are essential to the improvements.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Missing comparisons to permutation equivariant DeepSets\", \"review\": \"This paper looks at stacking attention mechanism for learning over sets.\\n\\nI think that the paper is well written overall. The architecture put forth is a fairly straightforward implementation of attention. Thus the methodological contribution is incremental. Still, it is nice to see some implementation of an attention model be considered for permutation invariant set embeddings.\\n\\nHowever, there are some core misrepresentations and omissions that make publication difficult. The main problem is that the paper completely ignores the permutation equivariant mappings discussed in DeepSets (Zaheer 2017). See (4) and (23) of https://arxiv.org/pdf/1703.06114.pdf: \\\"Since composition of permutation equivariant functions is also permutation equivariant, we can build deep models by stacking layers.\\\"\\nIn practice this is often done by mapping points x_i in a set as x_i -> \\\\phi(x_i) - max_j \\\\phi(x_j). Stacking this layer works surprisingly well, typically better than just with a single pool. Thus, the permutation equivalent mappings of Zaheer 2017, which do have higher-order interactions and are linear in the number of points, are a glaring omission of table 1 and all of the experiments. Furthermore, the omission leads to a misrepresentation of the work.\", \"another_unfortunate_omission_is_previous_work_that_considers_set_and_distribution_data_through_kernels_and_other_nonparametric_methods_such_as\": \"Muandet, Krikamol, et al. \\\"Learning from distributions via support measure machines.\\\" Advances in neural information processing systems. 2012.\\nOliva, Junier, Barnab\\u00e1s P\\u00f3czos, and Jeff Schneider. \\\"Distribution to distribution regression.\\\" International Conference on Machine Learning. 2013.\\n\\nIt is also odd that the paper compared to DeepSets on modelnet with 100 and 1000 points but not with 5000 points. Will there be code available?\\n\\nWithout a better description of and comparison to permutation equivariant mappings I would feel hesitant to recommend publication.\", \"edit\": \"In light of the revised experiments and inclusion of permutation equivariant deepset layers, I'm inclined to recommend publication. However, if I could nitpick further, I think it would be nice to make some edit (or addition) to Table 1 to include permutation equivariant deepsets. 
Moreover, it would be nice to have some additional description of permutation equivariant layers in Section 2.1.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}" ] }
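The O(n^2)-to-O(n) reduction that the Set Transformer abstract attributes to inducing points can be sketched in a few lines. The block below is a deliberately simplified ISAB — it omits the residual connections and row-wise feed-forward layers of the full block in the paper — and all dimensions and hyperparameters are illustrative only.

```python
import torch
import torch.nn as nn

class SimplifiedISAB(nn.Module):
    """n set elements attend to m << n learned inducing points and back,
    so the cost grows as O(n * m) rather than the O(n^2) of full self-attention."""
    def __init__(self, dim, num_heads=4, num_inducing=16):
        super().__init__()
        self.inducing = nn.Parameter(torch.randn(1, num_inducing, dim))
        self.attn_in = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.attn_out = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x):                      # x: (batch, n, dim)
        i = self.inducing.expand(x.size(0), -1, -1)
        h, _ = self.attn_in(i, x, x)           # inducing points summarize the set
        out, _ = self.attn_out(x, h, h)        # elements read the summary back
        return out                             # permutation-equivariant output

x = torch.randn(8, 100, 64)                    # 8 sets of 100 elements each
print(SimplifiedISAB(64)(x).shape)             # torch.Size([8, 100, 64])
```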
HkljioCcFQ
MARGINALIZED AVERAGE ATTENTIONAL NETWORK FOR WEAKLY-SUPERVISED LEARNING
[ "Yuan Yuan", "Yueming Lyu", "Xi Shen", "Ivor W. Tsang", "Dit-Yan Yeung" ]
In weakly-supervised temporal action localization, previous works have failed to locate dense and integral regions for each entire action due to the overestimation of the most salient regions. To alleviate this issue, we propose a marginalized average attentional network (MAAN) to suppress the dominant response of the most salient regions in a principled manner. The MAAN employs a novel marginalized average aggregation (MAA) module and learns a set of latent discriminative probabilities in an end-to-end fashion. MAA samples multiple subsets from the video snippet features according to a set of latent discriminative probabilities and takes the expectation over all the averaged subset features. Theoretically, we prove that the MAA module with learned latent discriminative probabilities successfully reduces the difference in responses between the most salient regions and the others. Therefore, MAAN is able to generate better class activation sequences and identify dense and integral action regions in the videos. Moreover, we propose a fast algorithm to reduce the complexity of constructing MAA from $O(2^T)$ to $O(T^2)$. Extensive experiments on two large-scale video datasets show that our MAAN achieves a superior performance on weakly-supervised temporal action localization.
[ "feature aggregation", "weakly supervised learning", "temporal action localization" ]
https://openreview.net/pdf?id=HkljioCcFQ
https://openreview.net/forum?id=HkljioCcFQ
ICLR.cc/2019/Conference
2019
{ "note_id": [ "rJgflZ32J4", "H1lBtyH214", "r1xzqeXtAm", "r1lyqwftCQ", "SkgHR0-FAQ", "HkgzxyE0hm", "Syec-wnqn7", "rkxNicb5nm" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544499434138, 1544470396750, 1543217290284, 1543214983241, 1543212749348, 1541451497713, 1541224194218, 1541180060403 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper655/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper655/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper655/Authors" ], [ "ICLR.cc/2019/Conference/Paper655/Authors" ], [ "ICLR.cc/2019/Conference/Paper655/Authors" ], [ "ICLR.cc/2019/Conference/Paper655/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper655/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper655/AnonReviewer1" ] ], "structured_content_str": [ "{\"metareview\": \"The paper proposes a new attentional pooling mechanism that potentially addresses the issues of simple attention-based weighted averaging (where discriminative parts/frames might get disportionately high attentions). A nice contribution of the paper is to propose an alternative mechanism with theoretical proofs, and it also presents a method for fast recurrent computation. The experimental results show that the proposed attention mechanism improves over prior methods (e.g., STPN) on THUMOS14 and ActivityNet1.3 datasets. In terms of weaknesses: (1) the computational cost may be quite significant. (2) the proposed method should be evaluated over several tasks beyond activity recognition, but it\\u2019s unclear how it would work.\\n\\nThe authors provided positive proof-of-concept results on weakly supervised object localization task, improving over CAM-based methods. However, CAM baseline is a reasonable but not the strongest method and the weakly-supervised object recognition/segmentation domains are much more competitive domains, so it's unclear if the proposed method would achieve the state-of-the-art by simply replacing the weighted-averaging-attentional-pooling with the proposed attention mechanism. In addition, the description on how to perform attentional pooling over images is not clearly described (it\\u2019s not clear how the 1D sequence-based recurrent attention method can be extended to 2-D cases). However, this would not be a reason to reject the paper. \\n\\nFinally, the paper\\u2019s presentation would need improvement. I would suggest that the authors give more intuitive explanations and rationale before going into technical details. The paper starts with Figure 1 which is not really well motivated/explained, so it could be moved to a later part. Overall, there are interesting technical contributions with positive results, but there are issues to be addressed.\", \"confidence\": \"3: The area chair is somewhat confident\", \"recommendation\": \"Accept (Poster)\", \"title\": \"metareview\"}", "{\"title\": \"Response to Author Rebuttal\", \"comment\": \"I appreciate the updated results on weakly-supervised object localization on images. Overall, I think the paper has reasonable contributions. The improvement in THUMOS14 dataset over STPN is not significant, but the results on ActivityNet look promising and the results on weakly-supervised object localization are convincing to believe that the proposed method can be generally useful to address the challenge of weakly-supervised localization where the model focuses on the most discriminative regions. 
For these reasons, I maintain my review score as weakly accept.\"}", "{\"title\": \"Some clarifications about the qualitative and quantitative results\", \"comment\": \"Thanks very much for the valuable comments and suggestions. We have clarified some questions listed below:\", \"q\": \"Results.\\nThe quantitative results show the improvement of our methods compared with the baselines in the experiments. It means that our method can bring more true positives than the false positives. We also show more qualitative results on image object localization task (Appendix F in the updated paper).\"}", "{\"title\": \"Experiments on weakly-supervised object localization task\", \"comment\": \"Thanks very much for the valuable comments and suggestions. \\n\\nWe have applied the idea to weakly-supervised image object localization task. As suggested, similar to CAM (Zhou et al. 2016), we plug the proposed MAA pooling method on top of the CNN feature map instead of global average pooling. Besides compared with global average pooling, we have also compared with the weighted average pooling. The specific experimental settings and results are shown in Appendix F in the updated paper. \\n\\nAs for the time complexity, we use 20 snippets in the training phase. At the test phase for localization, we forward each snippet i to the trained model and compute the p_i but not the lamda_i, as shown in Equation (14) (the proof is demonstrated in Proposition 2 in Section 2.2). Therefore, the time complexity at test phase is indeed O(T), which can also be easily parallelized O(1).\\n\\nWe have corrected the citation as suggested.\"}", "{\"title\": \"Clarifying experimental setting and show more experiments on other tasks\", \"comment\": \"Thanks very much for the comments and suggestion on other localization tasks. \\n\\nActually, many works are based on the model pre-trained on other datasets like ImageNet and Kinetics (Carreira and Zisserman 2017). The compared STPN model in this paper has also used the I3D model pre-trained on Kinetics dataset. We compare the proposed MAAN with STPN and other baseline models on the exact same experimental settings. \\n\\nThe proposed feature aggregator can be used in other weakly-supervised learning tasks. For example, we have applied the proposed method on weakly-supervised image object localization task. The experimental settings and results are shown in the Appendix F in the updated paper.\"}", "{\"title\": \"Could this paper be used for other tasks beyond video action understanding?\", \"review\": \"This paper considers the problem of weakly-supervised temporal action localization. It proposes a marginalized average attention network (MAAN) to suppress the effect of overestimating salient regions. Theoretically, this paper proves that the learned latent discriminative probabilities reduce the difference of responses between the most salient regions and the others. In addition, it develops a fast algorithm to reduce the complexity of constructing MAA to O(T^2). Experiments are conducted on THUMOST14 and ActivityNet 1.3.\\n\\nI like the theoretical part of this paper but have concerns about the experiments. More specifically, my doubts are\\n\\n- The I3D network models are not trained from scratch. The parameters are borrowed from (Carreira and Zisserman 2017), which in fact make the attention averaging very easy. 
I don\\u2019t know whether the success is because the proposed MAAN is working or because the feature representation is very powerful.\\n\\n- If possible, I wish to see the success of the proposed method for other tasks, such as image caption generation, and machine translation. If the paper can show success in any of such task, I would like to adjust my rating to above acceptance.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Well executed paper on a reasonable idea\", \"review\": \"Summary\\nThis paper proposed a stochastic pooling method over the temporal dimension for weakly-supervised video localization problem. The main motivation is to resolve a problem of discriminative attention that tends to focus on a few discriminative parts of an input data, which is not desirable for the purpose of dense labeling (i.e. localization). The proposed stochastic pooling method addressed this problem by aggregating all possible subsets of snippets, where each subset is constructed by sampling snppets from learnable sampling distribution. The proposed method showed that such approach learns more smooth attention both theoretically and empirically.\", \"clarity\": \"The paper is well written and easy to follow. The ideas and methods are clearly presented.\", \"originality_and_significance\": \"The proposed stochastic pooling is novel and demonstrated that empirically useful. Given that the proposed method can be generally applicable to other tasks, I think the significance of the work is also reasonable. One suggestion is applying the idea to semantic segmentation, which also shares a similar problem setting but easier to evaluate its impact than videos. Similar to (Zhou et al. 2016), you can plug the proposed pooling method on top of CNN feature map instead of global average pooling, which might be doable with the more affordable computational cost since the number of hidden units for pooling is much smaller than the length of videos (N < T). \\n\\nOne downside of the proposed method is its computational complexity (O(T^2)). This is much higher than the one for other feedforward methods (O(T)), which can be easily parallelized (O(1)). This can be a big problem when we have to handle very long sequences too (increasing the length of snippets could be one alternative, but it is not desirable for localization at the end). Considering this disadvantage, the performance gain by the proposed method may not be considered attractive enough.\", \"experiment\": \"Overall, the experiment looks convincing to me.\", \"minor_comments\": \"\", \"citation_error\": \"Wrong citation: Nguyen et al. CVPR 2017 -> CVPR 2018\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Overly-complicated explanation of method, qualitative and quantitative results do not clearly reflect the proposed contribution.\", \"review\": \"In this paper the authors focus on the problem of weakly-supervised action localization. The authors state that a problem with weakly-supervised attention based methods is that they tend to focus on only the most salient regions and propose a solution to this which reduces the difference between the responses for the most salient regions and other regions. 
They do this by employing marginalized average aggregation to averaging a sample a subset of features in relation to their latent discriminative probability then calculating the expectation over all possible subsets to produce a final aggregation.\\n\\nThe problem is interesting, especially noting that current attention methods suffer from paying attention to the most salient regions therefore missing many action segments in action localization. The authors build upon an existing weakly-supervised action localization framework, having identified a weakness of it and propose a solution. The work also pays attention to the algorithm's speed which is practically useful. The experiments also compare to several other potential feature aggregators.\\n\\nHowever, there are several weakness of the current version of the paper:\\n\\n- In parts the paper feels overly complicated, particularly in the method (section 2). It would be good to see more intuitive explanations of the concepts introduce here. For instance, the author's state that c_i captures the contextual information from other video snippets, it would be good to see a figure with an example video and the behaviour of p_i and c_i as opposed to lamba_i. I found it difficult to map p_i, c_i to z and lambda used elsewhere.\\n\\n- The experimental evidence does not show where the improvement comes from. The authors manage to acheieve a 4-5% improvement over STPN through their re-implemenation of the algorithm, however only have a ~2% improve with their marginalized average attention on THUMOS. I would like to know the cause in the increase over the original STPN results: is it a case of not being able to replicate the results of STPN or do the different parameter choices, such as use of leakly RELU, 20 snippets instead of 400 and only rejecting classes whose video-level probabilities are below 0.01 instead of 0.1, cause this big of an increase in results? There is also little evidence that the actual proposal (contextual information) is the reason for the reported improvement.\\n\\n- There seems to be several gaps in the review of current literature. Firstly, the authors refer to Wei et al. 2017 and Zhang et al. 2018b as works which erase the most salient regions to be able to explore regions other than the most salient. The authors state that the problem with these methods is that they are not end-to-end trainable, however Li et al. 2018 'Tell Me Where to Look': Guided Attention Inference Network' proposes a method which erases regions which is trainable end-to-end. Secondly, the authors do not mention the recent work W-TALC which performs weakly-supervised action localization and outperforms STPN. It would be good to have a baseline against this method.\\n\\n- The qualitative results in this paper are confusing and not convincing. It is true that the MAAN's activation sequence shows peaks which correspond to groundtruth and are not present in other methods. However, the MAAN activation sequence also shows several extra peaks not present in other methods and also not present in the groundtruth, therefore it looks like it is keener to predict the presence of the action causing more true positives, but also more false positives. It would be good to see some discussion of these failure cases and/or more qualitative results. 
The current figure could be easily compressed by only showing one instance of the ground-truth instead of one next to each method.\\n\\nI like the idea of the paper however I am currently unconvinced by the results that this is the correct method to solve the problem.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}" ] }
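For intuition about the marginalized average aggregation (MAA) debated in the reviews above, the expectation it computes can be approximated by brute-force sampling. This Monte-Carlo sketch only illustrates the quantity being computed; the paper's actual contribution is an exact O(T^2) recurrence for it, which is not reproduced here, and all shapes below are illustrative.

```python
import torch

def maa_monte_carlo(features, probs, num_samples=2000):
    """Estimate E[average of a random subset of snippet features], where
    snippet i joins the subset with latent discriminative probability probs[i]."""
    T, d = features.shape
    agg, valid = torch.zeros(d), 0
    for _ in range(num_samples):
        mask = (torch.rand(T) < probs).float()
        if mask.sum() == 0:
            continue                       # skip the empty subset
        agg += (mask.unsqueeze(1) * features).sum(dim=0) / mask.sum()
        valid += 1
    return agg / max(valid, 1)

feats = torch.randn(20, 128)               # 20 snippet features
p = torch.sigmoid(torch.randn(20))         # latent discriminative probabilities
print(maa_monte_carlo(feats, p).shape)     # torch.Size([128])
```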
rJEjjoR9K7
Learning Robust Representations by Projecting Superficial Statistics Out
[ "Haohan Wang", "Zexue He", "Zachary C. Lipton", "Eric P. Xing" ]
Despite impressive performance as evaluated on i.i.d. holdout data, deep neural networks depend heavily on superficial statistics of the training data and are liable to break under distribution shift. For example, subtle changes to the background or texture of an image can break a seemingly powerful classifier. Building on previous work on domain generalization, we hope to produce a classifier that will generalize to previously unseen domains, even when domain identifiers are not available during training. This setting is challenging because the model may extract many distribution-specific (superficial) signals together with distribution-agnostic (semantic) signals. To overcome this challenge, we incorporate the gray-level co-occurrence matrix (GLCM) to extract patterns that our prior knowledge suggests are superficial: they are sensitive to the texture but unable to capture the gestalt of an image. Then we introduce two techniques for improving our networks' out-of-sample performance. The first method is built on the reverse gradient method that pushes our model to learn representations from which the GLCM representation is not predictable. The second method is built on the independence introduced by projecting the model's representation onto the subspace orthogonal to GLCM representation's. We test our method on the battery of standard domain generalization data sets and, interestingly, achieve comparable or better performance as compared to other domain generalization methods that explicitly require samples from the target distribution for training.
[ "domain generalization", "robustness" ]
https://openreview.net/pdf?id=rJEjjoR9K7
https://openreview.net/forum?id=rJEjjoR9K7
ICLR.cc/2019/Conference
2019
{ "note_id": [ "Syxg2653JV", "S1xWkt7QCX", "BJet3_XXRQ", "S1gEYuQXC7", "SJxqbOQmRX", "rJxbWynhhm", "H1ehlWduhm", "HJee7cfwh7" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544494503744, 1542826201175, 1542826160711, 1542826107971, 1542825985735, 1541353208887, 1541075188360, 1540987415677 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper654/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper654/Authors" ], [ "ICLR.cc/2019/Conference/Paper654/Authors" ], [ "ICLR.cc/2019/Conference/Paper654/Authors" ], [ "ICLR.cc/2019/Conference/Paper654/Authors" ], [ "ICLR.cc/2019/Conference/Paper654/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper654/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper654/AnonReviewer3" ] ], "structured_content_str": [ "{\"metareview\": \"The paper presents a new approach for domain generalization whereby the original supervised model is trained with an explicit objective to ignore so called superficial statistics present in the training set but which may not be present in future test sets. The paper proposes using a differentiable variant of gray-level co-occurrence matrix to capture the textural information and then experiments with two techniques for learning feature invariance. All reviewers agree the approach is novel, unique, and potentially high impact to the community.\\n\\nThe main issues center around reproducibility as well as the intended scope of problems this approach addresses. The authors have offered to include further discussions in the final version to address these points. Doing so will strengthen the paper and aid the community in building upon this work.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Oral)\", \"title\": \"Original work for domain generalization with strong experimental evidence\"}", "{\"title\": \"Reply to Reviewer 3\", \"comment\": \"Thank you for the strong positive assessment of our work. We\\u2019re glad that you appreciated the originality of our approach, the value of our new datasets, and the quality of our exposition. We will continue to improve the draft in the camera-ready version.\"}", "{\"title\": \"Reply to Reviewer 2\", \"comment\": \"Thanks for a detailed review. We are grateful both for your big-picture feedback and for your extensive granular suggestions to improve the exposition of our paper. We were glad to see that you appreciated our creativity in using GLCM and recognized the modularity of our design. Your question regarding F_L and F_P is insightful and we\\u2019re glad that you identified this missing detail in the paper. We compared evaluation with F_L and F_P and discovered that performance was equivocal. This favors the use of F_P, allowing us to use the machinery of the GLCM at training time but discarding it at test time. We promise to add this discussion and supporting experiments to the camera-ready version. Additionally, we will revise the first paragraph of 3.2 per your suggestions and fix the numerous small typos and type-setting corrections that you identified. Thanks again for your generous feedback and attention to detail.\"}", "{\"title\": \"Reply to Reviewer 1\", \"comment\": [\"Thank you very much for these comments. We are glad that you appreciated the paper\\u2019s overall aims and recognized the general applicability of the methodology that we propose. 
We are also grateful for your constructive suggestions:\", \"To address your concerns about reproducibility we will add an appendix providing extensive detail about all heuristics employed during training. Additionally, we plan to release open source version of all of our code upon publication.\", \"Regarding Table 2: thanks for pointing this out. We agree that while an argument can be found in the main text, Table 2 is poorly described in the caption and must be better presented in the camera-ready version. In short, domains D and W here overlap significantly. Therefore a model trained on one and evaluated on the other perform well, and we conjectured that discarding the superficial information can actually degrade performance.\"]}", "{\"title\": \"General Reply to Reviews\", \"comment\": \"We would like to thank all of the reviewers for their constructive reviews. Overall, we are glad to see that all three reviewers champion the paper, appreciating the paper\\u2019s overall aim, creativity in revisiting GLCM, proposed experiment set-ups, and the strength of the empirical results. We are also grateful for the reviewers\\u2019 constructive suggestions which will help to improve the camera-ready version of the paper. Please find comments We will answer the reviewers\\u2019 comments individually.\"}", "{\"title\": \"A domain generalization approach is introduced to reveal semantic (relevant) information based on a linear projection scheme from CNN and NGLCM ouput layers.\", \"review\": \"The paper is clear regarding motivation, related work, and mathematical foundations. The introduced cross-local intrinsic dimensionality- (CLID) seems to be naive but practical for GAN assessment. Notably, the experimental results seem to be convincing and illustrative.\\n\\nThe domain generalization idea from CNN-based discriminative feature extraction and gray level co-occurrence matrix-based high-frequency coding (superficial information), is an elegant strategy to favor domain generalization. Indeed, the linear projection learned from CNN, and GLCM features could be extended to different real-world applications regarding domain generalization and transferring learning. So, the paper is clear to follow and provides significant insights into a current topic.\", \"pros\": [\"Clear mathematical foundations.\", \"The approach can be applied to different up-to-date problems.\", \"-Though the obtained results are fair, the introduced approach would lead to significant breakthroughs regarding domain generalization techniques.\"], \"cons\": \"-Some experimental results can be difficult to reproduce. Indeed, authors claim that the training heuristic must be enhanced.\\n-Table 2 results are not convincing.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Revisit old image processing idea, add parameters, make differentiable. Show that it can be used to ignore background textures. Extensive experiments on domain adaptation.\", \"review\": \"Summary:\\nThe paper proposes an unsupervised approach to identify image features that are not meaningful for image classification tasks. The goal is to address the domain adaptation (DA)/domain generalization (DG) issue. The paper introduces a new learning task where the domain identity is unavailable during training, called unguided domain generalization (UDG). 
The proposed approach is based on an old method of using gray level co-occurence matrix, updated to allow for differentiable training. This new approach is used in two different ways to reduce the effect of background texture in a classification task. The paper introduces a new dataset, and shows extensive and carefully designed experiments using the new data as well as existing domain generalization datasets.\\n\\nThis paper revisits an old idea from image processing in a new way, and provides an interesting unsupervised method for identifying so called superficial features. The proposed block seems to be very modular in design, and can be plugged into other architectures. The main weakness is that it is a bit unclear exactly what is being assumed as \\\"background texture\\\" by the authors.\", \"overall_comments\": [\"Some more clarity on what you mean by superficial statistics would be good. E.g. by drawing samples. Are you assuming the object is centered? Somehow filling the image? Different patch statistics? How about a texture classification task?\", \"please derive why NGLCM reduces to GLCM in the appendix. Also show the effect of dropping the uniqueness constraint.\", \"Section 3.2: I assume you are referring to an autoencoder style architecture here. Please rewrite the first paragraph. The current setup seems to indicate that you doing supervised training, since you have labels y, but then you talk about decoder and encoder.\", \"Section 3.2: Please expand upon why you use F_L for training but F_P during testing\", \"Minor typos/issues:\", \"Last bullet in Section 1: DG not yet defined, only defined in Section 2.\", \"page 2, Section 2, para 1: data collection conduct. Please reword.\", \"page 2, Section 2, para 2: Sentence: For a machine learning ... There is no object in this sentence. Not sure what you are trying to define.\", \"page 2, Section 2, para 2: Is $\\\\mathcal{S}$ and $\\\\mathcal{T}$ not intersecting?\", \"page 2, Section 2.1: Heckman (1977), use \\\\citep\", \"page 2, Section 2.1: Manski, citep and missing year\", \"page 3, Section 2.1: Kumagai, use citet\", \"page 3, Section 3.1: We first expand ... --> We first flatten A into a row vector\", \"page 4, Section 3.1: b is undefined. I assume you mean d?\", \"page 4, Section 3.1: twice: contrain --> constraint\", \"page 4, Section 3.2: <X,y> --> {X,y} as used in Section 3.1.\", \"page 4, Section 3.2, just below equation: as is introduced in the previous section. New sentence about MLP please. And MLP not defined.\", \"page 4, Section 3.2, next paragraph: missing left bracket (\", \"page 4, Section 3.2: inferred from its context.\", \"page 5, Section 4: popular DG method (DANN)\", \"page 7: the rest one into --> the remaining one into\", \"page 8: rewrite: when the empirical performance interestingly preserves.\", \"page 8, last sentence: GD --> DG\", \"A2.2: can bare with. --> can deal with.\", \"A2.2: linear algebra and Kailath Variant. Unsure what you are trying to say.\", \"A2.2: sensitive to noises --> sensitive to noise.\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"The proposal is well structured and written. 
The quality of the paper is excellent in terms of novelty and originality.\", \"review\": \"The paper proposed a novel differentiable neural GLCM network which captures the high reference textural information and discard the lower-frequency semantic information so as to solve the domain generalisation challenge. The author also proposed an approach \\u201cHEX\\u201d to discard the superficial representations. Two synthetic datasets are created for demonstrating the methods advantages on scenarios where the domain-specific information is correlated with the semantic information. The proposal is well structured and written. The quality of the paper is excellent in terms of novelty and originality. The proposed methods are evaluated thoroughly through experiments with different types of dataset and has shown to achieve good performance.\", \"rating\": \"9: Top 15% of accepted papers, strong accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}" ] }
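The projection variant discussed in the record above — making the representation independent of the NGLCM path by projecting onto the orthogonal complement of its output — reduces to standard linear algebra. The sketch below is one plausible reading of that step, not the authors' code; the batch-space projector and the small regularizer are assumptions.

```python
import torch

def project_out(f_sem, f_glcm, eps=1e-6):
    """Remove from f_sem the component lying in the column space of f_glcm:
    F_P = (I - G (G^T G + eps I)^{-1} G^T) F, computed over the batch dimension."""
    G = f_glcm                                         # (batch, k) superficial features
    gram = G.T @ G + eps * torch.eye(G.shape[1])       # regularized Gram matrix
    explained = G @ torch.linalg.solve(gram, G.T @ f_sem)  # part explained by G
    return f_sem - explained                           # orthogonal-complement part

sem = torch.randn(32, 10)    # semantic features/logits for a batch
glcm = torch.randn(32, 5)    # texture (GLCM-style) features for the same batch
purged = project_out(sem, glcm)
print((glcm.T @ purged).abs().max())  # close to 0: purged is orthogonal to span(glcm)
```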
HJMsiiRctX
Probabilistic Program Induction for Intuitive Physics Game Play
[ "Fahad Alhasoun" ]
Recent findings suggest that humans deploy the cognitive mechanism of a physics simulation engine to simulate the physics of objects. We propose a framework for bots to deploy similar tools for interacting with intuitive physics environments. The framework employs a physics simulation in a probabilistic way to reason about moves performed by an agent in a setting governed by Newtonian laws of motion. However, methods based on probabilistic programs can be slow in such settings due to their need to generate many samples. We complement the model with a model-free approach to help the sampling procedures become more efficient by learning from experience during game playing. We present an approach where a combination of a model-free method (a convolutional neural network in our model) and a model-based method (probabilistic physics simulation) is able to achieve what neither could alone. In this way, the model outperforms an all-model-free or all-model-based approach. We discuss a case study showing empirical results of the performance of the model on the game of Flappy Bird.
[ "intuitive physics", "probabilistic programming", "computational cognitive science", "probabilistic models" ]
https://openreview.net/pdf?id=HJMsiiRctX
https://openreview.net/forum?id=HJMsiiRctX
ICLR.cc/2019/Conference
2019
{ "note_id": [ "ByxqNOUGxN", "BJe77yJc27", "Skg3AIdYn7", "SJehimfIj7" ], "note_type": [ "meta_review", "official_review", "official_review", "official_review" ], "note_created": [ 1544869937552, 1541168923309, 1541142227597, 1539871652356 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper653/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper653/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper653/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper653/AnonReviewer3" ] ], "structured_content_str": [ "{\"metareview\": \"The paper presents the combination of a model-based (probabilistic program representing the physics) and model-free (CNN trained with DQN) to play Flappy Bird.\\n\\nThe approach is interesting, but the paper is hard to follow at times, and the solution seems too specific to the Flappy Bird game. This feels more like a tech report on what was done to get this score on Flappy Bird, than a scientific paper with good comparisons on this environment (in terms of models, algorithms, approaches), and/or other environments to evaluate the method. We encourage the authors to do this additional work.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Not enough supporting evidence\"}", "{\"title\": \"Unconvincing Results\", \"review\": \"The authors present an algorithm that incorporates deep learning and physics simulation, and apply this algorithm to the game Flappy Bird. The algorithm uses a convolutional network trained on agent play to predict the agent\\u2019s own actions given a sequence of frames. Using this action estimator output as a prior over an action distribution (parameterized by a Dirichlet process), the algorithm iteratively updates the action by rolling out a ground-truth physics simulator of the environment, observing whether this ground-truth simulation yields negative reward, and updating the action accordingly.\\n\\nWhile I find the authors' introductory philosophy largely compelling (it draws inspiration from developmental psychology, learning to model the physical world, and the synthesis of model-based and model-free learning), I have concerns with most other aspects of the paper. Specifically, here are a few points:\\n\\n1) The authors only apply their algorithm to a single game (Flappy Bird), a game that has no previously established benchmarks. In fact, while there is no prior work in the literature on this game (perhaps because it is considered very easy), some unofficial results suggest that it is solvable by a straightforward application of existing methods (see this report: http://cs229.stanford.edu/proj2015/362_report.pdf). The authors do apply one baseline (out-of-the-box DQN) to this game, but the reported scores are suspiciously low, particularly in light of the report linked above. No training curves or additional baselines are shown, and no prior work on this game in the literature exists to compare against.\\n\\n2) The authors\\u2019 algorithm uses privileged information which eliminates the possibility for a fair comparison to baselines. Specifically, their algorithm uses ground-truth state (not just image input), and a ground-truth physics simulator (which should be an enormous advantage). Their one baseline (DQN) does not have either of these sources of privileged information, hence cannot be a fair comparison.\\n\\n3) The authors\\u2019 algorithm is not general-purpose. 
Because the algorithm itself uses a ground-truth environment-specific state, a ground-truth environment-specific simulator, and relies on a \\u201ccrash boolean\\u201d (whether the bird hit a tree) specific to this game, it cannot be applied out-of-the-box on a different environment.\\n\\n4) The authors make some claims that are too strong in light of the reported results. For example, they claim that \\u201cthe performance of the model outperforms all model-free and model-based approaches\\u201d (section 1), while they do not even compare against any model-based baselines (and only a single model-free baseline, DQN, which is not state-of-the-art anymore).\\n\\nOverall, I would recommend the authors choose a game or set of games that has/have established baselines in the literature, come up with a general-purpose algorithm which doesn\\u2019t rely on a ground-truth physics simulator, and more rigorously compare to existing methods.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Official Review: combines a model-free approach and a model-based approach for one game, Flappy Bird. Very few details about program induction or the physics engine.\", \"review\": \"This paper combines a model-free approach and a model-based approach for the game of Flappy Bird. The model-free approach is a CNN on the screen snapshots, in the same fashion as DQN was used for Atari games. The model-based approach is a probabilistic model of Newtonian laws of motion.\\nThe combination of model-free and model-based approaches is definitely one very relevant issue in machine learning, especially in interactive situations, such as reinforcement learning and robotics. The ideas that this paper combines are state-of-the-art and hence is it informative to see how these two particular techniques from each paradigm work together. The most interesting part is that the probabilistic model restricts the CNN restricts the sample action for the model.\\nI\\u2019m not sure how novel this particular combination is, but other systems have also used a model (or a solver) to restrict the possibilities of the exploration. For instance, AlphaGo combines the model-free approach with the rules of the game (at least in the first versions of AlphaGo), and others use Montecarlo Tree Search in a similar vain.\\nThe paper is generally well-written, with some typos occasionally. However, I think that some parts of the process are no well explained, or explained in the wrong order.\\nFor instance, the title and the abstract misled me for quite a while. The title says program induction, and this is then said to be in a fashion similar to Ghahramani 2015, but no further details are given. Is this using Julia? In any case, where is the induction? Later on, it is said that \\u201cthe model learns the distributions\\u201d. but the exact way is completely missing. In other words, the model-based part is not described and encapsulated in a cryptic PHYSICSSIMULATION. In any case, if the model just learn the parameter, I wouldn\\u2019t call this \\u201cprogram induction\\u201d, at least in the same way as Lake and others use it, or in the way it is used in the area of \\u201cinductive programming\\u201d. 
\\nThe game should be describe at the start as \\u201cunwanted collisions\\u201d are meaningless for a reader who doesn\\u2019t know the goal of the game (which is explained in the last paragraph of the paper).\\nThe main problem is that the physics simulation is not learnt and hence rewritten for other games with other physics. This should be solved, in order to see significant progress for benchmarks such as ALE (and properly compare with DQN and many other variants). Perhaps robotics is a better application area, as the physics are always the same (true physics).\\nRegarding the experiments, they are not very conclusive, especially because the difference is not that large and it is only one single game. The application to other games would be needed, especially if the physics is different. Also, the parameters are different (the ms) and I didn\\u2019t understand if these are the best choices, or the possible choices given the computational limitations. In other words, I don\\u2019t know if all techniques are compared in the same compute and data conditions. In Figure 3, the yaxis should be explained.\\n(hum 2017) I imagine that this refers to the authors, otherwise it is a typo.\\nI didn\\u2019t understand the future work. I didn\\u2019t parse the bit: \\u201clearn about rewards structures in games of physics intuition\\u201d. Actually, the final paragraph gives hints about how much assistance and specialisation the approach is given and hence the limit of generalisations to other problems. These limitations should be stated from the beginning.\", \"pros\": [\"Important integration of model-free and model-based approaches.\", \"State-of-the-art techniques\"], \"cons\": [\"I don\\u2019t see program induction, despite being in the title.\", \"Experiments are limited to one game\", \"The physics engine is specialised for this game, and hence the approach is difficult to generalise (automatically) for a range of problems where the models should be different.\"], \"typos\": \"Gharmani -> Ghahramani\\novers -> over\\nbird is required to chose -> the bird is required to choose\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}", "{\"title\": \"Writing and results should be improved\", \"review\": \"This paper solves Flappy bird by combining DQN and probabilistic programming. I think this is in general a good avenue to explore.\\n\\nHowever I found the paper to be poorly written. For example, notation is not properly introduced, there are many mathematical mistakes and typos in the written text and citations. This makes it very hard to understand what is actually going on.\\n\\nIt is also not clear what is the probabilistic program and what are we conditioning on? What is the inference algorithm? Maybe it's useful to expand more on how this ties to the \\\"RL as inference\\\" framework (see e.g. Levine, 2018). It seems like we are doing rejection sampling where the condition is \\\"no collision\\\". 
As a result, I'm not sure whether sampling from the prior is a competitive baseline.\\n\\nFor the DQN experiment, the learning curve seems very noisy, making it unclear whether a fair conclusion can be drawn from only one run (as it appears to be done).\\n\\nThe experiments also feel a bit contrived to make a strong case for probabilistic programming + DQN.\", \"rating\": \"2: Strong rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
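The reviews above read the method as rejection sampling conditioned on "no collision". The sketch below is a minimal illustration of that paradigm, not the paper's implementation: `simulate_step`, `actions`, and the crash flag are hypothetical stand-ins for the game's ground-truth physics simulator and "crash boolean" discussed in the reviews.

```python
import random

def sample_safe_plan(state, actions, simulate_step, horizon=10, max_tries=100):
    """Rejection sampling over short action sequences: keep drawing a plan
    from the prior until its simulated rollout contains no crash.

    simulate_step(state, action) -> (next_state, crashed) stands in for the
    game's ground-truth physics simulator and "crash boolean" above.
    """
    for _ in range(max_tries):
        plan = [random.choice(actions) for _ in range(horizon)]  # draw from the prior
        s, crashed = state, False
        for a in plan:
            s, crashed = simulate_step(s, a)
            if crashed:
                break  # reject: the "no collision" condition is violated
        if not crashed:
            return plan  # accept the first crash-free plan
    return None  # nothing accepted within the sampling budget
```

Sampling whole plans keeps the accept/reject test simple; per-step resampling would be the natural refinement of this scheme.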
SyGjjsC5tQ
Stable Opponent Shaping in Differentiable Games
[ "Alistair Letcher", "Jakob Foerster", "David Balduzzi", "Tim Rocktäschel", "Shimon Whiteson" ]
A growing number of learning methods are actually differentiable games whose players optimise multiple, interdependent objectives in parallel – from GANs and intrinsic curiosity to multi-agent RL. Opponent shaping is a powerful approach to improve learning dynamics in these games, accounting for player influence on others’ updates. Learning with Opponent-Learning Awareness (LOLA) is a recent algorithm that exploits this response and leads to cooperation in settings like the Iterated Prisoner’s Dilemma. Although experimentally successful, we show that LOLA agents can exhibit ‘arrogant’ behaviour directly at odds with convergence. In fact, remarkably few algorithms have theoretical guarantees applying across all (n-player, non-convex) games. In this paper we present Stable Opponent Shaping (SOS), a new method that interpolates between LOLA and a stable variant named LookAhead. We prove that LookAhead converges locally to equilibria and avoids strict saddles in all differentiable games. SOS inherits these essential guarantees, while also shaping the learning of opponents and consistently either matching or outperforming LOLA experimentally.
[ "multi-agent learning", "multiple interacting losses", "opponent shaping", "exploitation", "convergence" ]
https://openreview.net/pdf?id=SyGjjsC5tQ
https://openreview.net/forum?id=SyGjjsC5tQ
ICLR.cc/2019/Conference
2019
{ "note_id": [ "Bke6hKRlgN", "SJg1HSY30m", "S1lFlwbjR7", "S1g8jRe9AX", "r1gygsCu07", "S1l_xkOXa7", "SkgljTPmTQ", "Ske-EpPQTm", "HkeiQRSTnQ", "H1l2AeXanm", "BkeyQHEcnQ" ], "note_type": [ "meta_review", "official_comment", "official_comment", "comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544772020922, 1543439670640, 1543341809476, 1543274142354, 1543199463354, 1541795568131, 1541795224003, 1541795113399, 1541393955226, 1541382355808, 1541190934660 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper652/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper652/Authors" ], [ "ICLR.cc/2019/Conference/Paper652/Authors" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper652/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper652/Authors" ], [ "ICLR.cc/2019/Conference/Paper652/Authors" ], [ "ICLR.cc/2019/Conference/Paper652/Authors" ], [ "ICLR.cc/2019/Conference/Paper652/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper652/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper652/AnonReviewer3" ] ], "structured_content_str": [ "{\"metareview\": \"This paper provides interesting results on convergence and stability in general differentiable games. The theory appears to be correct, and the paper reasonably well written. The main concern is in connections to an area of related work that has been omitted, with overly strong statements in the paper that there has been little work for general game dynamics. This is a serious omission, since it calls into question some of the novelty of the results because they have not been adequately placed relative to this work. The authors should incorporate a thorough discussion on relations to this work, and adjust claims about novelty (and potentially even results) based on that literature.\", \"confidence\": \"3: The area chair is somewhat confident\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Correct and reasonably well-written paper with some concerns on missing literature\"}", "{\"title\": \"Thank you for these references\", \"comment\": \"Thank you for these important references. Unfortunately we were not aware of this literature, especially the monograph of Facchinei and Kanzow and the older work mentioned. Thanks also for linking to the preprint on general games with continuous action sets. This is a great starting point to explore further in this area: apologies for having been unaware of this in the first place. We will be sure to cite and discuss a number of these related works in a revision of the paper.\"}", "{\"title\": \"Response\", \"comment\": \"Thank you for reading through our clarifications in detail, and for providing further comments.\\n\\n- We will be sure to clarify the question of twice/thrice differentiability in a note following Definition 1.\\n\\n- The application of Cauchy-Schwartz goes as follows. Writing $u = -\\\\alpha \\\\chi$ and $v = \\\\xi_0$, we have $-||u|| * ||v|| \\\\leq <u, v>$ by (one half of) the Cauchy-Schwartz inequality. Taking opposites and inverses on both sides, we obtain $1/(||u|| * ||v||) \\\\leq -1/<u, v>$. This is how the negative sign disappears from one fraction to the next. We will add an extra step in the equation to make this clearer.\\n\\nWe will add derivations and check our proofs/equations before submitting a final revision. 
Thanks again for your helpful and thorough review.\"}", "{\"comment\": \"I was disappointed to see that the authors make no reference to the rich literature on continuous games where the positive-definiteness of the Hessian has been explored quite extensively as a stability criterion.\\n\\nThe role of this condition dates back (at least) to the work of Rosen in the 60's (Econometrica, 1965), wherein it was introduced precisely as a stability criterion for the convergence of first-order learning methods in N-player games with continuous action sets.\\n\\nFor a more recent take, the authors might also want to consult the monograph of Facchinei and Kanzow (Annals of OR, 2010): Hessian stability is discussed extensively in Section 5 of said paper (Algorithms), and plays the same role as in the current paper.\\n\\nIt should be noted that the above papers concern a model which is (in at least one sense) even more general than that of the authors, because the admissible actions of a player may depend on the actions of all other players (hence the term \\\"generalized Nash equilibrium problem\\\"/GNEP). Also, even though the above papers concern games with individually convex loss functions, the extension to non-convex games under local stability conditions has also been explored in the literature - see e.g., the recent preprint https://arxiv.org/abs/1608.07310.\\n\\nThe above goes to show that statements like \\\"the only theoretical work on general game dynamics is Symplectic Gradient Adjustment (SGA) by Balduzzi et al. (2018)\\\" are not representative of the state of the art in the subject. The same also holds for the authors' complete lack of references to this literature in Section 2.2 (and, to be clear, the papers mentioned above comprise but a small sample of a very active literature on games with continuous action spaces).\\n\\nTo state things frankly, the field is not a virgin territory only recently discovered, so I would urge the authors to take this into account in their bibliographical policy - the papers above could provide a starting point in that respect, so they should be properly cited and discussed.\", \"title\": \"Related literature on stable equilibria and continuous games\"}", "{\"title\": \"Thanks for the response\", \"comment\": \"Thank you for addressing all the comments.\\n\\n- I am satisfied with the explanation from the authors regarding Theorem D.4, and the revision adequately addresses most of the comments.\\n\\n- Regarding differentiability, it is fine to retain the original definition of differentiable games while your result requires thrice differentiable losses. However, the justification you provided in your response needs to be added to the paper, preferably as a note immediately after Definition 1 to avoid misinterpretation. \\n\\n- Another question regarding Lemma D.7 - towards the end of the proof, the application of Cauchy-Schwartz is not clear. You show that ||-\\\\alphaX(\\\\theta)|| = \\\\alpha^2||X(\\\\theta)|| < c. However, it is not clear how the equation below that holds. Specifically, why does the negative sign in the first fraction disappear, and how does the overall term become >= \\\\alpha||\\\\Psi_0||/||-\\\\alphaX||?\\n\\nIt is recommended that the authors proofread all the proofs and equations and try to use notation and show derivations without making them confusing for the overall presentation. It would also help to number equations for quick reference.\\n\\nOverall the paper presents strong theoretical results with adequate empirical evidence. 
It certainly addresses the important trade-off between convergence and stability in multi-objective settings, and I have updated my score from 6 to 8 to strongly support it for acceptance.\"}", "{\"title\": \"Response to your review\", \"comment\": \"Thank you for this review. Some of the notation could certainly have been made clearer. Each point is addressed below and will be incorporated in a revision of the paper.\\n\\n1. If agent $i$ has parameters $\\\\theta^i_t$ at some fixed time $t$, the \\\"current parameters\\\" are simply defined as $\\\\hat{\\\\theta}^i = \\\\theta^i_t$. The point is that these parameters are updated at each step to minimise a loss function. In LOLA, each agent assumes that the opponent updates their parameters dynamically, *after* their own optimisation step. In reality, they can only see the *current* parameters $\\\\theta^i_t$ instead of the *optimised* (next) parameters $\\\\theta^i_{t+1}$. Noticing this leads to an alternative algorithm, LookAhead.\\n\\n2. The stop-gradient operator is really a *computational* operator rather than a formal, mathematical one. This is known in PyTorch as *detach* and in TensorFlow as *stop_gradient*. This operator acts on functions, setting their gradient to zero while keeping their value intact. In other words, $\\\\bot f(x) = f(x)$ when evaluated at any $x$, while $\\\\nabla (\\\\bot f) (x) = 0$ for any $x$.\\n\\n3. You are absolutely right: the diag operator in Proposition 1 should be defined as taking diagonal *blocks* since we are working with block matrices, not diagonal *entries*. Thank you for noticing this.\"}", "{\"title\": \"Response to your review\", \"comment\": \"Thank you for this review. We will be sure to incorporate your comment in a revision of the paper.\"}", "{\"title\": \"Response to your review\", \"comment\": \"Thank you for your detailed and thoughtful comments. Below we address each point regarding the proof of Theorem D.4. We will also revise the paper to clarify these points and want to emphasise that these details do not affect the validity of our results.\\n\\n1. This is a notational confusion: $u$ lives in $R^d$, not $R^{d-1}$, while $G$ and $M$ are both square $d \\\\times d$ matrices. Indeed $u$ is defined to be an arbitrary vector in $S^{d-1}$, the unit (d-1)-sphere living in Euclidean space $R^d$. This is a standard but confusing convention (see https://en.wikipedia.org/wiki/N-sphere ).\\n\\n2. As above, $S^m$ with $m = d-1$ is the space of unit vectors in $R^d$.\\n\\n3. $S$ and $A$ are the symmetric and antisymmetric parts of $H$ respectively, which we mistakenly failed to define in the paper. The definitions are $S = (H+H^T)/2$ and $A = (H-H^T)/2$, so that $H = S + A$. In the specific example of Remark D.5, $S$ is not positive definite. However, one can easily show that a matrix $H$ is positive semi-definite iff its symmetric part $S$ is positive semi-definite (consider $u^T H u = u^T S u + u^T A u = u^T S u$ by antisymmetry of $A$). By assumption in Theorem D.4, it follows that $S$ is positive semi-definite and thus has a Cholesky decomposition.\\n\\n4. See point 3.\\n\\n5. This is the correct assumption. Regarding your concern about the expression for $H^2$, we have $H = S + A$ but also $H = S^T - A^T$ by symmetry of $S$ and antisymmetry of $A$. It follows that $H^2 = (S^T - A^T)(S + A)$ as claimed.\\n\\nDefinition 1 was chosen to be in line with prior work (Balduzzi et al, ICML 2018), where losses are *twice* differentiable. 
Our results require *thrice* differentiable losses because both Ostrowski and Stable Manifold Theorems require continuous differentiability of the gradient adjustment. Now the gradient adjustment for SOS contains second-order gradients of the losses through the Hessian $H$, so it will only be continuously differentiable if the losses themselves are *thrice* continuously differentiable. We chose to make this extra (very weak) assumption explicit before stating our results, instead of changing the definition of differentiable games to fit our purposes. We are happy to alter the definition if this helps at all.\\n\\nAppendix A provides a more detailed justification for choosing stable fixed points over Nash equilibria as the correct solution concept for gradient-based optimisation in games. Though the example given in the main body is insufficient by itself, the aim was not to show that Nash equilibria are *always* undesirable (this is not true), but to show that optimisation algorithms should not aim/succeed in converging to *all* Nash equilibria. The appendix was referenced for further detail about stable fixed points, but we will further clarify this in the main paper in the final version.\", \"replies_to_minor_comments\": \"1. Agreed: speaking of \\\"policy\\\" is indeed too specific and inappropriate.\\n\\n2. Well-spotted typo! We will correct this.\\n\\n3. Choosing $a$ closer to $0$ means that SOS is forced to agree strongly with the direction of LA, while $a$ close to $1$ gives more flexibility (larger angle between the adjustments). In other words: smaller $a$ means potentially faster convergence, larger $a$ allows for more opponent shaping. Similarly for $b$: the parameter $p$ will be shrunk in a $b$-neighbourhood of fixed points, so larger $b$ ensures convergence in a wider radius while smaller $b$ allows for more opponent shaping. As briefly mentioned in the paper, we found that these hyperparameters were quite robust in experiments overall, though choosing $b = 0.1$ (quite small) for the IPD and Gaussian Mixtures was necessary to guarantee strong opponent shaping in a large region of parameter space. We hope this helps shed some light on the practical implications of the choice of $a$ and $b$, though all theoretical results are indeed independent of this choice.\"}", "{\"title\": \"Interesting paper, strong theoretical results but concerns with the main theorem\", \"review\": \"This paper focuses on the problem of convergence in multi-objective optimisation with differentiable losses. This topic is timely and relevant, given the increasing amount of recent work on multi-objective architectures, e.g. GANs, adversarial learning, multi-agent reinforcement learning. The authors focus on stable fixed points (SFP), rather than Nash equilibria, as the solution concept in the entirety of their analysis. Casting the recently proposed LOLA gradient adjustment into a general matrix form, they diagnose an example where the shaping term in LOLA prevents convergence to SFP. They also find that discarding the shaping term leads to an earlier method (which they name ''LA'') with convergence guarantees in two-player two-action games. However, this also loses the opponent shaping ability of LOLA. To address these limitations, the authors propose SOS, which interpolates between LA and LOLA, and dynamically chooses the interpolation coefficient $p$ so that their adjusted gradient preserves LOLA's shaping ability only to the extent allowed by the constraint of moving in LA's direction. 
The main goal of the paper is to show that SOS converges locally to SFP, and to fixed points only, while avoiding strict saddles. Experiments on synthetic games show that SOS preserves the benefit of LOLA while avoiding its theoretically-predicted issues, and a more complex Gaussian mixture GAN experiment shows SOS is empirically competitive with other gradient adjustment methods.\\n\\nThe main conceptual novelty consists of the dynamic interpolation term to combine advantages of LOLA and LA while avoiding pitfalls of both. The major strength of the paper lies in the clear justification for this interpolation approach. The paper contains strong theoretical results for general differentiable games, and deserves the notice of the ICLR community if valid. However, I have major concerns with the proof of Theorem 2 (i.e. Theorem D.4 in the appendix), which affects the validity of Corollary 3 and Theorem 4. \\n\\nIn the proof of Theorem D.4:\\n1. How does the expression $u^T M^{-1}GMu$ have conformable dimensions, when $G \\\\in R^{d \\\\times d}$ while $u \\\\in R^{d-1}$? Was any assumption made about the matrix $M = (I + \\\\alpha H_d)^{1/2}$?\\n2. In the middle of page 14, a unit vector $u \\\\in S^m$ is defined, but it is not clear what vector space is meant by $S^m$.\\n3. In the second-to-last line of page 14, a quantity $S$ is used but not defined clearly in any preceding part of the proof. Remark D.5 refers to $S$ as the symmetric part of $G$, and asserts that S is not positive definite. If the quantity $S$ used in the proof is the same non-PD quantity, then $S$ does not have a Cholesky factorisation. So how is the Cholesky decomposition conducted at the end of page 14?\\n4. In the first line of page 15, a quantity $A$ is used but not defined anywhere else in the entire paper. \\n5. From the subsequent line, it appears to be the anti-symmetric part of H. Is this a correct assumption? If so, $H^2$ is not $(S^T - A^T)(S + A)$. If it is replaced with the correct form, the whole quantity does not compute to be positive, or becomes meaningless.\\n\\nAs Theorem 2 is the crux of all the theoretical advancement presented in the paper, clarification of the above correctness questions is very important for clear acceptance of this work.\\n\\nWhile Definition 1 precisely defines differentiable games to have *twice* differentiable losses, why do the authors assume *thrice* differentiable losses at the start of Section 4?\\n\\nIn Section 2.2, the authors make a broad statement that ''Nash equilibria cannot be the right solution concept for multi-agent learning.'' They provide one example where Nash is undesirable (L^1 = L^2 = xy). However, since this example can be viewed as a fully cooperative game with joint loss L = 2xy, it does not support the broader statement that Nash is undesirable in all games. Because this statement directly motivates the authors to focus on stable fixed points, rather than Nash, as the solution concept in their subsequent analysis, it is very important to provide better justification for the claim.\\n\\nMinor comments:\\n1. Under Proposition 1, the authors suddenly speak of ''...the policy being optimal''. Since the authors' work pertains to general multi-objective settings, not solely multi-agent reinforcement learning, the word ''policy'' sounds strange in context.\\n2. The statement of Proposition B.1, and the concluding line of the derivation, left out a coefficient $\\\\alpha$ that is present in Proposition 1 in the main text.\\n3. 
While the authors claim and prove independence of theoretical results from the choice of a and b, are there any practical implications in terms of performance or convergence?\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Review for Stable Opponent Shaping in Differentiable Games\", \"review\": \"This paper studies differential games, in which there are n players and each has a loss function. The loss function depends on all parameters. Differential games appear naturally in GANs, where the two players are the generator and the discriminator. The authors first argue why Nash equilibria should not be the right solution concept for multi-agent learning and propose \\u201cstable fixed points\\u201d (SFP) as a possible solution concept. The authors then show the LOLA algorithm (Foerster et al. (2018)) fails to preserve fixed points by explicitly constructing an instance (the tandem game). In fact, in the tandem game, LOLA will converge to sub-optimal scenarios with worse losses for both agents. The authors then show that a known algorithm, LookAhead (Zhang & Lesser (2010)), has local convergence to SFP. However, LookAhead does not have the capacity to exploit opponent dynamics and encourage cooperation. To alleviate this issue, the authors propose a new algorithm, SOS, which can be seen as an interpolation between LOLA and LookAhead, characterized by a parameter p. The authors also discuss how to choose the parameter p and prove that SOS will have local convergence to SFP and can avoid strict saddles.\\n\\nOverall, this paper is well-written and develops algorithms for a well-motivated problem. Although I am not an expert on this topic, the paper seems interesting to me.\\n\\nMinor comment: First paragraph in Section 2.2, \\\"It is highly undesirable to converge to Nash in this game\\\" -> Nash equilibria\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}", "{\"title\": \"Stable Opponent Shaping in Differentiable Games\", \"review\": \"This paper introduces a new algorithm for differential games, where the goal is to optimize several objective functions simultaneously in a game of n players. The proposed algorithm is an interpolation between LOLA and LookAhead, and it preserves both the stability from LOLA and the \\\"convergence to fixed point\\\" property of LookAhead. The interpolation parameter is chosen in Section 3.2.\\n\\nThe paper looks novel, though some notations are not completely clear to me. For example, the definitions of the \\\"current parameters\\\" \\\\hat{\\\\theta}_1 and \\\\hat{\\\\theta}_2 in Section 3.1, and the stop-gradient operator. Also, how is the diag operator in Proposition 1 defined? Normally it only represents the diagonal entries but here it might represent the diagonal blocks.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"1: The reviewer's evaluation is an educated guess\"}" ] }
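For readers following the Lemma D.7 exchange in the thread above, the step where the negative sign "disappears" can be written out in full. With $u = -\alpha\chi$ and $v = \xi_0$ as in the authors' reply, and under the assumption (implicit in that reply, since a reciprocal is taken) that $\langle u, v\rangle < 0$:

```latex
% Worked version of the Cauchy-Schwarz step from the Lemma D.7 discussion.
% Assumption (not stated explicitly in the thread): <u, v> < 0.
\[
  -\lVert u\rVert\,\lVert v\rVert \;\le\; \langle u, v\rangle \;<\; 0
  \qquad \text{(Cauchy--Schwarz)}
\]
% For negatives a <= b < 0, taking reciprocals reverses the inequality,
% so with a = -||u|| ||v|| and b = <u, v>:
\[
  \frac{1}{-\lVert u\rVert\,\lVert v\rVert} \;\ge\; \frac{1}{\langle u, v\rangle}
  \quad\Longrightarrow\quad
  \frac{1}{\lVert u\rVert\,\lVert v\rVert} \;\le\; -\frac{1}{\langle u, v\rangle}.
\]
```

Both sign changes happen in the second display, which is exactly where the fraction flips in the proof.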
BJxssoA5KX
Bounce and Learn: Modeling Scene Dynamics with Real-World Bounces
[ "Senthil Purushwalkam", "Abhinav Gupta", "Danny Kaufman", "Bryan Russell" ]
We introduce an approach to model surface properties governing bounces in everyday scenes. Our model learns end-to-end, starting from sensor inputs, to predict post-bounce trajectories and infer two underlying physical properties that govern bouncing - restitution and effective collision normals. Our model, Bounce and Learn, comprises two modules -- a Physics Inference Module (PIM) and a Visual Inference Module (VIM). VIM learns to infer physical parameters for locations in a scene given a single still image, while PIM learns to model physical interactions for the prediction task given physical parameters and observed pre-collision 3D trajectories. To achieve our results, we introduce the Bounce Dataset comprising 5K RGB-D videos of bouncing trajectories of a foam ball to probe surfaces of varying shapes and materials in everyday scenes including homes and offices. Our proposed model learns from our collected dataset of real-world bounces and is bootstrapped with additional information from simple physics simulations. We show on our newly collected dataset that our model out-performs baselines, including trajectory fitting with Newtonian physics, in predicting post-bounce trajectories and inferring physical properties of a scene.
[ "intuitive physics", "visual prediction", "surface normal", "restitution", "bounces" ]
https://openreview.net/pdf?id=BJxssoA5KX
https://openreview.net/forum?id=BJxssoA5KX
ICLR.cc/2019/Conference
2019
{ "note_id": [ "BJlzFRDZeE", "rJgeNo_4yV", "Hyl5ady1JV", "rkgYYukJ1N", "S1lJ8Ylo0Q", "SJe2hHXj6Q", "H1gfIEmspm", "r1eXfEXiaX", "ByekAQQo6Q", "SJloGnWRnQ", "HkegBTQcnm", "ryeBDwsMs7" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544810106126, 1543961384423, 1543596225768, 1543596161089, 1543338311440, 1542301107603, 1542300746325, 1542300682945, 1542300614804, 1541442579372, 1541188919620, 1539647325305 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper651/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper651/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper651/Authors" ], [ "ICLR.cc/2019/Conference/Paper651/Authors" ], [ "ICLR.cc/2019/Conference/Paper651/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper651/Authors" ], [ "ICLR.cc/2019/Conference/Paper651/Authors" ], [ "ICLR.cc/2019/Conference/Paper651/Authors" ], [ "ICLR.cc/2019/Conference/Paper651/Authors" ], [ "ICLR.cc/2019/Conference/Paper651/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper651/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper651/AnonReviewer3" ] ], "structured_content_str": [ "{\"metareview\": \"This paper proposes a novel dataset of bouncing balls and a way to learn the dynamics of the balls when colliding. The reviewers found the paper well-written, tackling an interesting and hard problem in a novel way. The main concern (that I share with one of the reviewers) is about the fact that the paper proposes both a new dataset/environment *and* a solution for the problem. This made it difficult for the authors to provide baselines to compare to. The ensuing back and forth had the authors relax some of the assumptions from the environment and made it possible to evaluate with interaction nets.\\n\\nThe main weakness of the paper is the relatively contrived setup that the authors have come up with. I will summarize some of the discussion that happened as a result of this point: it is relatively difficult to see how this setup that the authors have created and studied (esp. knowing the groundtruth impact locations and the timing of the impact) can generalize outside of the proposed approach. There is some concern that the comparison with interaction nets was not entirely fair.\\n\\nI would recommend the authors redo the comparisons with interaction nets in a careful way, with the right ablations, and understand if the methods have access to the same input data (e.g. are interaction nets provided with the bounce location?). \\n\\nDespite the relatively high average score, I think of this paper as quite borderline, specifically because of the issues related to the setup being too niche. Nonetheless, the work does have a lot of scientific value to it, in addition to a new simulation environment/dataset that other researchers can then use. Assuming the baselines are done in a way that is trustworthy, the ablation experiments and discussion will be something interesting to the ICLR community.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"metareview\"}", "{\"title\": \"hesitantly converged to 6\", \"comment\": \"Dear AC,\\n\\nThanks for following up about this. I appreciate the authors taking the time to show the follow-up analysis in their recent response. I'm really torn about this one. 
I've raised my rating to a 6 mainly because the dataset and ablation experiments are interesting, but not without feeling a bit uneasy. I do not want to be unreasonably critical, so here I'll detail my thoughts:\\n\\nI would not necessarily recommend this paper to colleagues who work on machine learning methods for inferring physical properties, in large part because the model is specific to the authors' particular (and very unusual) paradigm, which entails:\\n\\n1) A large simulated dataset and a small real-world dataset\\n2) One single object of interest bouncing multiple times in each real-world room\\n3) Ground-truth impact locations (yet no other physical parameters) known for the real-world dataset\\n4) Ground-truth knowledge of when an impact occurs (and specificity of the model to this time-point)\\n\\nThe authors' model can only work in a setup where all of these conditions are met, and it's not easy to imagine how it could be applied to a setup where any one of these conditions is not met. This is not a common setup, and in fact has almost surely never been studied before. Consequently, the authors introduce a dataset of their own with these properties.\\n\\nFurthermore, the authors' model is designed to infer some unknown physical parameters (collision normal and COR) but is not designed to do more common kinds of inference, like rollouts, video prediction, or estimates of future state/events.\\n\\nThus I don't think the model in this paper can be of broad interest. I would encourage the authors to try to make their model more general, reduce their assumptions, and explore other datasets.\\n\\nAlso, as mentioned before, the comparison to existing baselines was nonexistent. In their most recent reply the authors quote results from Interaction Networks (IN), but I find these surprising. I'm in no position to question the correctness of the authors' experiments, but from personal experience and familiarity with others' successful applications of INs to very similar environments, I'm quite shocked that the authors found the IN baseline to perform so poorly. Some possible explanations for this are:\\n1) The authors do not indicate if they provided the IN with the bounce surface location. If they didn't, that would explain the poor IN results.\\n2) It also seems from their description that the authors didn't add noise to the inputs, which is another important thing to do for accurate rollouts when running IN.\\n\\nIn short, the reported IN results do not alleviate my uncertainty about how well the authors' model actually performs.\\n\\nFinally, to reply to the authors' usage of the RANSAC-estimated point cloud center in the collision frame as the collision point: This then makes the model specific to a spherical object, since the impact location of a non-spherical object will depend on the object's orientation as well as its center-point. So while it reduces one concern about model generality, it raises another. I realize that the aim of the model is to estimate surface properties, but this is a serious concern for potential users.\\n\\nDespite all of these strong reservations about the model, as mentioned above I do think the dataset will be useful and the ablation experiments are interesting, and maybe those justify acceptance to ICLR.\", \"tldr\": \"To answer your questions directly, I do not find the new baseline comparison convincing, and I still find the problem quite \\\"niche\\\". 
But upon further consideration, given the authors will release a useful dataset and do have some interesting points in their ablation experiments which could apply to other models, I'll raise my rating to a 6. I respect the AC's discretion about acceptance, whatever it may be.\"}", "{\"title\": \"(cont.) Relaxed impact location assumption, Comparison to Battaglia et al. and discussion\", \"comment\": \"2) \\u201cLots of hand-holding, lack of generality. Giving the ground-truth bounce position on the real dataset is a serious assumption.\\u201d\\n While we make the assumption of knowing the impact spatial location, it can be automatically estimated. We demonstrate this by using the RANSAC-estimated point cloud center in the collision frame as the collision point and retraining the best model (Row 5 Table 1: \\u201cTrain core, Fix traj. enc.\\u201d).\\n                             Dist. (Mean, Std)    % Normals (Mean, Std)    COR Median Abs Err (Mean, Std)\\nKnown Impact Loc             21.9, 0.006          27.06, 0.09              0.158, 0.01\\nAuto. Estimated Impact Loc   21.7, 0.009          27.58, 0.04              0.153, 0.01\", \"the_assumption_of_impact_location_simply_allowed_us_to_create_the_experimental_setup_to_investigate_the_main_goal_of_our_work\": \"estimation of physical parameters by learning from observed bounces.\\n\\n\\u201cConfidence intervals should be in the paper itself in Tables 1 and 2\\u201d\\n\\n\\u201cTraining curves should be plotted\\u201d\\n We will add these to the final version of the paper, if accepted. While the PDF revision deadline has passed, we are hoping to incorporate all discussions with the reviewers.\", \"discussion_about_the_problem\": \"\", \"point_1\": \"We are agreed \\u201cnot niche\\u201d.\", \"point_2\": \"We hope this concern is now addressed by our removal of the assumption of knowing the spatial location of the impact (in the experiment above) by training with automatically estimated bounce positions.\", \"points_3_and_4\": \"Collision observation, detection and simulation of the non-rigid surfaces that are encountered in everyday scenes remain a challenging task [a] and the subject of active research. Further, in our setting, we have the added challenge of approximate and noisy estimates of the scene geometry; collision detection and processing with uncertainty has only recently been considered [b], and there are no standardized codes or methods in the deformable setting. We show post-bounce predictions in Figures 1 and 3 and will release videos of our post-bounce predictions (ICLR\\u2019s OpenReview did not have a way for us to submit the videos anonymously as supplemental material). Note that rollouts require collision detection, which is challenging as previously noted. Collision detection and rollouts are interesting for future work.\", \"point_5\": \"As discussed in the first paragraph in the introduction, modeling single-object bounces has potential application in augmented reality for dynamic object compositing. Handling multiple interacting objects is interesting future work, but the single object setting is a first and necessary step towards it.\\n\\n[a] Collision Detection for Deformable Objects. M. Teschner, S. Kimmerle, G. Zachmann, B. Heidelberger, L. Raghupathi, A. 
Fuhrmann, M.-P. Cani, F. Faure, N. Magnenat-Thalmann, and W. Strasser. Eurographics 2004, State-of-the-Art Report.\\n[b] Fast and Bounded Probabilistic Collision Detection for High-DOF Trajectory Planning in Dynamic Environments. C. Park, J.S. Park, and D. Manocha. IEEE Transactions on Automation Science and Engineering. 2018.\"}", "{\"title\": \"Relaxed impact location assumption, Comparison to Battaglia et al. and discussion\", \"comment\": \"We thank the reviewer for their response and useful suggestions. We have conducted additional experiments that specifically address the major concerns of the reviewer. We compare the PIM model to Interaction Networks (Battaglia et al., 2016) and also relax the assumption of knowledge of impact spatial location. We show that our PIM model significantly outperforms Interaction Networks on synthetic data and that a model trained without knowledge of the ground-truth spatial location of impact performs as well as our proposed model. First, we would like to address the high-level concerns of the reviewer.\\n\\nIn the context of applications, it is true that our proposed model can be applied to a single ball collision under knowledge of impact time. However, as a research problem, we believe that this setting and dataset expose numerous important challenges that currently hinder progress in this direction (discussed in the Introduction). In fact, we believe that what seems like a niche problem is actually a challenging, unaddressed, elementary problem in modeling real-world collisions.\", \"we_now_present_experimental_results_to_address_the_mentioned_concerns\": \"1) \\u201cAlso on the simulated data you could compare to state-based prediction models, such as (Battaglia et al., 2016) that you reference.\\u201d\\n Thank you for the suggestion. Comparing PIM to Interaction Networks (IN) is indeed an interesting experiment. We have used the simulation data from our experiments to train two versions of the IN model (IN-positions and IN-positions-velocity, described below) using the available codebase. Forward prediction error at 0.1s post-bounce (\\u201cDist.\\u201d in the main text) from the simulation-based PIM (as described in \\u201cPretraining the PIM\\u201d in Section 3.1) and the two IN models are as follows:\", \"center_based_pim\": \"11.72cm (stdev: 0.009)\", \"pointnet_based_pim\": \"12.87cm (stdev: 0.005)\", \"in_position_velocity\": \"State vector of the object at t=1 contains [x, y, z, v_x, v_y, v_z] (used in the original Interaction Networks paper)\", \"in_pos1_pos2_pos3\": \"State vector of the object at t=3 contains [x1, y1, z1, x2, y2, z2, x3, y3, z3]\\n\\nResults continued below\"}", "{\"title\": \"Difficult to Evaluate and Lack of Generality\", \"comment\": \"Dear Authors,\\n\\nThank you for your reply. However, many of my previous comments still apply (also, the paper itself looks to have not been revised much, if at all).\\n\\nSpecifically, my main concerns are these:\\n\\n1) No comparison to existing baselines. There are many baselines against which you could compare. For example, you can compare to video prediction baselines on the simulation data (where you can use the simulation renderer to render trajectories). Also on the simulated data you could compare to state-based prediction models, such as (Battaglia et al., 2016) that you reference. Ultimately, as a reader I have no idea how well your model actually models physics. 
Given some of the trajectories in Figure 12 it is clear that the model does in fact make mistakes, so this must be compared to existing baselines (even if they don't use exactly the same training paradigm) to verify that it is actually learning the physics well.\\n\\n2) Lots of hand-holding, lack of generality. Giving the ground-truth bounce position on the real dataset is a serious assumption. For more general data, this could be a highly non-trivial preprocessing step and limits the generality of the model. Similarly for ground-truth knowledge of the impact time.\\n\\nAlso, a few of my minor comments remain unresolved:\\n1) Confidence intervals should be in the paper itself in Tables 1 and 2, preferably as 90% or 95% confidence intervals.\\n2) Training curves should be plotted (at least in the supplementary material); curves corresponding to the tables would be good to see. The shape of the training curves would indicate how fast the model learns and whether the fine-tuning asymptotes or results in seed-dependent instability (which is common for fine-tuning physics prediction models).\\n\\nStepping back a bit, this paper addresses a very niche problem, because the paradigm involves:\\n1) A large simulated dataset and a small real-world dataset (not niche)\\n2) Ground-truth impact locations yet no other physical parameters for the real-world dataset (very niche)\\n3) Ground-truth knowledge of when the impact occurs in time, and specificity of the model to this time-point (very niche)\\n4) Your aim is to infer some unknown physical parameters without actually being able to do rollouts or video prediction (somewhat niche)\\n5) The only environment is a single object bouncing (very niche).\\n\\nYour model is also very specific to this particular paradigm. So without strong results (which, given no benchmarking with existing methods, the reader cannot evaluate) I'm struggling to see how this paper could be of interest to the wider ICLR audience.\\n\\nWhile I appreciate your reply, I cannot in good conscience give a rating higher than 5.\"}", "{\"title\": \"Author Response for AnonReviewer3\", \"comment\": \"We thank the reviewer for their appreciation of our work. We address the reviewer\\u2019s concerns here:\\n\\n1) \\u201cIt would be more interesting if the dataset was created using multiple types of probe objects. Currently, it is only a ball.\\u201d\\n We agree that the eventual goal for research in this direction should be to generalize to multiple types of probe objects. We discuss this further in the response to the review from AnonReviewer2. (https://openreview.net/forum?id=BJxssoA5KX&noteId=ByekAQQo6Q )\\n\\n2) \\u201cThe length of the groundtruth and predicted trajectories might be different. How is the difference computed?\\u201d\\n The evaluation is not dependent on the length of the trajectories recorded. The distance between the predicted center and the ground-truth center is computed at timestep 10 (0.1 seconds post-bounce). All trajectories in the dataset have length greater than 10 timesteps.\\n\\n3) \\u201cThe impact location (x,y) corresponds to multiple locations in 3D. Why not using a 3D point as input? It seems the 3D information is available for both the real and synthetic cases.\\u201d\\n In the physics model, the 3D collision point is currently used since the point cloud is represented with the collision as origin. 
In the VIM model, using the 3D points is similar to using 2D (x,y) points since we eventually need to extract visual features from 2D input images.\\n\\n4) \\u201cWhy is it non-trivial to use a deconvolution network for predicting the output point cloud trajectory?\\u201d\\n There is very limited work on generating point clouds from embeddings. Integrating a deconvolution model would have added an additional obstacle to an already challenging problem. Furthermore, it would make localizing the errors more difficult.\", \"some_relevant_literature_that_demonstrate_the_challenges_of_generating_point_clouds\": \"[1] Achlioptas, Panos, et al. \\\"Representation learning and adversarial generation of 3D point clouds.\\\" arXiv preprint arXiv:1707.02392 (2017).\\n[2] Insafutdinov, Eldar, and Alexey Dosovitskiy. \\\"Unsupervised Learning of Shape and Pose with Differentiable Point Clouds.\\\" arXiv preprint arXiv:1810.09381 (2018).\\n[3] Lin, Chen-Hsuan, Chen Kong, and Simon Lucey. \\\"Learning efficient point cloud generation for dense 3D object reconstruction.\\\" arXiv preprint arXiv:1706.07036 (2017).\\n[4] Achlioptas, Panos, et al. \\\"Learning Representations and Generative Models for 3D Point Clouds.\\\" (2018).\\n\\n5) \\u201cThe length of the input trajectory can vary, but it seems the proposed architecture assumes a fixed-length trajectory. I am wondering how it handles a variable-length input.\\u201d\\n We observed that 10 frames before and after the collision contain sufficient information. Therefore, we used these 20 frames in the proposed model. For videos where more frames are available, we use only the 10 frames before and after the collision. \\n\\n6) \\u201cHow is the bounce location encoded in VIM?\\u201d\\n The bounce location is used to index the feature map which is the output of the VIM. We present this in the Subsection 3.2 \\u201cTraining\\u201d paragraph - $\\\\rho_{x,y}$ is obtained by indexing the output $\\\\mathcal{V}(I)$.\\n\\n7) \\u201cI don't see any statistics about the objects being used for data collection. That should be added to the paper.\\u201d\\n Thank you for the suggestion. That would indeed be informative. We shall add this to the final version of the paper since this would require some additional effort to label the objects.\"}", "{\"title\": \"Author Response for AnonReviewer1\", \"comment\": \"We thank the reviewer for their time and appreciation of our work.\"}", "{\"title\": \"(continuation of) Author Response for AnonReviewer2\", \"comment\": \"3) \\u201cThe authors could evaluate their model with respect to pixel loss (after ground-truth rendering) and compare to a video prediction algorithm (such as PredNet by Lotter, Kreiman, & Cox, 2016).\\u201d\\n The goal of our work was to investigate whether real-world data can be used to learn models of physics and also simultaneously estimate physical parameters in real-world scenes. We do not, however, deal with the realistic rendering of the predicted outputs from the learned physics model. Therefore, we cannot directly compare to future-prediction models like PredNet [Lotter et al.], since we do not predict the pixels in the future frames. \\n\\n4) \\u201cI would love to see training curves with errorbars of the models on the most important metrics (e.g. Dist and COR Median Absolute Error)\\u201d\\n We have now computed the error bars for the Forward prediction distance error and COR Median absolute error over multiple training/testing runs with different initializations. 
These results confirm the conclusions of our ablative study.\\nExperiment                     Dist (Mean, Std)    COR Med Abs Err (Mean, Std)\\nCenter based                   28.2, 0.005         0.173, 0.01\\nFix core and traj. enc.        38.4, 0.008         0.258, 0.008\\nTrain core and traj. enc.      24.7, 0.004         0.169, 0.006\\nTrain core, Fix traj. enc.     21.9, 0.006         0.158, 0.01\", \"clarifications\": \"1) \\u201cHow was the PointNet trajectory encoder trained? Were gradients passed through from the PIM? Was the same network used for both the simulation and real-world data?\\u201d\\n Yes, the PointNet trajectory encoder is actually part of the PIM in our proposed approach. The gradients for the trajectory encoder are computed with respect to the objectives mentioned in Equations (2) and (3). \\nYes, the same network is used for simulation and real-world data.\\n\\n2) \\u201cThe performance of the center-based model in Table 1 seems surprisingly low. Is the VIM at fault? Or is the sphere-fitting sub-optimal?\\u201d\\n In theory, if accurate centers and point clouds are available, both models should perform similarly. The sphere-fitting in our data is sub-optimal due to the noise in the stereo-depth estimates. We believe that this highlights the advantage of using a PointNet-based model to avoid dealing with hand-crafted estimates of centers.\\n\\n[a] Jui-Hsien Wang, Rajsekhar Setaluri, Dinesh K. Pai, and Doug L. James. Bounce maps: An improved restitution model for real-time rigid-body impact. ACM Transactions on Graphics (Proceedings of SIGGRAPH 2017), 36(4), July 2017. doi: https://doi.org/10.1145/3072959.3073634.\"}", "{\"title\": \"Author Response for AnonReviewer2\", \"comment\": \"We thank the reviewer for their feedback. We address the concerns of the reviewer below.\\n\\n1) \\u201cThe authors are introducing both a new training paradigm (to my knowledge unused in the literature) and a new model, and without any existing baselines to compare against I find it a bit difficult to understand how well the model works.\\u201d\\n We agree that due to the novelty of our training paradigm, model and data, there is a lack of existing literature/baselines to compare against. This is an unavoidable challenge we face. However, in order to better provide context for the performance of our models, we have conducted extensive quantitative and qualitative experiments and compared to relevant baselines (as also noted by other reviewers) including: (a) experiments dissecting the proposed model to localize the performance gains obtained due to the PointNet trajectory encoders; (b) training the PIM on real-world data; and (c) a ground-truth normals based experiment for reference. Overall, we hope that our proposed approach can also serve as a useful baseline for future work in this direction. \\n\\n2) \\u201cOverall, the authors\\u2019 model is somewhat complicated and not as general as it initially seems. To justify this complication I would like to see more convincing results and benchmarking or application to more than one single dataset (e.g. non-spheres bouncing).\\u201d\\n As previously noted by the reviewer, prior work along the lines of estimating physical parameters and learning models of physics from real-world data is extremely scarce. Therefore, there are no relevant datasets that can directly be used to benchmark our approach, which also emphasizes the need for such a dataset. 
\\nIn the nascent stages of this field, we believe that addressing the problem with a spherical probe object provides a good starting point. Non-spherical probe objects introduce additional complexity, making exploration in this direction more challenging. For example, results in [a] show how much the physical properties vary across the surface of an object. The controlled setup of a spherical probe object ensures that the outcomes of bounces are dependent only on the physical properties of one object. However, we agree that non-spherical probe objects could definitely be an interesting and essential next step to pursue as future work.\", \"specific_concerns\": \"1) \\u201cA link to an open-sourced version of the dataset is not available\\u201d\\n The double-blind submission of ICLR constrains the ability for us to provide the dataset publicly without revealing our identity. The data will be made publicly available with the final version of the paper. \\n\\n2) \\u201cThe authors claim in multiple places that the model is trained end-to-end, but this does not seem to be the case. Specifically, the PIM is pre-trained on an auxiliary dataset from simulation. The trajectory encoder also seems to be pre-trained (though I could be wrong about that, see my question below). Furthermore, there is a bit of hand-holding: The PIM uses ground-truth state for pre-training, and the VIM gets the ground-truth bounce location. In light of this, the model seems a lot less general and end-to-end than implied in the abstract and introduction.\\u201d\\n The PIM (including the trajectory encoder) is pretrained initially using simulation data. The VIM+PIM pipeline is then finetuned in an end-to-end manner on the real data. In the abstract/introduction, we refer to this end-to-end training. It is true that the PIM uses simulation parameters in the pretraining phase and the VIM uses the ground-truth location to index the feature maps. However, the training is still \\u201cend-to-end\\u201d in the conventional usage of the term, since the model is fully differentiable and the gradients for the objective in Equation 3 are computed w.r.t. all the parameters of both the VIM and PIM. This is analogous to pretraining on ImageNet and finetuning with added parameters for other tasks, which is also referred to as end-to-end training. \\n\\n(Continued below)\"}", "{\"title\": \"Might be Good but Difficult to Evaluate: No Comparison to Existing Methods.\", \"review\": \"The authors present both a dataset of videos of a real-world foam ball bouncing and a model to learn the trajectory of the ball at collision (bounce) points in these videos. The model is comprised of a Physics Inference Module (PIM) and a Visual Inference Module (VIM). The PIM takes in both a vector of physical parameters (coefficient of restitution and collision normal) and a point cloud representation of the pre-bounce trajectory, and produces a point cloud representation of the post-bounce trajectory (or, rather, an encoded version of such). The VIM takes in an image and ground-truth bounce location and produces the physical parameters of the surface at that location.\\n\\nI find the paper well-written and clear. The motivation in the introduction is persuasive and the related work section is complete. However, the authors are introducing both a new training paradigm (to my knowledge unused in the literature) and a new model, and without any existing baselines to compare against I find it a bit difficult to understand how well the model works. 
\\n\\nOverall, the authors\\u2019 model is somewhat complicated and not as general as it initially seems. To justify this complication I would like to see more convincing results and benchmarking or application to more than one single dataset (e.g. non-spheres bouncing).\", \"here_are_some_specific_concerns\": \"1) I could not find a link to an open-sourced version of the dataset(s). Given that the authors emphasize the dataset as a main contribution of the paper, they should open-source it and make the link prominent in the main text (apologies if I somehow missed it).\\n\\n2) The authors claim in multiple places that the model is trained end-to-end, but this does not seem to be the case. Specifically, the PIM is pre-trained on an auxiliary dataset from simulation. The trajectory encoder also seems to be pre-trained (though I could be wrong about that, see my question below). Furthermore, there is a bit of hand-holding: The PIM uses ground-truth state for pre-training, and the VIM gets the ground-truth bounce location. In light of this, the model seems a lot less general and end-to-end than implied in the abstract and introduction.\\n\\n3) No comparison to existing baselines. I would like to see how the authors\\u2019 model compares to standard video prediction algorithms. The authors could evaluate their model with respect to pixel loss (after ground-truth rendering) and compare to a video prediction algorithm (such as PredNet by Lotter, Kreiman, & Cox, 2016). Given that the authors\\u2019 method uses some extra \\u201cprivileged\\u201d information (as described in point 2), it should far out-perform algorithms that train only on video data, and such a result would strengthen the paper a lot.\\n\\n4) Table 1 is not a very convincing demonstration of performance. Regardless of baselines, the table does not show confidence intervals. I would love to see training curves with errorbars of the models on the most important metrics (e.g. Dist and COR Median Absolute Error).\", \"i_also_was_confused_about_a_couple_of_things\": \"1) How was the PointNet trajectory encoder trained? I did not see this mentioned anywhere. Were gradients passed through from the PIM? Was the same network used for both the simulation and real-world data?\\n\\n2) The performance of the center-based model in Table 1 seems surprisingly low. The center-based model should be as good as the Train core, Fix traj. enc. model, since it has access to the ball\\u2019s position. Why is it worse? Is the VIM at fault? Or is the sphere-fitting sub-optimal? How does it compare on the simulated data with ground truth physical parameters?\\n\\n3) Lastly, the color-scheme is a bit confusing. It looks like the foam ball in the videos was rainbow-colored. However, in the model outputs in trajectory figures time is also rainbow-colored. This was initially a bit confusing. Perhaps grayscale for the model outputs would be clearer.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"a well evaluated solution to an interesting and challenging problem\", \"review\": \"This paper presents a method for inferring physical properties of the world (specifically, normals and coefficients of restitution) from both visual and dynamic information. Objects are represented as trajectories of point clouds used under an encoder/decoder neural network architecture. 
Another network is then learned to predict the post-bounce trajectory representation given the pre-bounce trajectory representation and the surface parameters. This is used both to predict the post-bounce trajectory (with a forward pass) and to estimate the surface parameters through an optimization procedure. This is coupled with a network which attempts to learn these properties from visual cues as well. This model can be either pretrained and fixed or updated to account for new information about a scene.\\n\\nThe proposed model is trained on a newly collected dataset that includes a mixture of real sequences (with RGB, depth, surface normals, etc.) and simulated sequences (additionally with physical parameters) generated with the help of a physics engine. It is compared with a number of relevant baseline approaches and ablation models. The results suggest that the proposed model is effective at estimating the physical properties of the scene.\\n\\nOverall the paper is well written and thoroughly evaluated. The problem is interesting and novel, the collected dataset is likely to be useful, and the proposed solution to the problem is reasonable.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Great work; Important and interesting problem; Missing some details\", \"review\": \"Paper summary:\\nThe paper proposes to predict bouncing behavior from visual data. The model has two main components: (1) Physics Inference Module, which predicts the output trajectory from a given incoming trajectory and the physical properties of the contact surface. (2) Visual Inference Module, which predicts the surface properties from a single image and the impact location. A new dataset called the Bounce Dataset is proposed for this task.\", \"paper_strengths\": [\"The paper tackles an interesting and important problem.\", \"The data has been collected in various real scenes.\", \"The idea of training the physics part of the network with synthetic data and later fine-tuning it with real images is interesting.\", \"The experiments are thorough and well-thought-out.\"], \"paper_weaknesses\": [\"It would be more interesting if the dataset was created using multiple types of probe objects. Currently, it is only a ball.\", \"It is not clear how the evaluation is performed. For instance, the length of the groundtruth and predicted trajectories might be different. How is the difference computed?\", \"The impact location (x,y) corresponds to multiple locations in 3D. Why not using a 3D point as input? It seems the 3D information is available for both the real and synthetic cases.\", \"Why is it non-trivial to use a deconvolution network for predicting the output point cloud trajectory?\", \"The length of the input trajectory can vary, but it seems the proposed architecture assumes a fixed-length trajectory. I am wondering how it handles a variable-length input.\", \"How is the bounce location encoded in VIM?\", \"I don't see any statistics about the objects being used for data collection. That should be added to the paper.\", \">>>>> Final score: The authors have addressed my concerns in the rebuttal. I believe this paper tackles an interesting problem, and the experiments are good enough since this is one of the first papers that tackle this problem. 
So I keep the initial score.\"], \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
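The reviews above describe using the learned forward physics model (the PIM) both for prediction and, via an optimization procedure, for estimating surface parameters. As a hedged illustration of that second use, here is a minimal PyTorch-style sketch; `pim`, the parameter shape, and the hyper-parameters are hypothetical stand-ins, not the paper's actual interface:

```python
import torch

def estimate_surface_params(pim, pre_traj, post_traj, n_steps=200, lr=0.05):
    """Invert a frozen forward model: find surface parameters (e.g. normal,
    restitution) whose predicted post-bounce trajectory matches the observed one.
    `pim` is assumed to map (pre_traj, params) -> predicted post-bounce trajectory."""
    params = torch.zeros(4, requires_grad=True)  # hypothetical parameter vector
    opt = torch.optim.Adam([params], lr=lr)
    for _ in range(n_steps):
        opt.zero_grad()
        pred = pim(pre_traj, params)
        loss = torch.nn.functional.mse_loss(pred, post_traj)
        loss.backward()  # gradients flow through the frozen forward model
        opt.step()
    return params.detach()
```

The design point is that the same differentiable forward pass serves two purposes: prediction (one call to `pim`) and parameter estimation (gradient descent on the matching loss while the network weights stay fixed).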
BkgosiRcKm
Deep Recurrent Gaussian Process with Variational Sparse Spectrum Approximation
[ "Roman Föll", "Bernard Haasdonk", "Markus Hanselmann", "Holger Ulmer" ]
Modeling sequential data has become more and more important in practice. Some applications are autonomous driving, virtual sensors and weather forecasting. To model such systems, so-called recurrent models are frequently used. In this paper we introduce several new Deep Recurrent Gaussian Process (DRGP) models based on the Sparse Spectrum Gaussian Process (SSGP) and the improved version, called Variational Sparse Spectrum Gaussian Process (VSSGP). We follow the recurrent structure given by an existing DRGP based on a specific variational sparse Nyström approximation, the recurrent Gaussian Process (RGP). Similar to previous work, we also variationally integrate out the input-space and hence can propagate uncertainty through the Gaussian Process (GP) layers. Our approach can deal with a larger class of covariance functions than the RGP, because its spectral nature allows variational integration in all stationary cases. Furthermore, we combine the (Variational) Sparse Spectrum ((V)SS) approximations with a well-known inducing-input regularization framework. For the DRGP extension of these combined approximations and the simple (V)SS approximations an optimal variational distribution exists. We improve over current state-of-the-art methods in prediction accuracy for experimental data-sets used for their evaluation and introduce a new data-set for engine control, named Emission.
[ "Deep Gaussian Process Model", "Recurrent Model", "State-Space Model", "Nonlinear system identification", "Dynamical modeling" ]
https://openreview.net/pdf?id=BkgosiRcKm
https://openreview.net/forum?id=BkgosiRcKm
ICLR.cc/2019/Conference
2019
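The reviews and rebuttals below repeatedly refer to the sparse spectrum (SS) approximation underlying the models in the abstract above. As a minimal sketch of that idea — generic Rahimi & Recht (2008) random Fourier features for a squared-exponential kernel, not the authors' (V)SSGP code; all names and defaults are illustrative — the kernel matrix is approximated by an outer product of trigonometric features at sampled spectral points:

```python
import numpy as np

def rff_features(X, lengthscale=1.0, n_features=100, seed=0):
    """Random Fourier features for the squared-exponential kernel:
    K(X, X) is approximated by Phi @ Phi.T."""
    rng = np.random.default_rng(seed)
    # Spectral points sampled from the kernel's Gaussian spectral density
    Z = rng.normal(scale=1.0 / lengthscale, size=(n_features, X.shape[1]))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ Z.T + b)

# Usage: Phi = rff_features(X); K_approx = Phi @ Phi.T
```

The variational variant discussed in the paper treats the spectral points as random variables to be integrated out rather than fixed samples, which is what makes the approach tractable for all stationary covariance functions.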
{ "note_id": [ "SJxEA2PleV", "rJg6mhJi07", "ryeukb2_07", "Hyxwdyn_C7", "S1lc23s_Am", "BJgt7uoO07", "HJeVgHjqh7", "rJgI340tn7", "SJgR7X0th7" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544744140155, 1543334948816, 1543188704225, 1543188334670, 1543187633854, 1543186464761, 1541219564059, 1541166254234, 1541165862474 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper650/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper650/Authors" ], [ "ICLR.cc/2019/Conference/Paper650/Authors" ], [ "ICLR.cc/2019/Conference/Paper650/Authors" ], [ "ICLR.cc/2019/Conference/Paper650/Authors" ], [ "ICLR.cc/2019/Conference/Paper650/Authors" ], [ "ICLR.cc/2019/Conference/Paper650/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper650/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper650/AnonReviewer2" ] ], "structured_content_str": [ "{\"metareview\": \"This paper is concerned with combining past approximation methods to obtain a variant of Deep Recurrent GPs. While this variant is new, 2/3 reviewers make very overlapping points about this extension being obtained from a straightforward combination of previous ideas. Furthermore, R3 is not convinced that the approach is well motivated, beyond \\u201cfilling the gap\\u201d in the literature.\\n\\nAll reviewers also pointed out that the paper is very hard to read. The authors have improved the manuscript during the rebuttal, but the AC believes that the paper is still written in an unnecessarily complicated way. \\n\\nOverall the AC believes that this paper needs some more work, specifically in (a) improving its presentation (b) providing more technical insights about the methods (as suggested by R2 and R3), which could be a means of boosting the novelty.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"New model but presentation and insights need to improve.\"}", "{\"title\": \"Response to All Reviewer\", \"comment\": \"We are sorry for a 'typo' in the lower bound in Equation (15) regarding the expectation:\\n\\nActually one has to replace\\n\\nE_p_l[ sum_l ... ]\\n\\nwith\\n\\nlog( prod_l E_p_l[ ... ] )\\n\\nWe are sorry for this and will fix this in the next upload phase.\\n\\nThank you. Sincerely,\\n\\nThe authors.\"}", "{\"title\": \"Response to AnonReviewer2\", \"comment\": \"We greatly appreciate your insightful feedback. We would like to respond to your comments and explain how we improved the manuscript.\\n\\nFirst of all, we recognized that you changed your rating from\\n\\n<<Rating: 7, Confidence: 3>> to <<Rating: 5: Confidence: 2>> ,\\n\\nwithout any further explanations and without waiting for our responses.\\nWe do not know, what your concerns are, but we hope, we can address these with our answers now.\\n\\n<<Arguments against acceptance: Clarity of the paper.\\n-> Section 2 writing style lacks a bit of cohesion, relating the paragraphs may be a solution.>>\\n\\nThanks, we addressed this issue as good as possible.\\n\\n<<-> Abstract should be rewritten adding a motivation and focusing more on the problems being solved and less in the details of the solutions. >>\\n\\nThank you for the advice. We reformulated the abstract. 
We added:\\n\\u201cOur approach can deal with a larger class of covariance functions than the RGP, because its spectral nature allows variational integration in all stationary cases.\\u201d \\n\\n\\u201cFor the DRGP extension of these combined approximations and the simple (V)SS approximations an optimal variational distribution exists.\\u201d\\n\\nAnd deleted\\n\\n\\u201cThis case naturally collapses to a tractable expression by calculating the integrals. For the simple\\nextension of the (V)SS approximation an optimal variational distribution exists. \\u201c\\n\\n \\u201cTraining is realized through optimizing the variational lower bounds.\\u201d\\n\\n<<-> Recurrent indexes that go backwards (i) of Eq. 1. should be explained why are going backwards before being used like that. Newcomers may be confused.\", \"minor_issues_and_typos\": \"-> (V)SS not defined before being used.\\n-> Q is not defined in section 3.1 paragraph 1.\\n-> A valid covariance function must produce a PSD matrix, put that in section 3.1. >>\\n\\nThank you for pointing out these issues.\\nWe see your point with the time horizon and simplified it to H, similar to our experiments.\\nWe addressed the issue with (V)SS and also defined the GP in the abstract.\\nWe added in Section 3.1: \\n\\u201c\\u2026, Q \\u2208 N the input-dimension, \\u2026\\u201d\\n\\u201cBe aware that a valid covariance function must produce a positive definite matrix K_NN := (k(x_i, x_j))_{i,j=1}^N \\u2208 R^{N\\u00d7N}, when filling in combinations of data-input points x_i, i = 1,...,N.\\u201d\\n\\n<<-> I do not see how U marginalizes in Eq. 7, kind of confused about that, I think that it should be p(y|X,U).>>\\n\\nYou are correct that the LHS depends on U and we have to make this point clear to the reader. Additionally, the LHS depends on further variables like L, p, b, etc. Nevertheless, we decided to highlight U just in p(y|a,Z,U,X) in the integral. On the one hand, our approximations are built on U in the following sections (cf. Section 3.3) and on the other hand we want to be notationally consistent with Gal (2016), Section 3. Having said this, for clarity we added: \\n\\u201c\\u2026highlighting U just in the integral, to be notationally consistent with Gal & Turner (2015), Section 3.\\u201d\\n\\n <<-> Section 3.4 statistics should be explained.>>\\n\\nThank you for the advice. We added: \\n\\u201cThese statistics are essentially the given matrices \\u03a6, \\u03a6^T \\u03a6, K_MN K_NM from the beginning, but every input h_i and every spectral point z_m are now replaced by a mean \\u00b5_i, \\u03b1_m and a variance \\u03bb_i, \\u03b2_m, resulting in matrices of the same size. The property of positive definiteness is preserved.\\u201d\\n\\nThank you. Sincerely,\\n\\nThe authors\"}", "{\"title\": \"Response to AnonReviewer1\", \"comment\": \"We greatly appreciate your insightful feedback. We would like to respond to your comments and explain how we improved the manuscript.\\n\\n<<For the first contribution stated by the authors, what are the theoretical and practical implications of the different regularization terms/properties \\u2026>>\\n\\nThank you for asking for details. The practical implication is that the GP is regularized during optimization when optimizing over U. 
These parameters U have, following Gal & Turner (2015), the same properties as in the Nystr\\u00f6m case, but in the lower bound in (8) they are simply used without being linked to the weights a.\\nWhat we can show is that we can further marginalize the integral in (7) to a Gaussian with mean 0 and covariance matrix\\nK = K_NN + (\\u03a6 (K_MM)^-1 \\u03a6^T - K_NM (K_MM)^-1 K_MN) + \\u03c3_noise^2.\\nWe see that we have the true covariance matrix plus a discrepancy of the sparse approximations of the Sparse Spectrum and the Nystr\\u00f6m covariance matrix, plus noise. Therefore, our approximation can be seen as a trade-off between these two sparse approximations.\\nFor now we included these insights in Section 3.3. \\nWe further fixed a typo in the prior assumption and added the noise. We also made a distinction for the IP case in Section 4.2.\\n\\n<<Can the authors provide a detailed derivation of DVI for equation 13 as well as for the predictive distributions in Section 6.3.5? >>\\n\\nPlease refer to Section 6.3.6 and to the end of Section 6.3.5 for the detailed derivation.\\n\\n<<Can the authors provide a time complexity analysis of all the tested deep recurrent GPs? >>\\n\\nSince the different implementations have different evolution states with regard to optimization, we decided against a time complexity analysis, since we think it might be misleading. But we agree that your question is of interest and should be addressed in separate work.\\n\\n<<Would the authors' proposed approach be able to extend the framework of Hoang et al. (2017) (see below) \\u2026 >>\\n\\nThank you for making us aware of this paper. We do not see any problems. We added a sentence in Section 5.3.\\n\\n<<Minor issues: Just below equation 6, equation 9, \\u2026 need to decide whether to italicize their notations in bold or not. >>\\n\\nThank you for the advice. We consciously decided to use italic and bold. To make this point clear to the reader, we added an explanation at the beginning of Section 3. We hope this will meet your concerns. We fixed an issue for the expectation in Section 3.1: we write it in bold as in the entire paper and also write f_x instead of f(x) for the random variable at x.\\n\\n<<Equations are not properly referenced in a number of instances. >>\\n<<The authors have used their commas too sparingly, which makes some sentences very hard to parse. >>\\nThanks for your comment. We tried to address these issues as well as possible.\\n\\n<<What is the difference between REVARB-(V)SS(-IP), DRGP-(V)SS(-IP), and DRGP-VSS-IP?\", \"page_5\": \"will makes it possible? >>\\n\\nThank you for pointing out all those issues; we addressed all of them.\\n\\n<<Page 5: to simplify notation, we write h^{L+1}_{Hx+1:} = y_{Hx+1:}?\\nSuch a notation does not look simplified. >>\\n\\nThe simplification comes from the special role of the last layer, where h^(L+1) corresponds with y. In order to avoid these notational issues throughout the paper, we introduced the respective notation (see the joint density and Equations 14, 15). Anyhow, we changed the position of the simplification right below the joint density.\\n\\n<<Equation after equation 12: On LHS, should U^(l) be a random variable? >>\\n\\nWe did not define any prior or variational distribution on these parameters, but the standard notation is to assume it is. We also write f_X |X, where e.g. 
over X no distribution is defined.\\n\\n<<Page 17: Should the expressions begin with >=?>>\\n\\nYes, in order to highlight that they are lower bounds.\\n\\nThank you. Sincerely,\\n\\nThe authors\", \"equation_8\": \"q_a and q_Z should be placed next to the expectation.\", \"page_4\": \"choosen?\"}", "{\"title\": \"Response to AnonReviewer3\", \"comment\": \"We greatly appreciate your insightful feedback. We would like to respond to your comments and explain how we improved the manuscript.\\n\\n<<Most (if not all) of the technical developments in the paper are straightforward applications \\u2026 technical contribution of the paper is largely incremental.>>\\n\\nYou are correct that many of the steps we take are incremental and we face the same difficulties as in Titsias & Lawrence (2010), Gal & Turner (2015) and Mattos et al. (2016). Nevertheless, we think that the combination of these methods is new, as well the integration of the input space for spectral covariance function, which is not straightforward. Also, the combination of (V)SSGP with the regularization property of Titsias & Lawrence (2009);(2010) is not clear from the start. All in all, we hope that the sum of all these steps gives a valuable contribution to the community.\\n\\n<<Furthermore, while it is sensible to use random-feature approximation approaches (such as SS and VSS), \\u2026 and from my perspective, having a prior conditioned on the inducing variables lacks any theoretical motivation. >>\\n\\nThese parameters U have, following Gal & Turner (2015), the same properties as in the Nystr\\u00f6m case, but in the lower bound in (8) they are simply used without being linked to the weights a.\\nWhat we can show, is, that we can further marginalize the integral in (7) to a Gaussian with mean 0 and covariance matrix\\nK = K_NN + (\\u03a6 (K_MM)^-1 \\u03a6^T - K_NM (K_MM)^-1 K_MN) + \\u03c3_noise^2.\\nWe see that we have the true covariance matrix plus a discrepancy of the sparse approximations of the Sparse Spectrum and the Nystr\\u00f6m covariance matrix plus noise. Therefore, our approximation can be seen as a trade-off between these two sparse approximations.\\nFor now we included these insights in Section 3.3. \\nWe further fixed a typo in the prior assumption and added the noise. We also made a distinction for the IP case in Section 4.2.\\n\\n<<The empirical results are a bit of a mixed bag, \\u2026, it will be good to have some insights into when the proposed methods are expected to be better than their competitors. >>\\n\\nOf course you are correct. Our approach does not beat all corresponding benchmarks. The question, what method should be used in what setting is of great interest and we want to address this topic in future work. Currently we cannot characterize situations, when the new methods are better. Nevertheless, we hope that our work shows the capabilities of our approach and therefore is of interest.\\n\\n<<While the proposed method is motivated from an uncertainty propagation perspective, \\u2026 predictive posterior distributions. What is the point of using GPs otherwise? >>\\n\\nWe compared our predictive posterior distribution with the one of Mattos et al. (2016). We implemented his version by ourselves and did not have the same problems with variance predictions as in his paper Mattos et al. (2016) Figure 2, (l) (we therefore think that it might have been an implementation issue). \\nAll in all we think, that the variance predictions are equally good and therefore we concentrated on the RSME comparison. 
We added two more sentences in Section 5.2 regarding this.\\n\\n<<It is unnecessary to cite Bishop to explain how one obtains a marginal distribution. >>\\n\\nYes, it is straightforward. We deleted the reference before Equation 7, but we think that it might be of interest for a reader unfamiliar with the topic, and so we kept the reference in Section 3.3.\\n\\n<<Would it be possible to use the work of Cutajar et al (2017), \\u2026 If so, why aren\\u2019t the authors comparing to this? >>\\n\\nThank you for the input. We added the experiments for this DGP with NARX structure for the first layer to our paper. \\n\\n<<I recommend the authors use the notation p(v) = \\u2026 and q(v) = \\u2026 everywhere rather than v ~ \\u2026>>\\n\\nThank you for your advice. We agree that the notation p(v)=... is widely used. Nevertheless, we prefer to be mathematically more precise and differentiate between the random variable v (italic) and the realizations/samples v (upright) and therefore we use the notation p_v, where v ~ N(m,v). We agree that our paper was not very specific in highlighting that point. To make this clearer to the reader, we added an explanation at the beginning of Section 3. We hope this will meet your concerns.\\n\\n<<The analysis of Figure 1 needs expanding. >>\\n\\nThank you for the advice. We added the following explanation and hope that we meet your concerns: \\n\\u201cWe initialize the states with the output training-data for all layers with minor noise (first column) and after training we obtain a trained state (second column).\\u201d\\n\\n<<What are the performance values obtained with a standard recurrent neural net / LSTM? >>\\n\\nWe agree that a comparison with those methods is of interest and we therefore added the results of Al-Shedivat et al. (2017) and our own results for standard RNN and LSTM. We also added the results where we deleted the auto-regressive part in the first layer for GP-LSTM.\\n\\nThank you. Sincerely,\\n\\nThe authors\"}", "{\"title\": \"Response to All Reviewer\", \"comment\": \"We greatly appreciate all your insightful feedback. We would like to respond to your comments in general and explain how we improved the manuscript.\\nIn particular, we included three more references: Eleftheriadis et al. (2017) for SSM models, (Rahimi & Recht, 2008) for the case of Random Fourier Features, Hoang et al. (2017) for a generalization of the (V)SSGP.\\nMore methods are included in the experiments: the RNN, the LSTM and the DGP-RFF from Cutajar et al. (2016).\\nWe improved the appendix.\\nFurthermore, we added the detailed derivation of DVI in the appendix.\\nWe further changed the appearance of the density for all expectations, bringing it to the front, for more readability.\\nThere was a discussion about our notation (upright, bold and italic), therefore we added an explanation at the beginning of Section 3.\\nIn Section 3.3 we explain more in detail how the combination with the IP case makes sense from a theoretical point of view and hope this will meet your concerns.\\nIn Section 3.4 we explain the upcoming statistics more in detail.\\nWe further fixed many language issues and punctuation problems and hope the paper has improved in clarity.\\n\\nDetails are explained in the individual responses to each referee report.\\n\\nThank you. 
Sincerely,\\n\\nThe authors.\"}", "{\"title\": \"A combination of existing sparse-spectrum techniques and deep recurrent Gaussian processes but not properly justified\", \"review\": \"This paper addresses the problem of modeling sequential data based on one of the deep recurrent Gaussian process (DRGP) structures proposed by Mattos et al (2016). This structure acts like a recurrent neural net where every layer is defined as a GP. One of the main limitations of the original method proposed by Mattos et al (2016) is that it is limited to a small set of covariance functions, as the variational expectations over these have to be analytically tractable.\\n\\nThe main contributions of this paper are the use of previously proposed inference, namely (i) the sparse spectrum (SS) of Lazaro-Gredilla et al (2010); its variational improvement by Gal and Turner (2015) (VSS); and the inducing-point (IP) framework of Titsias and Lawrence (2010) into the recurrent setting of Mattos et al (2016). Most (if not all) of the technical developments in the paper are straightforward applications of the results in the papers above. Therefore, the technical contribution of the paper is largely incremental. Furthermore, while it is sensible to use random-feature approximation approaches (such as SS and VSS) in GP models, it is very unclear why combining the IP framework with SS approaches makes any sense at all. Indeed, the original IP framework was motivated as a way to deal with the scalability issue in GP models, and the corresponding variational formulation yielded a nice property of an additional regularization term in the variational bound. However, making the prior over a (Equation 9) conditioned on the inducing variables U is rather artificial and lacks any theoretical justification. To elaborate on this, in the IP framework both the latent functions (f in the original paper) and the inducing inputs come from the same GP prior, hence having a joint distribution over these comes naturally. However, in the approach proposed in this paper, a is a simple prior over the weights in a linear-in-the-parameters model, and from my perspective, having a prior conditioned on the inducing variables lacks any theoretical motivation. \\n\\nThe empirical results are a bit of a mixed bag, as the methods proposed beat (by a small margin) the corresponding benchmarks on 6 out of 10 problems. While one would not expect a proposed method to win on all possible problems (no free lunch), it will be good to have some insights into when the proposed methods are expected to be better than their competitors. \\n\\nWhile the proposed method is motivated from an uncertainty propagation perspective, only point-error metrics (RMSE) are reported. The paper needs to do a proper evaluation of the full predictive posterior distributions. What is the point of using GPs otherwise?\", \"other_comments\": \"I recommend the authors use the notation p(v) = \\u2026 and q(v) = \\u2026 everywhere rather than v ~ \\u2026 as the latter may lead to confusion on how the priors and the variational distributions are defined. \\nIt is unnecessary to cite Bishop to explain how one obtains a marginal distribution\\nWould it be possible to use the work of Cutajar et al (2017), who use random feature expansions for deep GPs, in the sequential setting? 
If so, why aren\\u2019t the authors comparing to this?\\nThe analysis of Figure 1 needs expanding.\\nWhat are the performance values obtained with a standard recurrent neural net / LSTM?\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Combination of known ideas, hard to read\", \"review\": \"This paper proposes deep recurrent GP models based on the existing DRGP framework, two works on sparse spectrum approximation as well as that of inducing points. In these models, uncertainty is propagated by marginalizing out the hidden inputs at every layer.\\n\\nThe authors have combined a series of known ideas in the proposed work. There is a serious lack of discussion or technical insights from the authors for their technical formulations: in particular, what are the non-trivial technical challenges addressed in the proposed work? Furthermore, the authors are quite sloppy in referencing equations and inconsistent in the use of their defined notations and acronyms. I also find it hard to read and understand the main text due to awkward sentence structures.\\n\\nHave the authors revealed their identity on page 2 of the paper? I quote: \\\"We refer to the report Foll et al. (2017) for a detailed but preliminary formulation of our models and experiments.\\\" and \\\"DRGP-(V)SS code available from http://github.com/RomanFoell/DRGP-VSS.\\\"\", \"detailed_comments_are_provided_below\": \"For the first contribution stated by the authors, what are the theoretical and practical implications of the different regularization terms/properties between the lower bounds in equations 10 vs. 8? These are not described in the paper.\\n\\nCan the authors provide a detailed derivation of DVI for equation 13 as well as for the predictive distributions in Section 6.3.5?\\n\\nCan the authors provide a time complexity analysis of all the tested deep recurrent GPs?\\n\\n\\nWould the authors' proposed approach be able to extend the framework of Hoang et al. (2017) (see below) that has generalized the SS approximation of Lazaro-Gredilla et al. (2010) and the improved VSS approximation of Gal & Turner (2015)?\\n\\nHoang, Q. M.; Hoang, T. N.; and Low, K. H. 2017. A generalized stochastic variational Bayesian hyperparameter learning framework for sparse spectrum Gaussian process regression. In Proc. AAAI, 2007\\u20132014.\", \"minor_issues\": \"Just below equation 6, equation 9, and throughout the entire paper, the authors need to decide whether to italicize their notations in bold or not.\\n\\nEquations are not properly referenced in a number of instances.\\n\\nThe authors have used their commas too sparingly, which makes some sentences very hard to parse.\\n\\nWhat is the difference between REVARB-(V)SS(-IP), DRGP-(V)SS(-IP), and DRGP-VSS-IP?\", \"equation_7\": \"LHS should be conditioned on U.\", \"page_4\": \"choosen?\", \"equation_8\": \"q_a and q_Z should be placed next to the expectation.\", \"page_5\": \"to simplify notation, we write h^{L+1}_{Hx+1:} = y_{Hx+1:}? 
Such a notation does not look simplified.\", \"equation_after_equation_12\": \"On LHS, should U^(l) be a random variable?\", \"page_17\": \"Should the expressions begin with >=?\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"DEEP RECURRENT GAUSSIAN PROCESS WITH VARIATIONAL SPARSE SPECTRUM APPROXIMATION\", \"review\": \"Overall Score: 7/10.\", \"confidence_score\": \"7/10.\", \"detailed_comments\": \"This paper introduces various Deep Recurrent Gaussian Process (DRGP) models based on the Sparse Spectrum Gaussian Process (SSGP) models and the Variational Sparse Spectrum Gaussian Process (VSSGP) models. This is a good paper and the proposed models are very sound, so I recommend acceptance, although as its main weakness I can say that it is very technical, so it can be difficult to follow. Adding more intuitive ideas, motivation and maybe a figure for each step would be a solution. Apart from that it is a really good paper, congratulations.\", \"related_to\": \"RNN models and Sparse Nystrom approximation.\", \"strengths\": \"Models are very sound, solutions are solid, the proposed methodology is correct and the empirical results and experiments are valid and properly done.\", \"weaknesses\": \"It is too difficult to follow and it is written in an extremely technical way. More intuitions and a proper motivation both in the abstract and introduction may be put in order to make the paper easier to read and, hence, more used by researchers and data scientists.\\n\\nDoes this submission add value to the ICLR community? : Yes it does, the experiments show the efficiency of the proposed methods in some scenarios and are valid methodologies.\", \"quality\": \"Is this submission technically sound?: Yes it is.\\nAre claims well supported by theoretical analysis or experimental results?: Experimental results prove the methods empirically and appendixes show the analysis performed in a clear and elegant way.\\nIs this a complete piece of work or work in progress?: Complete piece of work.\\nAre the authors careful and honest about evaluating both the strengths and weaknesses of their work?: Yes, and I would emphasize that I have liked that some experiments are won by other methods such as GP-LSTM; they are very honest.\", \"clarity\": \"Is the submission clearly written?: Yes, but it is difficult for newcomers due to the reasons that I have stated before.\\nIs it well organized?: Yes it is.\\nDoes it adequately inform the reader?: Yes it does.\", \"originality\": \"Are the tasks or methods new?: Yes, they are sound.\\nIs the work a novel combination of well-known techniques?: Yes it is.\\nIs it clear how this work differs from previous contributions?: Yes.\\nIs related work adequately cited?: Yes, being a strength of the paper.\", \"significance\": \"Are the results important?: I would argue that they are and are a clear alternative to consider in order to solve these problems.\\nAre others likely to use the ideas or build on them?: If the paper is written in a more friendly way, yes.\\nDoes the submission address a difficult task in a better way than previous work?: Yes I think.\\nDoes it advance the state of the art in a demonstrable way?: Yes, empirically.\", \"arguments_for_acceptance\": 
\"Clarity of the paper.\", \"minor_issues_and_typos\": \"-> (V)SS not defined before being used.\\n-> Abstract should be rewritten adding a motivation and focusing more on the problems being solved and less in the details of the solutions.\\n-> Recurrent indexes that go backwards (i) of Eq. 1. should be explained why are going backwards before being used like that. Newcomers may be confused.\\n-> Section 2 writing style lacks a bit of cohesion, relating the paragraphs may be a solution.\\n-> Q is not defined in section 3.1 paragraph 1.\\n-> A valid covariance function must produce a PSD matrix, put that in section 3.1. \\n-> I do not see how U marginalizes in Eq. 7, kind of confused about that, I think that it should be p(y|X,U).\\n-> Section 3.4 statistics should be explained.\", \"reading_thread_and_authors_response_rebuttal_decision\": \"=================================================\\n\\nI consider that the authors have perfomed a good rebuttal and reading the other messages and the authors response I also consider that my issue with clarity is solved. Hence, I upgrade my score to 7 and recommend the paper for publication.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}" ] }
H1eqjiCctX
Understanding Composition of Word Embeddings via Tensor Decomposition
[ "Abraham Frandsen", "Rong Ge" ]
Word embedding is a powerful tool in natural language processing. In this paper we consider the problem of word embedding composition --- given vector representations of two words, compute a vector for the entire phrase. We give a generative model that can capture specific syntactic relations between words. Under our model, we prove that the correlations between three words (measured by their PMI) form a tensor that has an approximate low-rank Tucker decomposition. The result of the Tucker decomposition gives the word embeddings as well as a core tensor, which can be used to produce better compositions of the word embeddings. We also complement our theoretical results with experiments that verify our assumptions, and demonstrate the effectiveness of the new composition method.
[ "word embeddings", "semantic composition", "tensor decomposition" ]
https://openreview.net/pdf?id=H1eqjiCctX
https://openreview.net/forum?id=H1eqjiCctX
ICLR.cc/2019/Conference
2019
{ "note_id": [ "SJlCZ4AflN", "SJgh2tslAQ", "Skeg0uolRm", "HyeFqDog0X", "Skg5uUslRX", "rkeMbIWjn7", "SJgejqZqhQ", "Skg4J4xt27" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544901637561, 1542662580028, 1542662343615, 1542662032950, 1542661745548, 1541244409600, 1541180056358, 1541108699543 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper649/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper649/Authors" ], [ "ICLR.cc/2019/Conference/Paper649/Authors" ], [ "ICLR.cc/2019/Conference/Paper649/Authors" ], [ "ICLR.cc/2019/Conference/Paper649/Authors" ], [ "ICLR.cc/2019/Conference/Paper649/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper649/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper649/AnonReviewer3" ] ], "structured_content_str": [ "{\"metareview\": \"AR1 is concerned about lack of downstream applications which show that higher-order interactions are useful and asks why not to model higher-order interactions for all (a,b) pairs. AR2 notes that this submission is a further development of Arora et al. and is satisfied with the paper. AR3 is the most critical regarding lack of explanations, e.g. why linear addition of two word embeddings is bad and why the corrective term proposed here is a good idea. The authors suggest that linear addition is insufficient when final meaning differs from the individual meanings and show tome quantitative results to back up their corrective term.\\n\\nOn balance, all reviewers find the theoretical contributions sufficient which warrants an accept. The authors are asked to honestly reflect all uncertain aspects of their work in the final draft to reflect legitimate concerns of reviewers.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Further development/continuation of Arora et al.\"}", "{\"title\": \"Uploaded revision\", \"comment\": \"We have uploaded a revision of the paper that incorporates suggestions of the reviewers and expands on experimental results. The largest changes are in Section 5 on the experimental verification, where we include the results of our experiments on verb-object phrases (previously we only showed results for adjective-noun phrases).\"}", "{\"title\": \"Response to AnonReviewer3\", \"comment\": \"We thank the reviewer for reading and evaluating our submission.\\n\\nAdditive composition vs. tensor: as discussed in our introduction (and illustrated by the qualitative results in Tables 1 and 2), we believe that linear addition of two word embeddings may be an insufficient representation of the phrase when the combined meaning of the words differs from the individual meanings. Syntactically related word pairs such as adjective-noun and verb-object pairs can have this property. The tensor term can capture the specific meaning of the word pair taken as a whole, as evidenced by qualitative and quantitative evaluations.\", \"rand_walk_and_syntax\": \"we will clarify this point more carefully: what we mean is that the RAND-WALK model itself does not treat syntactically related word-pairs different from other word pairs. From a purely model perspective, in the RAND-WALK model each word is generated independent of all others given the discourse vector, hence the model itself does not account for syntactic relationships between words. 
Certainly the word embeddings trained based on this model may capture syntactic information that is communicated through the co-occurrence statistics of the training corpus, which allows their embeddings to perform decently on syntactic analogy tasks. Our goal is to explicitly model syntactic dependencies in the context of a word embedding model, in the hopes that the learned embeddings might capture additional information that is missed in non-syntax-aware embedding models.\", \"weighting_the_tensor_term\": \"we don't expect that our model or any other model will correspond perfectly with how humans use language in practice. When it comes to tasks such as predicting phrase similarity, we give our model a bit of extra flexibility to account for this discrepancy. We also note that previous works on embedding composition also explore various re-weighting schemes. While the meaning of the weighting parameter isn't a central question in our work, one can think of it as the degree to which specific knowledge of the syntactic relationship between the two words affects the phrase's overall meaning.\", \"verifying_assumptions_in_our_model\": \"we note that in section 5 of the paper, we verify the assumptions and concentration phenomena introduced in our model.\"}", "{\"title\": \"Response to AnonReviewer1\", \"comment\": \"We are grateful to the reviewer for their time and effort in reading our paper and providing feedback.\", \"generative_model_assumptions\": \"our model is an expansion of the original RAND-WALK model of Arora et. al., with the purpose of accounting for syntactic dependencies. The additional assumptions we include and the concentration phenomena we prove theoretically are verified empirically in section 5, so our results do hold up on real data.\", \"use_on_downstream_tasks\": \"we believe that capturing syntactic relationships using a tensor can be useful for some downstream tasks, since our results in the paper suggest that it captures additional information above and beyond the standard additive composition. However, as the main goal of this paper is to introduce and analyze the model, we defer more application-focused analysis to future work.\", \"interaction_between_arbitrary_word_pairs\": \"our model introduces the tensor in order to capture syntactic relationships between pairs of words, such as adjective-noun and verb-object pairs. While it might be interesting to try to capture interactions between all pairs of words, that is not justified by our model and we didn't explore it. However, we also trained our model using verb-object pairs, and we have updated section 5 as well as the appendix to include these additional results.\\n\\nComparison to Arora, Liang, Ma ICLR 2017: we appreciate the suggestion to include a comparison with the SIF embedding method of Arora et. al., as this method is also obtained from a variant of the original RAND-WALK paper. We have updated Table 2 and the discussion in section 5 to include these additional results. As reported in their paper, the SIF embeddings yield a strong baseline for sentence embedding tasks, and we find the same to be true in the phrase similarity task for adjective-noun phrases (not so for verb-object phrases). However, we find that we can improve upon the SIF performance by addition of the tensor component from our model. 
(We note that we have just used the tensors trained in our original model; it is possible that combining the model in SIF and syntactic RAND-WALK more carefully could give even better results.)\", \"additional_citations\": \"we have updated the paper to include both additional citations.\"}", "{\"title\": \"novel, but it is unclear that the approach is useful for downstream tasks\", \"review\": \"The authors consider the use of tensor approximations to more accurately capture syntactical aspects of compositionality for word embeddings. Given two words a and b, when your goal is to find a word whose meaning is roughly that of the phrase (a,b), a standard approach is to find the word whose embedding is close to the sum of the embeddings, a + b. The authors point out that others have observed that this form of compositionality does not leverage any information on the syntax of the pair (a,b), and they propose using a tensor contraction to model an additional multiplicative interaction between a and b, so they propose finding the word whose embedding is closest to a + b + T*a*b, where T is a tensor, and T*a*b denotes the vector obtained by contracting a and b with T. 
They test this idea specifically on the use-case where (a,b) is an adjective,noun pair, and show that their form of compositionality outperforms weighted versions of additive compositionality in terms of spearman and pearson correlation with human judgements. In their model, the word embeddings are learned separately, then the tensor T is learned by minimizing an objective whose goal is to minimize the error in predicting observed trigram statistics. The specific objective comes from a nontrivial tensorial extension of the original matricial RAND-WALK model for learning word embeddings.\\n\\nThe topic is fitting with ICLR, and some attendees will find the results interesting. As in the original RAND-WALK paper, the theory is interesting, but not the main attraction, as it relies on strong generative modeling assumptions that essentially bake in the desired results. The main appeal is the idea of using T to model syntactic interactions, and the algorithm for learning T. Given that the main attraction of the paper is the potential for more performant word embeddings, I do not believe the work will have wide appeal to ICLR attendees, because no evidence is provided that the features from the learned tensor, say [a, b, T*a*b], are more useful in downstream applications than [a,b] (one experiment in sentiment analysis is tried in the supplementary material with no compelling difference shown).\", \"pros\": [\"theoretical justification is given for their assumption that the higher-order interactions can be modeled by a tensor\", \"the tensor model does deliver some improvement over linear composition on noun-adjective pairs when measured against human judgement\"], \"cons\": [\"no downstream applications are given which show that these higher order interactions can be useful for downstream tasks.\", \"the higher-order features T*a*b are useful only when a is noun and b is an adjective: why not investigate using T to model higher-order interaction for all (a,b) pairs regardless of the syntactic relationships between a and b?\", \"comparison should be made to the linear composition method in the Arora, Liang, Ma ICLR 2017 paper\"], \"some_additional_citations\": [\"the above-mentioned ICLR paper provides a performant alternative to unweighted linear composition\", \"the 2017 Gittens, Achlioptas, Drineas ACL paper provides theory on the linear composition of some word embeddings\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"The paper aims to produce useful word embedding compositions using a method based on the Tucker decomposition of a three-way PMI tensor. The paper presents a potentially promising solution to the problem of compositions in word embedding; yet it is marred by lack of theoretical insights, unwarranted over-generalizations, leaps in justification, and sub-optimal presentation.\", \"review\": \"The authors suggest a method to create combined low-dimensional representations for combinations of pairs of words which have a specific syntactic relationship (e.g. adjective - noun). Building on the generative word embedding model provided by Arora et al. 
(2015), their solution uses the core tensor from the Tucker decomposition of a 3-way PMI tensor to generate an additive term, used in the composition of two word embedding vectors.\\n\\nAlthough the method the authors suggest is a plausible way to explicitly model the relationship between syntactic pairs and to create a combined embedding for them, their presentation does not make this obvious and it takes effort to reach the conclusion above. Unlike Arora's original work, the assumptions they make on their subject material are not supported enough, as in their lack of explanation of why linear addition of two word embeddings should be a bad idea for composition of the embedding vectors of two syntactically related words, and why the corrective term produced by their method makes this a good idea. Though the title promises a contribution to an understanding of word embedding compositions in general, they barely expound on the broader implications of their idea in representing elements of language through vectors.\\n\\nTheir lack of willingness to ground their claims or decisions is even more apparent in two other cases. The authors claim that Arora's RAND-WALK model does not capture any syntactic information. This is not true. The results presented by Arora et al. indeed show that RAND-WALK captures syntactic information, albeit to a lesser extent than other popular methods for word embedding (Table 1, Arora et al. 2015). Another unjustified choice by the authors is their choice of weighting the tensor term (when it is being added to two base embedding vectors) in the phrase similarity experiment. The reason the authors provide for weighting the composition tensor is the fact that in the unweighted version their model produced a worse performance than the additive composition. One would at least expect an after-the-fact interpretation for the weighted tensor term and what this implies with regard to their method and syntactic embedding compositions in general.\\n\\nArora's generative model for word embeddings, on which the current paper is largely based, not only makes the mathematical relationship among different popular word embedding methods explicit, but also, by making and verifying explicit assumptions with regard to properties of the word embeddings created by their model, is able to explain why low-dimensional embeddings provide superior performance in tasks that implicate semantic relationships as linear algebraic relations. Present work, however interesting with regard to its potential implications, strays away from providing such theoretical insights and suffices with demonstrating limited improvements in empirical tasks.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}" ] }
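As a concrete reading of the composition rule a + b + T*a*b debated in the reviews above, here is a minimal numpy sketch of the phrase vector and a nearest-word lookup. The `weight` argument mirrors the weighted variant AnonReviewer3 questions; all names are illustrative rather than the authors' code:

```python
import numpy as np

def compose(v_a, v_b, T, weight=1.0):
    """Phrase vector for word pair (a, b): additive part plus the tensor term
    T*a*b, i.e. the contraction of the first two modes of T (d x d x d)."""
    tensor_term = np.einsum('ijk,i,j->k', T, v_a, v_b)
    return v_a + v_b + weight * tensor_term

def nearest_word(phrase_vec, embeddings, vocab):
    # Cosine similarity of the phrase vector against all word vectors (V x d)
    sims = embeddings @ phrase_vec / (
        np.linalg.norm(embeddings, axis=1) * np.linalg.norm(phrase_vec) + 1e-9)
    return vocab[int(np.argmax(sims))]
```

With weight = 0 this reduces to plain additive composition, which makes the tensor term's marginal contribution easy to ablate — essentially the comparison the phrase-similarity experiments discussed above perform.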
rylqooRqK7
SNAS: stochastic neural architecture search
[ "Sirui Xie", "Hehui Zheng", "Chunxiao Liu", "Liang Lin" ]
We propose Stochastic Neural Architecture Search (SNAS), an economical end-to-end solution to Neural Architecture Search (NAS) that trains neural operation parameters and architecture distribution parameters in the same round of back-propagation, while maintaining the completeness and differentiability of the NAS pipeline. In this work, NAS is reformulated as an optimization problem on parameters of a joint distribution for the search space in a cell. To leverage the gradient information in generic differentiable loss for architecture search, a novel search gradient is proposed. We prove that this search gradient optimizes the same objective as reinforcement-learning-based NAS, but assigns credits to structural decisions more efficiently. This credit assignment is further augmented with locally decomposable reward to enforce a resource-efficient constraint. In experiments on CIFAR-10, SNAS takes fewer epochs to find a cell architecture with state-of-the-art accuracy than non-differentiable evolution-based and reinforcement-learning-based NAS, which is also transferable to ImageNet. It is also shown that child networks of SNAS can maintain the validation accuracy in searching, with which attention-based NAS requires parameter retraining to compete, exhibiting the potential to stride towards efficient NAS on big datasets.
[ "Neural Architecture Search" ]
https://openreview.net/pdf?id=rylqooRqK7
https://openreview.net/forum?id=rylqooRqK7
ICLR.cc/2019/Conference
2019
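The abstract above describes training architecture distribution parameters in the same round of back-propagation as the operation parameters; the discussion thread below attributes this to a concrete (Gumbel-softmax) relaxation of the architecture distribution. A generic sketch of that mechanism follows — this is not the released SNAS code, and the shapes and temperature are assumptions:

```python
import torch
import torch.nn.functional as F

def sample_edge_ops(alpha, temperature=1.0):
    """alpha: (n_edges, n_ops) architecture distribution parameters (logits).
    Returns relaxed one-hot weights Z over candidate ops per edge; gradients
    flow back into alpha, so operation weights and alpha can be updated in
    the same backward pass."""
    u = torch.rand_like(alpha)
    gumbel = -torch.log(-torch.log(u + 1e-9) + 1e-9)
    return F.softmax((alpha + gumbel) / temperature, dim=-1)
```

In the derivation step described below, one would then take the argmax op per edge under the learned alpha instead of sampling.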
{ "note_id": [ "LLINTR8To4L", "r1evyDwxlE", "HklcIwZyJV", "SJlIOlhYAX", "HJgkGl3FRQ", "HJegdD5FAQ", "BJle6ULzA7", "r1ewiLa-A7", "SJgtwZ2WC7", "S1ls0l2bCQ", "B1xdhCsZR7", "SJg_LRib0X", "BJeaD3s-CX", "rkxfcO6yCm", "r1gLByK8TQ", "SkxDoOiBpQ", "Skl7rA7zaQ", "S1lt6f7zpm", "Hyg3FnObpX", "BkxZlzashX", "B1g-LCSsn7", "Hkxvf8zq27", "rkeruyjYhX", "SJxvc_VdhX", "S1ZWK63j7", "BJxaZ7Kojm" ], "note_type": [ "official_comment", "meta_review", "comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "comment", "comment", "official_comment", "comment", "official_comment", "comment", "official_review", "official_review", "official_comment", "comment", "official_review", "official_comment", "comment" ], "note_created": [ 1585701999217, 1544742623001, 1543604049576, 1543254126269, 1543254022986, 1543247720198, 1542772408090, 1542735519430, 1542730081074, 1542729938987, 1542729391909, 1542729296089, 1542728804747, 1542604938469, 1541996350361, 1541941406658, 1541713466601, 1541710529214, 1541667972498, 1541292520770, 1541262920758, 1541182990520, 1541152621350, 1541060751509, 1540311288517, 1540227845018 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper648/Authors" ], [ "ICLR.cc/2019/Conference/Paper648/Area_Chair1" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper648/Authors" ], [ "ICLR.cc/2019/Conference/Paper648/Authors" ], [ "ICLR.cc/2019/Conference/Paper648/Authors" ], [ "ICLR.cc/2019/Conference/Paper648/Authors" ], [ "ICLR.cc/2019/Conference/Paper648/Authors" ], [ "ICLR.cc/2019/Conference/Paper648/Authors" ], [ "ICLR.cc/2019/Conference/Paper648/Authors" ], [ "ICLR.cc/2019/Conference/Paper648/Authors" ], [ "ICLR.cc/2019/Conference/Paper648/Authors" ], [ "ICLR.cc/2019/Conference/Paper648/Authors" ], [ "(anonymous)" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper648/Authors" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper648/Authors" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper648/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper648/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper648/Authors" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper648/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper648/Authors" ], [ "(anonymous)" ] ], "structured_content_str": [ "{\"title\": \"Code has been released.\", \"comment\": \"Thanks for your interest, we have released our implementation at https://github.com/SNAS-Series/SNAS-Series.\"}", "{\"metareview\": \"This paper provides an alternative way to enable differentiable optimization to the neural architecture search problem. Different from DARTS, SNAS reformulates the problem and employs Gumbel random variables to directly optimize the NAS objective. In addition, the resource-constrained regularization is interesting. The major cons of the paper is that the empirical results are not quite impressive, especially when compared to DARTS, in terms of both accuracy and convergence. I think this is a borderline paper but maybe good enough for acceptance.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Alternative way to differentiable NAS\"}", "{\"comment\": \"Will you be releasing code for SNAS? 
It would help fill in some of the details for how the architecture distribution parameters are trained.\", \"title\": \"Code Release?\"}", "{\"title\": \"Manuscript updated\", \"comment\": \"We have updated Table 2 to include results of DARTS with single-level optimization as reported by DARTS\\u2019s authors for fair comparison with SNAS. SNAS is single-level optimization because it simultaneously optimizes neural operation parameters and architecture distribution parameters over the same dataset. Analysis is included in Section 3.3 Results. For further details, please refer to the response to AnonReviewer1 [1].\\n\\n[1] https://openreview.net/forum?id=rylqooRqK7&noteId=HJgkGl3FRQ\"}", "{\"title\": \"Response to review (3) - Clarification on 1st-order optimization\", \"comment\": \"We are sorry that in our last response we mistook 1st-order DARTS for single-level DARTS since the latter was not reported by the authors. It is reported in the newest version of DARTS, which is also added to our updated version. The \\\"1st-order SNAS\\\" in our last response actually meant single-level SNAS, because the neural operation parameters and architecture distribution parameters are updated simultaneously. In DARTS's newest version, it is stated that single-level DARTS performs much worse than bi-level, either 1st-order or 2nd-order, which is thus much worse than SNAS. This supports our claim that SNAS is less biased.\\n\\nAnd we can also provide an interpretation for 2nd-order DARTS's comparable performance with SNAS. From our understanding, DARTS is using meta-learning to look for a resolution for the bias proved by us in a data-driven way. Though the authors cited [1], it does not justify that DARTS is optimizing the exact objective, for basically two reasons. Firstly, the connection between the sufficient condition provided in [1] and DARTS is not discussed. Secondly, even if an explicit connection could be provided, 2nd-order DARTS is still biased because it ignores the separate derivation scheme in the meta-learning loss (i.e. bi-level loss), which is proved by our experiments and single-level DARTS's unsatisfying performance. \\n\\nWe admit that the possibility of improvement with bi-level optimization exists even in less biased methods like SNAS. And the rationale is that some operations like skip connection affect the loss in the next iteration more than the one in the current iteration. When first proposed in [2], the skip connection was expected to help gradient back-propagation. That is, skip connection plays the role of a hyper-parameter for the gradient update process, the optimization of which prefers meta-learning. Unfortunately, we don't have enough time to run experiments to validate this rationale. It will be our future work.\\n\\n[1] Franceschi et al., \\\"Bilevel programming for hyperparameter optimization and meta-learning\\\". ICML 2018.\\n[2] He et al., \\\"Deep Residual Learning for Image Recognition\\\", CVPR 2016.\"}", "{\"title\": \"Manuscript updated\", \"comment\": \"We have updated Table 2 to include results from three runs of SNAS as requested by one reviewer.\"}", "{\"title\": \"Response to review\", \"comment\": \"Thank you for your review.\\n\\n1) How to use gradient information to generate child network\\nSNAS does not directly use gradient information to generate the child network. The search gradient is naturally applied to update architecture parameters, which are the parameters of the concrete distribution. 
Then in the derivation step, operations with largest probability in the concrete distribution are selected. \\n\\n2) Why can using training/testing loss as reward improve the results? \\nThis is introduced with details in Section 2.3, as well as Appendix D and E, which we believe is one of our contributions. As stated in your comment, we first prove that NAS is a task in a deterministic environment with fully delayed reward. Then a proof from [1] is introduced that TD-learning suffers from delayed bias when delayed reward exists. It is proposed and proved in [1] that Taylor decomposition of reward could resolve delayed bias because no temporal difference setting exists anymore. We then prove that the delayed reward in NAS could be decomposed and assigned to all structural decisions with gradient back-propagation, when differentiable training/testing loss is used as reward. Therefore, leveraging the proof from [1], we prove that SNAS should converge faster than ENAS, which is verified by our experiment as stated in Section 3.1. Intuitively speaking, we spot the unnecessary temporal difference setting in NAS and solve it by using training/testing loss as reward. \\n\\n[1] Arjona-Medina et al., \\\"Rudder: Return decomposition for delayed rewards\\\", arXiv 2018\"}", "{\"title\": \"Response to review (2)\", \"comment\": \"1.2) Clear win over ENAS\\nFollowing the metric defined above, SNAS's advantage over ENAS is three-fold, due to a better credit assignment mechanism and a resource constraint: \\nA. Fewer epochs to converge to higher accuracy in searching;\\nB. Automated sparse network generation;\\nC. Slightly better accuracy with 1/3 fewer parameters in child networks.\"}", "{\"title\": \"Response to questions\", \"comment\": \"Thank you for your questions.\\n\\n1) Reported DARTS accuracy lower than original paper\\nThe purpose of providing this reproduced result is to evaluate the searching result from the last subsection (Section 3.1). Sorry for the confusion it caused. We have removed it in the updated version. In our search we only achieve (3.02+/-0.14)% evaluation accuracy, which is still a bit lower than the accuracy reported in the original paper. We are sure that we ran experiments with correct hyper-parameters. But as mentioned in your Q5, in a stochastic searching task, sometimes one just needs a bit of luck due to the stochastic nature of the objective. We are willing to reveal random seeds upon request. \\n\\n2) Empirical study on rnn\\nWe didn't have enough time to run extensive experiments on rnn. But we believe the theory proposed in our work could be sufficiently verified with our extensive experiments on cnn, whose extension to rnn would be straightforward. \\n\\n3) An explanation on Figure 4.\\nIn the updated version, Figure 4 is updated to show stats of entropy of softmax weights at all edges in the searching result. With a lower entropy in the learnt parent graphs, SNAS is more certain on the structural decision. \\n\\n4.1) Training setting\\nExperiments for ENAS were run with the default setting. And we noticed that the parent network in DARTS is a little bit different from ENAS, though parameter size is quite close. The reported results for SNAS were run with the setting of DARTS for fair comparison, given that for some reason DARTS could not fit into one GPU card with ENAS's setting. But we have also run the SNAS experiment with ENAS's setting, whose searching curve has only negligible difference from the reported one. All experiments were run on the same training and testing set. 
\\n\\n4.2) Correlation of training accuracy and child network final accuracy\\nFor SNAS, the correlation coefficient between training accuracy and child network final accuracy is 0.79. We didn't run and evaluate DARTS for statistically sufficient number of times to reach any claim of the correlation. Could you please provide some justification for that? As if this claim was true, it would help validate our claim that the manually designed child network derivation scheme is biased... \\n\\n5) Report results from more runs\\nWe believe the reported result can support our claims. Nonetheless, we are evaluating more child networks to provide this variance. Thank you for your suggestion!\"}", "{\"title\": \"Further on parameter size\", \"comment\": \"Thank you for your explanation.\\n\\nThe experiments of SNAS are conducted only on convolutional cells due to limited time. Sorry we might not be able to answer questions regarding the size of the recurrent cells.\"}", "{\"title\": \"Response to review\", \"comment\": \"Thank you very much for your positive comments and detailed summary!\\n\\nWe have included experiments to show how the effect of ZERO op differentiates SNAS from DARTS. Please kindly have a check.\"}", "{\"title\": \"Response to review (1)\", \"comment\": \"Thank you very much for your review and questions!\\n\\n1) Clear win over DARTS:\\nAs a NAS task, it is believed that the performance of a framework is evaluated with i) the efficiency and automation in searching process and ii) the accuracy and complexity of searching result, i.e. child networks. In this metric, SNAS's advantage over DARTS is three-fold, due to less-biased searching objective and the resource constraint: \\n\\nA. Less computing resources for the whole searching pipeline\\nChild network derived from SNAS without any fine-tuning could maintain the accuracy, thus a) during searching, the accuracy in SNAS could reflect the performance of child network. DARTS, on the contrast, has to retrain the network for 100 epochs as stated in the caption of figure 3 to track the actual searching progress; b) after searching, DARTS has to retrain the child network even if there is no extension on cell number of channel number. SNAS, on the contrast, does not have this requirement. A retraining is only needed when the child network is extended, which in our work is basically for fair comparison. All these retraining will take much longer time when NAS is directly applied to a large dataset. \\n\\nB. Automated sparse network generation\\nThough DARTS takes ZERO op, which represents deleting the edge, into account in the searching process, it is omitted in child network operation selection as discovered by one of our reviewers in this comment [1]. (This discovery is very interesting, as this reviewer discovered that ZERO tends to be the op with largest weight in DARTS. That is, in DARTS the \\\"soft-2nd-max\\\" is chosen.) The approach to delete edge is manually designed as \\\"to choose the top-k incoming edges for each node\\\". In SNAS, to keep or delete an edge is automatically learnt. That is to say, the ZERO op is acting its supposed job to engender sparsity. In our updated version, experiment showed that with an aggressive resource constraint, SNAS discovers architecture whose reduction cell has only two edges and two nodes but comparable accuracy with 1st-order DARTS in CIFAR-10, posing a question for the validity or optimality of manually designed scheme in DARTS. \\n\\nC. 
Comparable accuracy with less resource in child networks\\nIn our updated version, we show that with a moderate resource constraint which plays the role of a regularizer, SNAS discovers architecture with slightly better accuracy and fewer parameters comparing to 1st-order DARTS, which is also comparable to 2nd-order DARTS. Note that in this paper we only show result of 1st-order SNAS due to limited time and extensive experiment required, though the 2nd-order extension is straight-forward [2]. As shown in DARTS, as well as [3], 2nd-order empirically brings better optimality, a fair comparison would be with 1st-order DARTS. Actually in 1st-order SNAS, an accuracy comparable with 1st-order DARTS could be achieved with 1/3 fewer parameters, when an aggressive constraint is applied. \\n\\nAs for transferring to ImageNet, there is no theoretical justification in the literature to the best of our knowledge, we provide it mainly for a fair comparison with DARTS. Our next step is to try a direct search on ImageNet leveraging that SNAS does not need retraining on the searching result. \\n\\n\\n2) The effect of fine-tuning in evaluating child networks directly derived from DARTS\\nIn our empirical study, a fine-tuning of the derived child networks can improve its performance, but could not remedy the gap completely after 100 epochs. (100 epoch is the plateau of fine-tuning, and also a fair comparison with SNAS.) And there seems always to be a small gap (-1.0+/-0.7)% between the accuracy after fine-tuning and at the end of searching.\\n\\nMore importantly, the 'gap' we want to discuss here is between the performance of a derived child network and the optimization objective in searching. As shown in Figure 3 in our paper, the optimization objective in searching is already converged to some optimum before this derivation, for both architecture parameters and operation parameters. Theoretically speaking, to use this parent network would be a justified result for the optimization problem. But with the absence of a guarantee that softmax weights will become discrete in the end, it would become an attention learning task, rather than NAS task. Though there exist methods like designing prior or extra learning objective to autonomously encourage one-hot-ness, a scheme is manually designed to delete a large portion of operations even though their weights are not 0. A natural question to ask is, why in this case the architecture parameter is still the optimal, even though the performance of child networks could be boosted with some fine-tuning. \\n\\n\\n\\n[1] https://openreview.net/forum?id=rylqooRqK7&noteId=rkeruyjYhX\\n[2] Finn et al., \\\"Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks\\\", ICML 2017.\\n[3] https://openreview.net/forum?id=rylqooRqK7&noteId=BJxaZ7Kojm\"}", "{\"title\": \"Manuscript updated\", \"comment\": \"We thank all reviewers for your recommendation, comments and expressing your concerns. We have updated the manuscript in Section 1, Section 3.1 and Section 3.2, taking your feedbacks into account. Here we provide a summary of these updates:\\n\\n1) We have tried extensive sweeps on the constraint hyperparameter \\\\eta. Previously we only tried \\\\eta that lies at the margin of appearance of ZERO op in the child network. The new discovery is that with a larger \\\\eta, the regularizing effect of resource constraint becomes obvious. A pair of new cells was discovered on CIFAR-10, which achieves better accuracy than 1st-order DARTS, as well as ENAS. 
Its accuracy is also on par with 2nd-order DARTS, with fewer parameters. In the updated version, we report its accuracy and parameter size, with the architecture attached in Appendix H. \\n\\n2) When a more aggressive constraint is applied, more edges are dropped in SNAS. A new figure is added to exhibit SNAS's capability of discovering sparse structures that ENAS and DARTS are not able to discover. With 1/3 fewer parameters, it achieves on-par accuracy with 1st-order DARTS. \\n\\n3) We updated Figure 4, as ZERO was not excluded in the previous version. As discovered by one reviewer, it is omitted during operation selection for the child graph in the code released by DARTS's authors. Besides, since using variance to measure how much the architecture weights differentiate from each other might be confusing, we updated the figure to show the entropy of the softmax at every edge, which we hope is more self-explanatory. Basically, the conclusion remains the same: in SNAS the learnt architecture distribution is more certain about the structural decisions.\"}", "{\"comment\": \"Hi authors.\\nThis is an interesting work, but I feel that some experimental comparisons are a little bit unfair, as follows:\\n\\n1. In Table 2, the authors report that DARTS* has a test error of 3.15%, which is claimed as reproduced by the released code. I also ran their code and can obtain a similar performance of 2.86%. Reporting 3.15% might be misleading to other readers, and may make other researchers wrongly cite the results of DARTS in following papers.\\n\\n2. DARTS has reported results on both CNNs and RNNs. Why does SNAS not report RNN results?\\n\\n3. Would you mind giving more explanation about Figure 4?\\n\\n4. In Figure 3, how did you run ENAS? In addition, do these three methods use the same training and validation set? Based on the GitHub issue of DARTS, the validation accuracy of DARTS does not have an explicit connection with the performance of the discovered model. How about the connection between the validation accuracy of SNAS and the final discovered model of SNAS?\\n\\n5. For NAS approaches, there is usually some variance in the performance of the final discovered model. Would you mind reporting the results of three runs of SNAS?\", \"title\": \"Some questions about the experiments.\"}", "{\"comment\": \"Thanks for your reply!! When I ran the code from the GitHub of the DARTS paper, the parameter size was about 7M when a recurrent cell with 7 intermediate nodes was evaluated under the setting of 20 layers and 36 initial channels. Thus, I thought I should have reduced the number of nodes or the number of layers, or something similar...\", \"title\": \"20 Layers + 36 initial channels -> Too many parameters\"}", "{\"title\": \"On parameter size\", \"comment\": \"Thank you for your question.\\n\\nWe have implemented a counting function to reproduce the result of DARTS and used it to count the parameter size of our network.\\n\\nDo you mind showing how you reached the claim that the size should be much larger?\"}", "{\"comment\": \"Thank you for your quick and kind reply!\\nTo my knowledge, for the CIFAR10 evaluation, the number of layers was set to 20 (reported in the paper), the number of nodes within a cell was set to 7 (adopted from search) and the number of initial channels was set to 36 (in the comment above). \\nThen, the number of parameters should be much larger than 3.3M, I think... Thus, I think if the number of initial channels was set to 36, the number of layers should have been set to 8, as in search mode, to match the number of parameters reported in the SOTA comparison table... Is that wrong?\", \"title\": \"More clarification\"}", "{\"title\": \"On initial channel number\", \"comment\": \"Thank you for the question.\\n\\nFor fair comparison with DARTS, we employed the same set of hyperparameters as specified in the code publicly released by its authors. I believe the number of initial channels in your question refers to the channel number used for the first cell in the network according to the code. During the CIFAR10 evaluation, the number of initial channels is set to 36 rather than 16. This will lead to an increase in the number of parameters.\\n\\nWe will add more hyperparameter details in the revised version. Hope this answers your question, and thanks again for the comment.\"}", "{\"comment\": \"Hi authors!\\nI am so impressed by your paper because it feels like a connection between NAS and DARTS.\\nHowever, I am curious about some details of the CIFAR10 evaluation.\\nWithin a cell, there are 7 nodes, right, as in DARTS?\\nThen, how is the number of initial channels set? Does the same number of initial channels, 16, apply as in search mode?\\nIf so, it seems to me that the number of parameters reported is quite large.\\nCould you specify the number of initial channels plus the other hyperparameters to clarify the evaluation setting?\\n\\nAnyway, thanks for the nice paper!\", \"title\": \"Derivation of child network for SOTA comparison\"}", "{\"title\": \"Novel approach that addresses some shortcomings of the previous NAS techniques.\", \"review\": \"This paper improves upon ENAS and DARTS by taking a differentiable approach to NAS and optimizing the objective across the distribution of child graphs. This technique allows for end-to-end architecture search while constraining resource usage and allowing parameter sharing by generating effective reusable child graphs.\\n\\nSNAS employs Gumbel random variables, which gives it better gradients and makes learning more robust compared to ENAS. The use of Gumbel variables also allows SNAS to directly optimize the NAS objective, which is an advantage over DARTS.\\n\\nThe resource constraint regularization is interesting. Regularizing on the parameters that describe the architecture can help constrain resource usage during the forward pass. \\n\\nThe proposed method is novel, but the main concern here is that there is no clear win over existing techniques in terms of performance. I can't see anywhere in the tables where you demonstrate a clear improvement over DARTS or ENAS.\\n\\nFurthermore, in your child network evaluation with CIFAR-10, you mention that the comparison is without fine-tuning. Do you think this might be contributing to the performance gap in DARTS?\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Official Review\", \"review\": \"Summary:\\nThis paper proposes Stochastic Neural Architecture Search (SNAS), a method to automatically and efficiently search for neural architectures. It is built upon 2 existing works on these topics, namely ENAS (Pham et al 2018) and DARTS (Liu et al 2018).\\n\\nSNAS provides nice theory and explanations of the gradient computations, and unites the strengths and avoids the weaknesses of ENAS and DARTS. There are many details in the paper, including the Appendix.
The idea is as follows:\\n+------------+---------------------+-------------------------+\\n| Method | Differentiable | Directly Optimize |\\n| | | NAS reward |\\n+------------+---------------------+-------------------------+\\n| ENAS | No | Yes |\\n| DARTS | Yes | No |\\n| SNAS | Yes | Yes |\\n+------------+---------------------+-------------------------+\\nSNAS inherits the idea of ENAS and DARTS by superpositioning all possible architectures into a Directed Acyclic Graph (DAG), effectively sharing the weights among all architectures. However, SNAS improves over ENAS and DARTS as follows (Section 2.2):\\n\\n1. SNAS improves over ENAS in that it allows independent sampling at edges in the shared DAG, leading to a more tractable gradient at the edges of the DAG, which in turn allows more tractable Monte Carlo estimation of the gradients with respect to the architectural parameters.\\n\\n2. While DARTS also has property (1), DARTS implements this by computing the expected value at each node in the DAG, with respect to the joint distribution of the input edges and the operations. This makes DARTS not optimize the direct NAS objective. SNAS, thanks to its smart manipulation of architectural gradients using Gumbel variables, still optimizes the same objective as NAS and ENAS, but has smoother gradients.\\n\\nExperimental results in the paper show that SNAS finds architectures on CIFAR-10 that are comparable to those found by ENAS and DARTS, using a reasonable amount of computing resources. These architectures can also be transferred to learn competent models on ImageNet, like those of DARTS. Furthermore, experimental observations (Figure 3) are consistent with the theory above, that is:\\n\\n1. The search process of SNAS is more stable than that of ENAS (as SNAS samples with a smaller variance).\\n2. Architectures found by SNAS perform better than those of DARTS, as SNAS searches directly for the NAS reward of the sampled models.\\n\\nStrengths:\\n1. SNAS unites the strengths and avoids the weaknesses of ENAS and DARTS\\n\\n2. SNAS provides a nice theory, which is verified through their experimental results.\\n\\nWeaknesses:\\nI don’t really have any complaints about this paper. Some parts of the presentation could have been improved, e.g. the discussion on the ZERO operation in other comments should have been included.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"ZERO operation is just like other operations if no complexity constraint is added\", \"comment\": \"Thank you for your interesting discovery and the questions.\\n\\n0) Is the derivation code for SNAS the same as DARTS's? \\nNo. We have implemented our own derivation method to replace the one provided by DARTS, as SNAS uses a fundamentally different one. But in our replication of DARTS, we ran the publicly released implementation without checking the code for derivation. We are trying to replicate DARTS's result again, taking your claim into consideration. \\n\\n1) Is the logit of ZERO the largest in most edges of the normal cell? \\nAs introduced in Section 2.4 of our paper, SNAS employs a complexity loss to encourage sparsity in the child network. This is different from ENAS and DARTS, which directly select two input edges for each node.
With a relatively large hyperparameter for this complexity loss, the logit of ZERO dominates some of the edges, though the child network still has other non-ZERO edges to keep it connected. If no complexity loss is added, the child network tends to retain the complete topology, which is actually one of our motivations for introducing the complexity loss. \\n\\n2) Reason for this discrepancy \\nAssuming that your result is valid, it is explained by DARTS's authors to be the underdetermined contribution and rescaling effect of ZERO in the mixed op. We have the following two hypotheses for the discrepancy between SNAS and DARTS, which would be added to the revised version if we can prove them with mathematical deduction: \\na. Different from the softmax attention in DARTS, SNAS employs Gumbel-Softmax, whose mechanism involves Gumbel random variables. With equivalent logits and temperature, the random variable vector from Gumbel-Softmax is possibly more one-hot than the deterministic softmax attention. A more discrete network could expose ZERO's inability to achieve a smaller loss, so its logit would not be boosted. \\nb. The gradients back-propagated through the deterministic softmax and through Gumbel-Softmax are different. As shown in Section 2.3 and Appendix C of our paper, there is stochasticity (the random variables Z) involved in the search gradients of SNAS. Given the special trait of ZERO that O(x)=0, it may be a local optimum or a gradient black hole for the deterministic softmax, which is probably escaped by Gumbel-Softmax thanks to the stochasticity.\"}", "{\"comment\": \"I have tried to run the released code of DARTS and found that in the code, the operation ZERO is omitted during the derivation process. I also checked the logit of the ZERO operation learned by DARTS, and found that in the normal cell, it is the largest in most edges.\\n\\nIs the same derivation code used by SNAS, thus omitting the ZERO operation in the experiments? If not, could you please give an explanation of:\\n1) whether the logit of ZERO is the largest in most edges of the normal cell learned by SNAS?\\n2) if the result is different from DARTS, why there is this difference?\", \"title\": \"On ZERO operation\"}", "{\"title\": \"An incremental work on NAS with good experiment results.\", \"review\": \"This work refines the NAS method for efficient neural architecture search. The paper brings new methods for gradient/reward updates and credit assignment.\\n\\nPros:\\n1. An improvement on the gradient calculation and reward back-propagation mechanism\\n2. Good experiment results and fair comparisons\\n\\nCons:\\n1. Missing details on how to use the gradient information to generate child network structures. In eq. 2, multiplying each one-hot random variable Zij with each edge (i, j) in the DAG can obtain a child graph whose intermediate nodes are xj. However, it is still unclear how to generate the child graph. More details on generating the child network based on gradient information are expected. \\n2. In SNAS, P(z) is assumed fully factorizable. Factors are parameterized with alpha and learnt along with the operation parameters theta. The factorization of p(Z) is based on the observation that NAS is a task with fully delayed rewards in a deterministic environment. That is, the feedback signal is only ready after the whole episode is done, and all state transition distributions are delta functions. In eq. 3, the authors use the training/testing loss directly as the reward, while the previous method uses a constant reward from validation accuracy. It is unclear why using the training/testing loss improves the results.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"On child network derivation\", \"comment\": \"Thank you for the question.\\n\\nThe derivation method for DARTS is stated in Section 2.4 of their paper; details can also be found in the implementation publicly released by the authors. In our paper we paraphrase it to give an explanation of the drop in accuracy after this derivation. As stated in your comments, \\\"these removed operations and edges make up a very large percentage of the softmax score\\\". Through this comparison we want to emphasize SNAS's consistency in child network derivation, because it explicitly takes the derivation, i.e. sampling, into account in the searching loss. \\n\\nIn this comparison, child networks are directly tested after the derivation for both SNAS and our replication of DARTS. No re-scaling or any other extra transformation is involved. But we do find that with the unrolled option, DARTS's result (54.66%) drops less than the non-unrolled one (34.37%). SNAS, in contrast, shows a slightly better result (90.67%) after derivation than at the end of searching (88.54%), probably because the latter is a Monte Carlo estimate of an expectation. \\n\\nWe will add more details to the revised version; thanks again for this comment.\"}", "{\"comment\": \"It is unclear how the authors obtain the child network for DARTS. As mentioned in the paper, the architecture derivation step in DARTS consists of two steps: (1) remove operations with relatively weak attention and the zero operation; (2) remove relatively ambiguous edges. As we know, these removed operations and edges make up a very large percentage of the softmax score. After the removal, the scale of the values of the nodes (output feature maps) drops significantly. Do we need to re-scale the values of the output feature maps for compensation? I cannot reproduce the result of 54.66% in Table 1.\", \"title\": \"Missing detail\"}" ] }
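The SNAS record above repeatedly discusses three mechanisms: sampling a relaxed one-hot decision Z per edge from the concrete (Gumbel-Softmax) distribution, back-propagating the training loss jointly into the operation weights and the architecture logits (the "loss as reward" credit assignment), and deriving the child network by keeping each edge's most probable operation, with ZERO pruning the edge. The following minimal PyTorch sketch is our illustration of those mechanisms only — the logits, operations and single-edge setup are assumptions, not the authors' released code.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
alpha = torch.zeros(5, requires_grad=True)       # architecture logits for one edge; index 0 = ZERO
ops = torch.nn.ModuleList([                      # candidate non-ZERO operations O_k
    torch.nn.Conv2d(8, 8, 3, padding=1),
    torch.nn.Conv2d(8, 8, 5, padding=2),
    torch.nn.Identity(),
    torch.nn.AvgPool2d(3, stride=1, padding=1),
])

def edge_forward(x, z):
    out = z[0] * torch.zeros_like(x)             # ZERO outputs 0, so z[0] contributes no gradient
    for k, op in enumerate(ops):
        out = out + z[k + 1] * op(x)
    return out

# Search step: a Monte Carlo sample of the differentiable objective.
x = torch.randn(2, 8, 16, 16)
z = F.gumbel_softmax(alpha, tau=1.0)             # relaxed one-hot Z ~ concrete(alpha)
loss = edge_forward(x, z).pow(2).mean()          # stand-in for the training loss used as "reward"
loss.backward()                                  # gradients reach the ops' weights AND alpha
print(alpha.grad)

# Derivation step: keep the most probable decision; choosing ZERO prunes the edge.
choice = int(alpha.argmax())
print("edge pruned" if choice == 0 else f"keep op {choice - 1}")
```

Note how the ZERO branch contributes no gradient through the edge output, which connects to the thread's discussion of ZERO as a potential gradient black hole.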
HJGciiR5Y7
Latent Convolutional Models
[ "ShahRukh Athar", "Evgeny Burnaev", "Victor Lempitsky" ]
We present a new latent model of natural images that can be learned on large-scale datasets. The learning process provides a latent embedding for every image in the training dataset, as well as a deep convolutional network that maps the latent space to the image space. After training, the new model provides a strong and universal image prior for a variety of image restoration tasks such as large-hole inpainting, superresolution, and colorization. To model high-resolution natural images, our approach uses latent spaces of very high dimensionality (one to two orders of magnitude higher than previous latent image models). To tackle this high dimensionality, we use latent spaces with a special manifold structure (convolutional manifolds) parameterized by a ConvNet of a certain architecture. In the experiments, we compare the learned latent models with latent models learned by autoencoders, advanced variants of generative adversarial networks, and a strong baseline system using simpler parameterization of the latent space. Our model outperforms the competing approaches over a range of restoration tasks.
[ "latent models", "convolutional networks", "unsupervised learning", "deep learning", "modeling natural images", "image restoration" ]
https://openreview.net/pdf?id=HJGciiR5Y7
https://openreview.net/forum?id=HJGciiR5Y7
ICLR.cc/2019/Conference
2019
{ "note_id": [ "S1lihjEWgE", "B1xopr2HCQ", "S1x8opbLaX", "BJeIwpbITQ", "Skls46b8TQ", "B1et32fpnX", "r1gXKJZ6nX", "r1xLI0153X" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544797106546, 1542993346634, 1541967261909, 1541967197986, 1541967155472, 1541381297164, 1541373818795, 1541172814303 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper647/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper647/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper647/Authors" ], [ "ICLR.cc/2019/Conference/Paper647/Authors" ], [ "ICLR.cc/2019/Conference/Paper647/Authors" ], [ "ICLR.cc/2019/Conference/Paper647/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper647/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper647/AnonReviewer3" ] ], "structured_content_str": [ "{\"metareview\": \"The reviewers are in general impressed by the results and like the idea but they also express some uncertainty about how the proposed actually is set up. The authors have made a good attempt to address the reviewers' concerns.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Latent model for images - presents convincing results on image impainting+\"}", "{\"title\": \"Response to the Author comment\", \"comment\": \"I agree the comment from the author mostly for my concerns.\\n\\n(1) From the interpolated images, a point in the latent space seems to be matched to corresponding image in the image distribution, which means that it does not simply memorizes the images.\\n\\n(2) By seeing the figure 8, I think this work can be tested in image generation task, either. In final version, I strongly want to see the Pure Image generation result. \\n\\nBased on the comment, I changed my previous rating.\"}", "{\"title\": \"Response to R3\", \"comment\": \"Thank you for the careful review. Fortunately, your main concern though very grave is due to a very simple misunderstanding. We hope that once the misunderstanding is resolved, the rating may be reconsidered.\\n\\n\\\"Equation 2 in the paper seems that it just fit the generator parameter theta to map the phi_i and x_i and memorize the mapping between the training images and the given latent convolutional variables. \\nIf the proposed algorithm just memorizes the training image and map them into given the latent convolution, the result cannot justify the proposal that the author proposes a new latent space.\\\"\\n\\nWe want to stress that all evaluations and qualitative examples are produced on the _hold-out_ test sets that were not in any way used to train the parameters theta of the generator network. So, we can very confidently say that the reason why the approach works is not memorization of the training set within theta. \\n\\n\\\"I want to see the (latent space) interpolation test for the proposed latent convolutional space.\\\"\\n\\nWe have added latent space interpolations to the appendix G (Figure 12) in the end of the paper. These interpolations were again done on a _hold-out_ set of images. The examples were ``cherry-picked'' for distinctiveness. In more details, in our (biased) view, LCM were always at least as good as other methods, but in some cases, e.g. for pairs of aligned perfectly frontal faces all interpolations look more or less the same, so we picked cases with clear difference between methods. 
Thank you for suggesting this comparison; it nicely illustrates the effect of the convolutional manifold constraint. If possible, please use zoom-in/a large screen to view these results.\\n\\n\\\"..the interpolated point (phi_in, s_in) between two points: (phi_1, s_1) and (phi_2, s_2)..\\\"\\n\\nActually, the s vector is always fixed to some random noise value. I.e. it is not instance specific and is not modified by learning (one can add optimization over s, but in practice this does not change much).\"}", "{\"title\": \"Response to R1\", \"comment\": \"Thank you very much for the review.\\nWe would like to point out that there is no encoder network in our approach (although one can possibly discuss ways to add one). Also, note that our contribution is not that we merely increase the resolution of the latent space, but that we suggest a specific regularization of the latent space (the convolutional manifold) that significantly improves the generalizability of the resulting latent model.\\n\\n\\\"Are test images included in the training of the convolutional networks?\\\"\\nAll results (qualitative, quantitative, user study) are performed on hold-out sets that were not used to train the parameters of the decoder (i.e. theta). The only exception is the progressive GAN baseline, for which there is a mix of training and test sets (since for the comparison we just reuse author-provided models trained on complete sets). This gives an advantage to the pGAN baseline (admittedly not a very big one, since GANs struggle to fit the training sets). To reiterate, all results of OUR method (LCM) are computed strictly on the hold-out test sets.\\n\\nTo train our model we use the Laplacian-L1 loss along with an MSE term with a weight of 1.0. We noticed that the MSE term speeds up convergence without affecting the results by much. The optimization is carried out using stochastic gradient descent with a learning rate of 1.0. We note that the code for the paper and the experiments will be released for reproducibility.\"}", "{\"title\": \"Response to R2\", \"comment\": \"Thank you for the careful review. Here are the responses.\\n\\n\\\"Did you try other standard restoration tasks, such as image denoising or deblurring? If not, do you think they would work equally well?\\\"\\nWe have tried denoising (with synthetic noise), where the relative performance is similar. We have not tried deblurring, although we expect the relative performance to be similar. \\n\\n\\\"- A limitation (at least as presented) is that the corruption process has to be known analytically (as a likelihood objective) and must be differentiable for gradient-based inference.\\\"\\nWhile technically we do assume that the corruption process is known, it is still possible to apply our approach with a simplified (inaccurate) likelihood function. To show that, we have added Appendix H (Figure 13), which shows how restoration from heavy JPEG artifacts can be done using a simple quadratic likelihood functional. The second limitation (the need for optimization at test time) is indeed important. We can partially remedy it by adding an encoder that would take a corrupted image and output a good starting point in the latent space. We have added a discussion/acknowledgement of these limitations to the end of the conclusion section.\\n\\n\\\"- How dependent is the restoration result with respect to the initialization? For example, when starting gradient descent with the degraded image vs. a random image.\\\"\\nOur approach cannot start with the degraded image, since we do not know the corresponding latent space initialization. So we always start with a random latent vector. Generally, we found that initializing the latent networks using the same parameters as when the training started worked the best (so we always use the same random vector). Different starting points lead to results with very slightly worse visual quality (the perceptual loss increases by about 0.0006), which are still better than those of competing methods. Note that we experimented with different initializations for all the models and chose the one that worked the best for each (to give the baselines a fair treatment).\\n\\n\\\"Roughly, how many iterations and runtime is needed for inference?\\\"\\nFor a batch of 50 images, it takes about 1000-2000 iterations, which takes between 6-12 minutes. Tasks like super-resolution can be done in about 1000 iterations or so, and inpainting can take up to 1500-2000 iterations (see the illustrative sketch after this record).\\n\\n\\\"- Did you try different optimizers, such as L-BFGS?\\\"\\nYes, we have tried L-BFGS for inference. We had to use a lower learning rate and were able to produce results similar to those of SGD. Generally, L-BFGS did not offer any significant advantages over SGD.\"}", "{\"title\": \"Latent Convolutional Models\", \"review\": \"This paper proposes to increase the latent space dimensionality of images by stacking the latent representation vectors as a tensor. Then convolutional decoder and encoder networks are used to map the original data to the latent space and vice versa. The learned latent representations can then be used in a universal framework for multiple tasks such as image inpainting, superresolution and colorization.\\n\\nThe idea of increasing the dimensionality of the latent space, although not sophisticated, seems to perform very well. Indeed, in some of the qualitative experiments, the results are surprising. The authors should clarify in more detail how the training procedure is performed. Are test images included in the training of the convolutional networks?\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"universal image prior with compelling results, but more limited than specialized restoration nets\", \"review\": \"# Summary\\nThe paper proposes to embed natural images in a latent convolutional space of high dimensionality to obtain a universal image prior. Concretely, each image is embedded as a custom parameter vector of a CNN, which turns random noise into the input of a universal generator network to restore the image in pixel space.\\nInference for image restoration is performed by minimizing the energy of a likelihood objective while constraining the latent representation of the restored image to be part of the learned latent space. Experiments for inpainting, super-resolution, and colorization are performed to evaluate the proposed method.\\n\\n# Positive\\nAs mentioned in the paper, I agree that the idea of learning a universal image prior is appealing, since it can be applied to (m)any image restoration tasks without adjustment.\\nI am not very familiar with the related work, but if I understood correctly, the paper seems to combine deep latent modeling (GLO, Bojanowski et al., 2018) and deep image priors (Ulyanov et al., 2018). The experiments show good results which qualitatively appear better than those of related methods.
A user study also shows that people mostly prefer the results of the proposed method.\\nDid you try other standard restoration tasks, such as image denoising or deblurring? If not, do you think they would work equally well?\\n\\n# Limitations\\nWhile I agree that a universal image prior is valuable, the paper should (briefly) mention what the disadvantages of the proposed approach are:\\n- A limitation (at least as presented) is that the corruption process has to be known analytically (as a likelihood objective) and must be differentiable for gradient-based inference.\\n- Furthermore, the disadvantage of the universal prior as presented in the paper is that restoring an image requires optimization (e.g. gradient descent). In contrast, corruption-specific neural nets typically just need a forward pass to restore the image and are thus easier and faster to use.\\n\\n# Restoration inference\\n- How dependent is the restoration result with respect to the initialization? For example, when starting gradient descent with the degraded image vs. a random image.\\n- Roughly, how many iterations and runtime is needed for inference?\\n- Did you try different optimizers, such as L-BFGS?\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}", "{\"title\": \"[Review] Latent Convolutional Models\", \"review\": \"[Summary]\\n- This work proposes a new complex latent space described by a convolutional manifold, and this manifold can map the image in a more robust manner (when some parts of the image are to be restored).\\n\\n[Pros]\\n- The results show that the latent variable mapped to the image represents the image well, and it will be helpful for the image restoration problem.\\n- It seems novel to adapt the idea of DIP for defining a complex latent space.\\n\\n[Cons]\\n- The main concern is that there is no guarantee that the defined latent space is continuous. \\nThis means that it is difficult to judge whether the interpolated point (phi_in, s_in) between two points, (phi_1, s_1) and (phi_2, s_2), will be matched to the image distribution. \\nEquation 2 in the paper seems to just fit the generator parameters theta to map the phi_i to the x_i and memorize the mapping between the training images and the given latent convolutional variables. \\nIf the proposed algorithm just memorizes the training images and maps them onto the given latent convolutions, the result cannot justify the proposal that the authors propose a new latent space.\\n\\n[Summary]\\n- This work proposes an interesting idea for defining a complex latent space, but there is a concern that this work just memorizes the mapping between the training images and the latent convolutional parameters.\\n- I want to see the (latent space) interpolation test for the proposed latent convolutional space. If the authors provide a convincing explanation of the problem, I would consider changing the rating.\\n\\n--------------------------\\nSee the additional comment for the changed rating\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
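The inference procedure discussed in the R2 Q&A above (random latent initialization, a fixed noise input s, plain SGD with learning rate 1.0 for roughly 1000-2000 iterations against a known corruption operator) can be summarized in a short PyTorch sketch. This is our reconstruction for illustration; the function names and tensor shapes are assumptions, and the simple L1 data term stands in for the paper's actual Laplacian-L1 plus MSE objective.

```python
import torch

def restore(g_theta, f_phi, degrade, x_corrupt, iters=1500, lr=1.0):
    """Restore an image by fitting a per-image latent ConvNet f_phi with SGD."""
    for p in g_theta.parameters():               # the generator is pretrained and kept frozen
        p.requires_grad_(False)
    s = torch.randn(1, 16, 8, 8)                 # fixed random noise input, never optimized
    opt = torch.optim.SGD(f_phi.parameters(), lr=lr)
    for _ in range(iters):
        x_hat = g_theta(f_phi(s))                # latent convolutional manifold -> image space
        loss = (degrade(x_hat) - x_corrupt).abs().mean()  # data term for the known corruption
        opt.zero_grad()
        loss.backward()
        opt.step()
    return g_theta(f_phi(s)).detach()
```

For super-resolution, `degrade` would be a differentiable downsampler; for inpainting, a mask; for colorization, a color-to-gray projection.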
B1x9siCcYQ
SENSE: SEMANTICALLY ENHANCED NODE SEQUENCE EMBEDDING
[ "Swati Rallapalli", "Liang Ma", "Mudhakar Srivatsa", "Ananthram Swami", "Heesung Kwon", "Graham Bent", "Christopher Simpkin" ]
Effectively capturing graph node sequences in the form of vector embeddings is critical to many applications. We achieve this by (i) first learning vector embeddings of single graph nodes and (ii) then composing them to compactly represent node sequences. Specifically, we propose SENSE-S (Semantically Enhanced Node Sequence Embedding - for Single nodes), a skip-gram based novel embedding mechanism, for single graph nodes that co-learns graph structure as well as their textual descriptions. We demonstrate that SENSE-S vectors increase the accuracy of multi-label classification tasks by up to 50% and link-prediction tasks by up to 78% under a variety of scenarios using real datasets. Based on SENSE-S, we next propose generic SENSE to compute composite vectors that represent a sequence of nodes, where preserving the node order is important. We prove that this approach is efficient in embedding node sequences, and our experiments on real data confirm its high accuracy in node order decoding.
[ "Semantic", "Graph", "Sequence", "Embeddings" ]
https://openreview.net/pdf?id=B1x9siCcYQ
https://openreview.net/forum?id=B1x9siCcYQ
ICLR.cc/2019/Conference
2019
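Before the reviews, a toy sketch may make the abstract's co-learning idea concrete. This is one plausible design we assume for illustration (not taken from the paper): a shared node embedding trained with two skip-gram-style losses, one over graph-neighborhood targets and one over the words of the node's textual description.

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes, n_words, d = 100, 50, 16
node_emb = 0.01 * rng.normal(size=(n_nodes, d))   # shared input embeddings being learned
ctx_node = 0.01 * rng.normal(size=(n_nodes, d))   # output embeddings for node contexts
ctx_word = 0.01 * rng.normal(size=(n_words, d))   # output embeddings for word contexts

def sgd_step(v, targets, ctx, lr=0.1):
    """One full-softmax skip-gram step pulling node v toward its context targets."""
    logits = ctx @ node_emb[v]
    p = np.exp(logits - logits.max())
    p /= p.sum()
    grad = ctx.T @ p - ctx[targets].mean(axis=0)  # gradient of the mean cross-entropy
    node_emb[v] -= lr * grad

# For node v=3: the graph loss uses random-walk neighbours, the text loss uses description words.
sgd_step(v=3, targets=[7, 12], ctx=ctx_node)      # skip-gram over the graph neighbourhood
sgd_step(v=3, targets=[5, 9, 20], ctx=ctx_word)   # skip-gram over the textual description
```

Real systems would use negative sampling instead of the full softmax and would also update the context embeddings; both are omitted here for brevity.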
{ "note_id": [ "HyeIFIidgE", "HJl2qAoFCQ", "HJxrXRjFCQ", "SygHt5jKC7", "S1egN7wGam", "S1lqXr9cnm", "r11pKi1i7" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1545283197839, 1543253651592, 1543253532806, 1543252605352, 1541727015753, 1541215521763, 1539451318515 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper646/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper646/Authors" ], [ "ICLR.cc/2019/Conference/Paper646/Authors" ], [ "ICLR.cc/2019/Conference/Paper646/Authors" ], [ "ICLR.cc/2019/Conference/Paper646/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper646/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper646/AnonReviewer1" ] ], "structured_content_str": [ "{\"metareview\": \"The paper can also improved thorough a more thorough evaluation.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Not enough novelty and in a somewhat niche area\"}", "{\"title\": \"motivation\", \"comment\": \"Thanks a lot for the feedback. We just would like to highlight a couple of things:\\n\\n(I) Applications of SENSE-S: SENSE-S computes embeddings of single nodes using both graph structure and node features (text). In occasions where these both are important, SENSE-S is very applicable. For instance, \\n(a) Recommendation systems frequently use node embeddings these days on \\u201cuser-product\\u201d interaction (bipartite) graphs. This helps understand which products should be recommended to which users. Quite often in this situation, bootstrapping new products is a problem since no users have viewed or bought them. In this case, using their textual descriptions (along with graph structure) will help us obtain reasonable initial embeddings.\\n(b) Link prediction in social networks is important to suggest new friends. This would depend on current friends of users (graph structure) as well as profile of users (textual descriptions). This is similar to the recommendation problem above, albeit in a different context. \\n\\n(II) Applications of SENSE: SENSE computes embeddings of node sequences with the same dimension as that of individual nodes. This is useful in a variety of scenarios, across different fields of computer science, where we want to represent a set of nodes while preserving the order. Representative applications include:\\n(a) Source routing: This refers to a routing strategy in Internet, where the sender of a packet specifies the path that this packet takes through the network. The path specifies a certain order that needs to be preserved and SENSE can effectively do so.\\n(b) Service composition: Recently microservices are getting popular, where a service is composed of several smaller microservices. Here, the order of execution becomes important and SENSE represents these complex services in the form of vectors to enable effective learning on vector representations. \\n(c) Reading order of pages: For instance, reading pages in Wikipedia.\\n(d) Representing any path in a graph or a network: For instance, representing the shortest/least congested/load balanced path between a pair of nodes.\"}", "{\"title\": \"Thanks a lot!\", \"comment\": \"We thank the reviewer for the constructive feedback! 
To the best of our knowledge, SENSE-S is different from:\\n(i) TADW (Network representation learning with rich text information), because TADW is based on DeepWalk, whereas SENSE-S has the flexibility that node2vec has in incorporating different ways of sampling the node neighborhood, to give different weight to graph properties like homophily and structural equivalence as required. Moreover, SENSE-S is able to easily trade off the importance of the graph weighting versus the text weighting. In addition, since we use the skip-gram model, we can also account for the context in which different words are used within a document.\\n(ii) HSCA (Homophily, structure, and content augmented network representation learning), because HSCA just builds on TADW and adds an additional term to ensure learning homophily. However, SENSE-S is able to easily trade off the importance of the graph weighting versus the text weighting. Also, like TADW, HSCA uses a TFIDF matrix to incorporate text information.\\n(iii) PLANE (Probabilistic latent document network embedding), because the objective of PLANE is to maximize the likelihood that neighboring nodes have similar embeddings, which is not always the case in practice because neighboring nodes may be semantically different; more critically, it relies on strong assumptions about the statistical distributions of words and edges in the network.\\n(iv) VGAE (Variational Graph Auto-Encoders), which is sensitive to the input node feature matrix X, as the authors do not provide a specific way to construct X. Further, SENSE-S is flexible enough to trade off the relative importance of graph and node features.\\n(v) AANE (Accelerated attributed network embedding), because unlike SENSE-S, AANE does not account for the context in which different words are used within a document. Attributes are either keywords of blogs, tags from images in Flickr, or bag-of-words representations of reviews in Yelp.\\n(vi) ANRL (ANRL: Attributed Network Representation Learning via Deep Neural Networks), because just like AANE, ANRL does not account for the context in which different words are used within a document.\\nAlso, please note that even with a 512-dimensional node embedding, we are able to decode workflows of 10 nodes with over 90% accuracy.\"}", "{\"title\": \"Thanks a lot for your feedback!\", \"comment\": \"Objective of embedding sequences: to compute embeddings of node sequences with the same dimension as that of individual nodes. This is useful in a variety of scenarios, across different fields of computer science, where we want to represent a set of nodes while preserving the order. Representative applications include:\\n(a) Source routing: This refers to a routing strategy in the Internet, where the sender of a packet specifies the path that this packet takes through the network. The path specifies a certain order that needs to be preserved, and SENSE can effectively do so.\\n(b) Service composition: Recently, microservices have been getting popular, where a service is composed of several smaller microservices. Here, the order of execution becomes important, and SENSE represents these complex services in the form of vectors to enable effective learning on vector representations.
\\n(c) Reading order of pages: For instance, reading pages in Wikipedia.\\n(d) Representing any path in a graph or a network: For instance, representing the shortest/least congested/load-balanced path between a pair of nodes.\\n\\nAlthough we have to keep the individual node embeddings, when workflow embeddings have to be communicated over a network, representing a workflow with the same dimension as an individual node vector, as opposed to a list of node vectors, helps reduce the total size of the transmitted information. It also provides secure information delivery, because without the individual node embeddings, potential attackers are not able to decode a workflow.\\n\\nSENSE can also work on embeddings obtained via any other technique (not just SENSE-S).\\n\\nThe SVM uses a linear kernel.\\n\\nTrain/Valid/Test split -- for node classification, it is by nodes; for link prediction, it is by links.\"}", "{\"title\": \"Interesting topics are introduced but some corrections and clarifications are necessary\", \"review\": [\"The authors introduce the problem of learning embeddings that consider both text information and graph structures, as well as the embedding of a sequence of nodes with embeddings.\", \"However, the proposed algorithm, SENSE-S, is incremental in the sense of aggregating two simple structures. In the evaluation, it is compared only with the heuristic combination of node2vec and paragraph2vec, not with any existing work on graph embeddings that incorporates node features, even though such works are mentioned in the related work.\", \"Furthermore, the objective of node sequence embedding is not clear. What do we want to represent with the embedding of node sequences? It looks like we have to keep the node embeddings anyway, so what is the problem with just storing the node ordering instead of having a representation? Or can we aggregate node embeddings in some way while storing the order of nodes? These kinds of questions can be raised, mainly because of the uncertain objectives. The description of preserving both ordering and node properties is too vague.\", \"Also, SENSE does not seem to have any connection with SENSE-S. Why is SENSE-S special to SENSE? Are they independent?\", \"Finally, the authors claim that SENSE is necessary to overcome the space issue that requires q*d dimensions. However, from Figure 5, it seems that the proposed algorithm actually needs O(q*d) dimensions to represent the sequence correctly. This is somewhat related to the question about the i.i.d. assumption in Theorem 2, where the embedding does not guarantee orthogonality across the dimensions.\", \"Details\", \"In the introduction, \\\"first\\\" is repeated in the last paragraph of Page 1.\", \"N_G(v) and N_T(\\\\phi) are said to be independent, but this should be stated as an assumption since it is not a fact.\", \"Eq. (2) is not aligned with Eq. (3) or (4). Either one needs to be fixed or the derivation needs to be described.\", \"How the SVM is used needs to be described. The usage of the embedding might be different depending on the usage of an RBF kernel or a linear kernel.\", \"Using a smaller number of random walks for the Citation Network because it is a larger dataset needs some explanation.\", \"The calculation of the improvement percentage is misleading. If the accuracy is improved from 95% to 96%, it is about a 1% improvement, not a 20% improvement based on the error rate calculation.\", \"How the training/validation/test sets are split needs more description. Is it split by nodes or edges?\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"This paper presents two approaches: one called SENSE-S for embedding nodes in attributed networks; the other called SENSE for embedding a sequence of nodes. SENSE-S follows the structure of the Skip-gram model. The main difference is that SENSE-S considers both nodes and words in node content as input and output for learning their embeddings.\", \"review\": \"This paper presents two approaches: one called SENSE-S for embedding nodes in attributed networks; the other called SENSE for embedding a sequence of nodes. SENSE-S follows the structure of the Skip-gram model. The main difference is that SENSE-S considers both nodes and words in node content as input and output for learning their embeddings. To generate an embedding vector for a sequence of nodes, SENSE takes the summation of cyclically shifted unit vectors constructed by SENSE-S for the nodes in the sequence (see the illustrative sketch after this record).\\n\\nThe paper is well written, with a clear definition of the studied problem and a clear introduction of the presented methods. Evaluation was conducted on two real-world datasets (Wikipedia and a citation network). It is an interesting idea to represent a sequence by the summation of cyclically shifted unit vectors of the nodes in the sequence. However, there are several concerns about the work presented in this paper. \\n1) The evaluation of SENSE-S is not sufficient. The baseline methods used in the comparison are simple ones that take the concatenation of vectors induced from text and graph, or use one to initialize the learning of the other. There exist several approaches that learn node embedding vectors from attributed graphs (considering both the node content text and the graph topology structure), such as TADW [1], HSCA [2], PLANE [3], GAE [4], AANE [5], ANRL [6]. SENSE-S should be compared with these methods to show its effectiveness. \\n2) The embedding vector of a node sequence is evaluated by showing the decoding accuracy. It would be more interesting to show how these vectors can be used in some real applications. Also, to achieve high decoding accuracy, the embedding dimension for sequences of 10 nodes has to be up to 1024, which is quite expensive in computation and storage, making the presented method impractical in real-world applications. \\n\\n\\n[1] C. Yang, Z. Liu, D. Zhao, M. Sun, E. Y. Chang, Network representation learning with rich text information. IJCAI, 2015\\n[2] D. Zhang, J. Yin, X. Zhu, C. Zhang, Homophily, structure, and content augmented network representation learning. ICDM 2016. \\n[3] T. M. V. Le and H. W. Lauw. Probabilistic latent document network embedding. ICDM, 2014.\\n[4] Thomas N Kipf, Max Welling. Variational Graph Auto-Encoders. NIPS Workshop on Bayesian Deep Learning. 2016\\n[5] Xiao Huang, Jundong Li, Xia Hu. Accelerated attributed network embedding. SDM 2017.\\n[6] Zhen Zhang, Hongxia Yang, Jiajun Bu, Sheng Zhou, Pinggang Yu, Jianwei Zhang, Martin Ester, Can Wang. ANRL: Attributed Network Representation Learning via Deep Neural Networks. IJCAI, 2018\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Interesting idea and fleshed-out experiments, but somewhat niche appeal.\", \"review\": \"The paper proposes node embedding methods for applications where nodes are sequentially related. An example application is the \\\"Wikispeedia\\\" dataset, in which nodes are connected in a graph, but a datapoint (a Wikispeedia \\\"game\\\") consists of a sequence of nodes that are visited. Each node is further attributed with textual information.\\n\\nThe methods proposed are most closely related to skipgrams, whereby the sequence of nodes is treated like words in a sentence. Then, node attributes (i.e., text) and node representations must be capable of predicting neighboring nodes/words. (Fig.s 1/2 are a pretty concise overview of the proposed architecture).\\n\\nPositively, this is a quite sensible extension and modification of existing ideas in order to support a new (or different) problem setting.\\n\\nNegatively, I'd say the applications for this technique are fairly niche, which may limit the paper's readership. The method is mostly fairly straightforward and not methodologically groundbreaking (probably borderline in terms of expected methodological contribution for ICLR). I also didn't understand whether the theoretical claims were significant.\\n\\nThe Wikispeedia/physics experiments feel a bit more like proofs-of-concept rather than demonstrating that the technique has compelling real-world uses. The experiments are quite well fleshed-out and detailed, though.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}" ] }
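As a concrete reading of the "summation of cyclically shifted unit vectors" composition that the reviews above describe, here is a small numpy sketch. It is our illustration of the idea only; the exact scheme in the paper may differ.

```python
import numpy as np

def compose(node_vecs):
    """Embed an ordered sequence: cyclically shift node i's unit vector by i, then sum."""
    u = node_vecs / np.linalg.norm(node_vecs, axis=1, keepdims=True)
    return np.sum([np.roll(u[i], i) for i in range(len(u))], axis=0)

def decode_position(seq_vec, node_vec, max_len):
    """Guess where a node sits in a sequence by correlating against its shifted copies."""
    u = node_vec / np.linalg.norm(node_vec)
    scores = [float(seq_vec @ np.roll(u, i)) for i in range(max_len)]
    return int(np.argmax(scores))

rng = np.random.default_rng(0)
d, q = 128, 5                                  # embedding dimension, sequence length
nodes = rng.normal(size=(q, d))                # stand-ins for SENSE-S node vectors
seq = compose(nodes)                           # same dimension d as a single node vector
print([decode_position(seq, nodes[i], q) for i in range(q)])  # ideally prints [0, 1, 2, 3, 4]
```

With near-orthogonal high-dimensional vectors, the shifted copies interfere little, which is why the decoding accuracy discussed in the reviews improves with the embedding dimension.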
rkxciiC9tm
NADPEx: An on-policy temporally consistent exploration method for deep reinforcement learning
[ "Sirui Xie", "Junning Huang", "Lanxin Lei", "Chunxiao Liu", "Zheng Ma", "Wei Zhang", "Liang Lin" ]
Reinforcement learning agents need exploratory behaviors to escape from local optima. These behaviors may include both immediate dithering perturbations and temporally consistent exploration. To achieve these, a stochastic policy model that is inherently consistent through a period of time is desired, especially for tasks with either sparse rewards or long-term information. In this work, we introduce a novel on-policy temporally consistent exploration strategy - Neural Adaptive Dropout Policy Exploration (NADPEx) - for deep reinforcement learning agents. Modeled as a global random variable for the conditional distribution, dropout is incorporated into reinforcement learning policies, equipping them with inherent temporal consistency, even when the reward signals are sparse. Two factors, the gradients' alignment with the objective and a KL constraint in policy space, are discussed to guarantee the NADPEx policy's stable improvement. Our experiments demonstrate that NADPEx solves tasks with sparse rewards where naive exploration and parameter noise fail. It yields comparable or even faster convergence on the standard MuJoCo benchmark for continuous control.
[ "Reinforcement learning", "exploration" ]
https://openreview.net/pdf?id=rkxciiC9tm
https://openreview.net/forum?id=rkxciiC9tm
ICLR.cc/2019/Conference
2019
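Before the reviews, a tiny illustrative sketch of the abstract's central idea: the dropout variable is sampled once per episode and then held fixed, so the perturbed policy is temporally consistent within the episode. Everything below (network sizes, the stand-in environment) is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = 0.1 * rng.normal(size=(4, 32))            # toy two-layer policy network
W2 = 0.1 * rng.normal(size=(32, 2))

def act(obs, z):
    h = np.maximum(obs @ W1, 0.0) * z          # the same dropout mask at every step
    return np.tanh(h @ W2)

for episode in range(3):
    z = (rng.uniform(size=32) < 0.9) / 0.9     # Bernoulli dropout mask, drawn once per episode
    obs = rng.normal(size=4)                   # stand-in for env.reset()
    for t in range(5):
        action = act(obs, z)                   # perturbation stays consistent through time
        obs = rng.normal(size=4)               # stand-in for env.step(action)
```

Resampling z at every step would recover dithering-style noise; keeping it fixed per episode is what gives the temporally consistent exploration the abstract emphasizes.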
{ "note_id": [ "HkloE16MgN", "B1gaPZunTm", "B1e3qRw3p7", "HyeGY3w3p7", "HJlwfVrtpm", "HJlU1mHKp7", "B1xcdgFE67", "rkxN4ktETm", "S1g0_Au4pm", "Syx5z6ONTm", "S1lzrrz637", "HkxrqOb6nm", "HyeuvtzF2X", "BJlh3CZ8nQ", "HyenFPKShQ" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review", "official_comment", "comment" ], "note_created": [ 1544896306558, 1542386021395, 1542385299595, 1542384761981, 1542177807476, 1542177501751, 1541865586274, 1541865260500, 1541865078118, 1541864722037, 1541379385651, 1541376140593, 1541118303522, 1540918963757, 1540884356180 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper645/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper645/Authors" ], [ "ICLR.cc/2019/Conference/Paper645/Authors" ], [ "ICLR.cc/2019/Conference/Paper645/Authors" ], [ "ICLR.cc/2019/Conference/Paper645/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper645/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper645/Authors" ], [ "ICLR.cc/2019/Conference/Paper645/Authors" ], [ "ICLR.cc/2019/Conference/Paper645/Authors" ], [ "ICLR.cc/2019/Conference/Paper645/Authors" ], [ "ICLR.cc/2019/Conference/Paper645/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper645/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper645/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper645/Authors" ], [ "(anonymous)" ] ], "structured_content_str": [ "{\"metareview\": \"The authors have proposed a new method for exploration that is related to parameter noise, but instead uses Gaussian dropout across entire episodes, thus allowing for temporally consistent exploration. The method is evaluated in sparsely rewarded continuous control domains such as half-cheetah and humanoid, and compared against PPO and other variants. The method is novel and does seem to work stably across the tested tasks, and simple exploration methods are important for the RL field. However, the paper is poorly and confusingly written and really really needs to be thoroughly edited before the camera ready deadline. There are many approaches which are referred to without any summary or description, which makes it difficult to read the paper. The three reviewers all had low confidence in their understanding of the paper, which makes this a very borderline submission even though the reviewers gave relatively high scores.\", \"confidence\": \"2: The area chair is not sure\", \"recommendation\": \"Accept (Poster)\", \"title\": \"meta-learning\"}", "{\"title\": \"Response to follow-up(2)\", \"comment\": \"Q1. Even under the justification of MuProp, the estimate we provided is biased?\\nThe idea of MuProp is to find a Taylor expansion of f(z), which is expected to be close to f(z). However, given the strong non-linearity in neural networks, we admit that there could be some bias introduced in this approximation. But this bias is believed to be acceptable in the literature of both probabilistic model and reinforcement learning. For example, as cited in Appendix C, [1] is a special case of MuProp [2] which reduces the variance with small bias. In the experiments of both [1] and [2], it is shown that this straight through gradient estimator outperforms unbiased estimators when there is only one layer of random variables, which is exactly the case for NADPEx. 
Another example is Generalized Advantage Estimation (GAE) [3] in the reinforcement learning literature, which reduces the variance of high-dimensional policy gradients at the cost of bias. In Eq. 10 we use it in the second term, which basically reduces variance arising from the same cause as in Eq. 12: the high dimensionality of q_{\\phi}. \nWe understand your concern that this bias may influence the effect of regularization to some extent. This concern drove us to further enforce the idea that dropout policies should be close to each other. More details are introduced in Appendix C. \n\nQ2. Why, in our development of the approximation, do we calculate the KL divergence of q_{\\phi} with a Monte Carlo method given that it can be calculated analytically for both Gaussian and Bernoulli dropout? \nThanks for pointing this out; this is a keen observation. We really appreciate your prudence in this review. The use of a Monte Carlo estimate for the KL divergence is inherited from PPO [4], which is introduced in Section 2.1 and Appendix A of our paper. Though not explicitly pointed out in their paper, in our empirical study we found that regularizing with an analytical KL tends to hinder convergence. (Note that in continuous tasks actions are always modeled with a diagonal Gaussian whose KL divergence also has an analytical form.) We reckon that a Monte Carlo estimate of the KL could soften this constraint, sharing the same motivation as the \"trust region\" in TRPO. (Without a trust region, this KL constraint may be too strict for an agent to make learning progress efficiently.) As an alternative to this Monte Carlo relaxation, [5] combines the advantages of trust-region relaxation and gradient co-optimization of the KL divergence. Basically, the KL regularizer with a trust region is implemented as an adaptive cutting mechanism: gradients from the surrogate loss are only regularized with gradients from the KL when the KL is larger than the trust region \\delta. In our updated version, we provide another proof from this perspective that the KL of q_{\\phi} almost never violates this trust-region constraint. That explains why gradients from KL(q_{\\phi}||q_{\\phi^{old}}) could be stopped in NADPEx even though it has an analytical form. \n\nIn sum, we are grateful for your time and effort in helping us improve this paper. Our discussion has prompted us to look more deeply into the relaxation of the KL divergence. We find it would be interesting to investigate the relaxing effects of the trust region and the Monte Carlo estimate in the future. At the same time, we want to emphasize that the very possibility of conducting this theoretical analysis can be regarded as one of our work's contributions, as we provide a concrete form for NADPEx policies. NADPEx PPO is only one example. In the future, NADPEx could be combined with new on-policy policy gradient methods to help agents explore consistently and learn stably. \n\n[1] Raiko et al., \"Techniques for learning stochastic feedforward neural networks\", ICLR 2015. \n[2] Gu et al., \"MuProp: Unbiased Backpropagation for Stochastic Neural Networks\", ICLR 2016. 
\n[3] Schulman et al., \"High dimensional continuous control with generalized advantage estimation\", ICLR 2016.\n[4] Schulman et al., \"Proximal policy optimization algorithms\", arXiv 2017.\n[5] Wang et al., \"Sample Efficient Actor-Critic with Experience Replay\", ICLR 2017.\"}", "{\"title\": \"Response to follow-up(1)\", \"comment\": \"As explained in our last response, \\phi is still updated with gradients from the surrogate loss, i.e., Eq. 10 and Eq. 11. Note that if there were no stop-gradient operation, there would be two streams of gradients w.r.t. \\phi in NADPEx when a KL regularizer is added: one from the surrogate loss, another from the KL divergence. We only stop gradients w.r.t. \\phi from the KL divergence. \"\\phi is not updated\" would mean that gradients from the surrogate loss are also stopped. Actually, in our paper, that variant is referred to as bootstrap, named after Bootstrapped DQN [1], for which we provided a comparison with NADPEx in Section 4.3.\n\n[1] Osband et al., \"Deep exploration via bootstrapped dqn\", NIPS 2016.\"}", "{\"title\": \"Manuscript updated (2)\", \"comment\": \"We have updated Appendix C to provide an alternative derivation of the approximation in response to AnonReviewer3's follow-up.\"}", "{\"title\": \"Follow-up question (2)\", \"comment\": \"On page 14 of the revised version, \"To further reduce the variance, the first term is sometimes omitted with acceptable biased introduced (Raiko et al., 2015). \" Although the authors attempt to use MuProp as justification, the first term is still ignored. Is it correct?\n\nLet's look at this term (the last term of the last line of Eq 17), shown below.\n\\nabla_{\\phi} \\int q_{\\phi}(z) \\int \\pi_{\\theta|z}(a|s) \\log \\frac{ q_{\\phi}(z) \\pi_{\\theta|z}(a|s) } { q_{\\phi^{old}}(z) \\pi_{\\theta^{old}|z}(a|s) } da dz (*)\n\nThe authors argue that this term (*) should be ignored due to the high variance. Is it correct?\n\nNote that (*) can be decomposed into two terms as shown below:\n\\nabla_{\\phi} \\int q_{\\phi}(z) \\pi_{\\theta|z}(a|s) \\log \\frac{ q_{\\phi}(z) } { q_{\\phi^{old}}(z) } dz (**)\n+ \\nabla_{\\phi} \\int q_{\\phi}(z) \\int \\pi_{\\theta|z}(a|s) \\log \\frac{ \\pi_{\\theta|z}(a|s) } { \\pi_{\\theta^{old}|z}(a|s) } da dz (***)\n\n(**) can be computed without using any samples since q_{\\phi}(z) is either a factorized Gaussian or a factorized Bernoulli distribution. In other words, the authors cannot ignore (**). Note that (**) is the KL term between q_{\\phi}(z) and q_{\\phi^{old}}(z). \n\nIf the authors' reasoning is correct, including (**) in Eq 12 should also work. Can the authors comment on this? Why should (**) be ignored? The authors should clarify this point.\"}", "{\"title\": \"Follow-up question (1)\", \"comment\": \"\"Q4: Yes, \\theta and \\phi are jointly and simultaneously optimized at Eq. 12, though the gradients w.r.t. \\phi from the KL divergence are stopped.\", \"q7\": \"Due to the stop-gradient manipulation in the KL divergence, gradients w.r.t. \\phi remain the same as stated in the last subsection.\\\"\n\nMy guess is that due to the stop-gradient manipulation, \\phi remains the same when optimizing Eq 12. In other words, \\phi is not updated. Is it correct? Can the authors comment on this?\"}", "{\"title\": \"Response to review\", \"comment\": \"Thank you very much for your review. 
We have updated the manuscript with more details in the derivation of the first-order approximation of the KL divergence.\n\n1) Elaborated derivation of Eq. 10\", \"q1\": \"We have added one more line to explain the derivation. Basically, a baseline is subtracted, and GAE is introduced.\n\n2) Gradient update on \\phi from the KL divergence\nThe gradients w.r.t. \\phi from the KL divergence are stopped for variance reduction with acceptable bias, which we prove with MuProp [1]. Details can be found in Appendix C.\", \"q3\": \"The mean policy is not motivated by variance reduction, which is addressed as described above. Thank you for your suggestion.\", \"q4\": \"Yes, \\theta and \\phi are jointly and simultaneously optimized at Eq. 12, though the gradients w.r.t. \\phi from the KL divergence are stopped.\", \"q7\": \"Due to the stop-gradient manipulation in the KL divergence, gradients w.r.t. \\phi remain the same as stated in the last subsection.\n\n3) Mean policy in the KL divergence\nWhat motivates the mean policy is not variance reduction, but the idea that dropout policies had better be close to each other. Since \\phi intuitively controls the distance between dropout policies, this further remedies the small bias mentioned above. However, the computational complexity of enforcing \"close to each other\" directly would be O(N^2), with N being the number of dropout policies in the batch. We employ the mean policy to make it linear. It can be regarded as integrating over a Gaussian approximation of the Monte Carlo estimate, following [3]. Details can be found in Appendix C.\", \"q2\": \"No, the mean policy is not used because of the likelihood ratio trick. The approximation of using the mean policy is discussed in [3], with a sound derivation.\", \"q5\": \"In the updated version, we have explicitly pointed out that the gradients w.r.t. \\phi from the KL divergence are stopped. Thanks for this suggestion.\n\nHope our response addresses your concerns! \n\n[1] Gu et al., \"MuProp: Unbiased Backpropagation for Stochastic Neural Networks\", ICLR 2016. \n[2] Titsias et al., \"Local Expectation Gradients for Black Box Variational Inference\", NIPS 2015. \n[3] Wang et al., \"Fast dropout training\", ICML 2013.\"}", "{\"title\": \"Response to review\", \"comment\": \"Thank you very much for your strong recommendation!\n\n1) Intuition about the improvement\nThough not explained in Section 4, the intuition for NADPEx is given in Section 3. Our interpretation of the equally efficient or even faster exploration in dense environments (4.1) is that NADPEx encourages more diverse exploration while absorbing experience from it in a relatively efficient way. For sparse environments (4.2), where temporally consistent exploration is crucial for learning-signal acquisition, NADPEx outperforms vanilla PPO. It could also beat parameter noise if the difficulty is increased, because intuitively low variance in gradients is a boon for faster learning. The improvements in 4.3 and 4.4 basically come from the theoretical grounding of NADPEx, which we believe is one of our contributions. Specifically, the improvement in 4.3 comes from the high-level stochasticity's adaptation to the low level, while that in 4.4 can be interpreted with the idea of a trust region: the policy should be updated to somewhere near the sampling policy in policy space, such that the collected experience remains usable (on-policy). 
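As one concrete illustration of this trust-region intuition, here is a minimal sketch of the standard PPO clipped surrogate (the clipping form follows Schulman et al. 2017; it is not the paper's exact Eq. 10, and the names are illustrative):

import torch

def ppo_clip_objective(logp_new, logp_old, advantages, clip_eps=0.2):
    # Probability ratio between the updated policy and the sampling policy.
    ratio = torch.exp(logp_new - logp_old)
    # Clipping keeps the update inside an approximate trust region, so the
    # on-policy experience collected by the old policy remains usable.
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps)
    return torch.min(ratio * advantages, clipped * advantages).mean()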
In NADPEx, the trust region also carries the meaning that dropout policies stay close to each other for more efficient exploration. \n\n2) Limitation of NADPEx\nOne of the limitations we see in NADPEx is that dropout policies are not directly interpretable from their network structures, while interpretability and composability are prerequisites for reusing them in more complicated tasks. Luckily, since the dropout variables are modeled as latent random variables, an information term could be added to the objective as in [1, 2]. This is also a direction for future research. \n\n[1] Florensa et al., \"Stochastic neural networks for hierarchical reinforcement learning\", ICLR 2017. \n[2] Hausman et al., \"Learning an Embedding Space for Transferable Robot Skills\", ICLR 2018.\"}", "{\"title\": \"Response to review\", \"comment\": \"Glad to know that you like our paper!\n\n1) Difference from parameter noise except for memory consumption:\nAs stated in Section 3.3, we believe NADPEx is a generalization of parameter noise, with not only flexible memory consumption but also lower variance in gradients. This theory is examined in Section 4.2, where NADPEx shows faster convergence and lower variance in performance across different random seeds. \nBesides, compared with [1], our work provides a theoretical model for the idea of \"a hierarchy of stochasticity for exploration\". We model the NADPEx policy as a joint distribution of dropout random variables and actions, such that it can be combined seamlessly with existing on-policy policy gradient methods. One example is the policy space constraint stated in Section 3.2. We also provide another distribution, i.e., the Bernoulli distribution, for stochasticity at the high level, for which we derive gradient alignment and the policy space constraint, as well as empirical results. \nAs a minor point, in [1], the stochasticity at the high level, i.e., the variance of the parameter noise, is adjusted in a heuristic manner. NADPEx, in contrast, aligns the stochasticity throughout the hierarchy with end-to-end gradient updates. \n\n2) Other good side effects:\nThe robustness of the NADPEx policy is orthogonal to our current work, but will be an interesting direction for the future. Currently we only have some preliminary results. For example, it is more robust to adversarial attacks. In the future we will investigate how robust NADPEx policies can be when the environment is perturbed, e.g., agents are dragged slightly by humans as in [2, 3]. \nThat temporally consistent exploration is fairly important for physical robots is one of our motivations for this whole project. In the next step we will look for simulator environments with more authentic actuators to see how NADPEx could help solve that. Our ultimate goal is to find a safer and more efficient way for on-policy exploration on physical robots. \nWe believe the application of NADPEx to off-policy exploration is straightforward. However, as stated in Section 1, off-policy methods benefit from stronger flexibility in the experience sampler. This makes the gradient alignment and policy space constraint not as important as in on-policy methods. As off-policy methods have the potential to be much more data-efficient, we will compare in the future how NADPEx performs against the auto-correlated noise in [4] and the separate sampler in [5]. \n\n[1] Plappert et al., \"Parameter Space Noise for Exploration\", ICLR 2018. 
\n[2] Tassa et al., \"Synthesis and stabilization of complex behaviors through online trajectory optimization\", IROS 2012. \n[3] Clavera et al., \"Learning to Adapt: Meta-Learning for Model-based Control\", arXiv 2018. \n[4] Lillicrap et al., \"Continuous control with deep reinforcement learning\", ICLR 2016. \n[5] Xu et al., \"Learning to explore via meta-policy gradient\", ICML 2018.\"}", "{\"title\": \"Manuscript updated\", \"comment\": \"We thank all reviewers for your comments and recommendation! We have updated the manuscript in the following sections, taking your feedback into account:\n1) In Section 3.2 POLICY SPACE CONSTRAINT, we clarify the omission of the gradient w.r.t. \\phi from the KL divergence and the replacement with the mean policy. Basically, they serve two separate concerns: variance reduction, and a remedy from \\theta for this omission. In the previous version we mingled them together to keep the presentation simple and intuitive; sorry for the confusion this caused. \n2) In Appendix C, we give a detailed derivation of this approximation for both binary dropout and Gaussian dropout, providing a proof that it keeps training robust and stable with a small and acceptable bias. \n3) In Section 4.2, we elaborate the expression \"gradually increase the difficulty\".\"}", "{\"title\": \"A novel on-policy exploration based on a distribution of plausible subnetworks and a dropout strategy to achieve on-policy temporally consistent exploration.\", \"review\": \"The authors introduce a novel on-policy temporally consistent exploration strategy, named Neural Adaptive Dropout Policy Exploration (NADPEx), for deep reinforcement learning agents. The main idea is to sample from a distribution of plausible subnetworks modeling the temporally consistent exploration. For this, the authors use ideas from standard dropout for deep networks. Using the proposed dropout transformation that is differentiable, the authors show that the KL regularizers on policy-space play an important role in stabilizing its learning. The experimental validation is performed on continuous control learning tasks, showing the benefits of the proposed approach. \n\nThis paper is very well written, although very dense and not easy to follow, as many methods are referenced under the assumption that the reader is highly familiar with the related work. This poses a challenge in evaluating this paper. Nevertheless, this paper clearly explores and offers a novel approach for more efficient on-policy exploration which allows for more stable learning compared to traditional approaches. \n\nEven though the authors answer each of their four questions in the experiments section positively, I would like the authors to provide more intuition as to why these improvements occur and also outline the limitations of their approach.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"An interesting paper with unjustified approximations\", \"review\": \"The authors propose a new on-policy exploration strategy by using a policy with a hierarchy of stochasticity. The authors use a two-level hierarchical distribution as a policy, where the global variable is used for dropout. This work is interesting since the authors use dropout for policy learning and exploration. The authors show that parameter noise exploration is a particular case of the proposed policy. 
The main concern is the gap between the problem formulation and the actual optimization problem in Eq 12. I am very happy to give a higher rating if the authors address the following points.\n\nDetailed Comments \n(1) The authors give the derivation for Eq 10. However, it is not obvious how to move from line 3 to line 4 at Eq 15.\nMinor: Since the action is denoted by \"a\", it would be clearer if the authors use another symbol to denote the parameter of q(z) instead of \"\\alpha\" at Eq 10 and 15.\n\n(2) Due to the use of the likelihood ratio trick, the authors use the mean policy as an approximation at Eq 12. Does such an approximation guarantee policy improvement? Any justification?\n\n(3) Instead of using the mean policy approximation in Eq 12, the authors should consider existing Monte Carlo techniques to reduce the variance of the gradient estimation. For example, [1] could be used to reduce the variance of the gradient w.r.t. \\phi. Note that the gradient is biased if the mean policy approximation is used.\n\n(4) Are \\theta and \\phi jointly and simultaneously optimized at Eq 12? The authors should clarify this point. \n\n(5) Due to the mean policy approximation, does the mean policy depend on \\phi? The authors should clearly explain how to update \\phi when optimizing Eq 12. \n\n(6) If the authors jointly and simultaneously optimize \\theta and \\phi, why is a regularization term for q_{\\phi}(z) missing in Eq 12 while a regularization term for \\pi_{\\theta|z} does appear? \n\n(7) The authors give the derivations for \\theta, such as the gradient and the regularization term for \\theta (see Eq 18-19). However, the derivations for \\phi are missing. For example, how to compute the gradient w.r.t. \\phi? Since the mean policy is used, it is not apparent how to compute the gradient w.r.t. \\phi. \nMinor: 1/2 is missing in the last line of Eq 19.\", \"reference\": \"[1] Titsias, Michalis, and Miguel Lázaro-Gredilla. \"Local expectation gradients for black box variational inference.\" In Advances in neural information processing systems, pp. 2638-2646. 2015.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"A nice paper on temporally consistent exploration\", \"review\": \"This paper proposes to use dropout to randomly choose only a subset of the neural network as a way to perform exploration. The dropout happens at the beginning of each episode, and thus leads to temporally consistent exploration. The paper shows that with a small amount of Gaussian multiplicative dropout, the algorithm can achieve state-of-the-art results on benchmark environments. It can also significantly outperform vanilla PPO in environments with sparse rewards.\n\nThe paper is clearly written. The introduced technique is interesting. I wonder, aside from the difference in memory consumption, how different it is from parameter space exploration. I feel that it is a straightforward extension/generalization of parameter space exploration. But the stochastic alignment and policy space constraint seem novel and important.\n\nThe motivation of this paper is mostly about learning with sparse reward. I am curious whether the paper has other good side effects. For example, will the dropout cause the policy to be more robust? 
Furthermore, if I deploy the learning algorithm on a physical robot, will the temporally consistent exploration cause less wear and tear to the actuators when the robot explores? In addition, I would like to see some discussion of whether this technique could be applied to off-policy learning as well.\n\nOverall, I like this paper. It is well written. The method seems technically sound and achieves good results. For this reason, I would recommend accepting this paper.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Details about sparse environments\", \"comment\": \"Thank you for your questions!\n\n1. 'gradually increase the difficulty'\nBy 'gradually increase' we mean we ran experiments repeatedly with increasing difficulty, while the difficulty remains fixed within each experiment. The motivation is to amplify the difference between NADPEx and parameter noise. In the three listed files under the directory rllab/envs/sparse_envs, the difficulty is denoted by 'PROP', whose default value is 1. We have tried incrementing it by 0.1 in each repetition, and the final value is 2. \n\n2. large initial dropout rate\nThe difference between 4.1/4.4 and 4.2 is the density of reward. \nWhen the reward is dense (4.1, 4.4), temporally consistent exploration is not the crucial obstacle for the agent to acquire learning signals. (Though our experiments still reveal that a little temporally consistent dropout could help to reach a better optimum.) In this circumstance, what matters more is the speed of convergence for stochastic policy optimization. A large initial dropout rate promotes high variance in gradients, a condition all stochastic neural networks try to avoid [1, 2]. Therefore, it is highly possible that NADPEx agents over-explore when the initial dropout rate is high, not making full use of their experience. \nOn the other hand, when the reward is sparse, the role played by temporally consistent exploration becomes significant. With a larger initial dropout rate, NADPEx policies possess higher stochasticity and thus exhibit more diverse behaviors. In sparse environments, agents with diverse behaviors are more likely to discover useful learning signals, requiring less data to converge to an optimum. Those with small initial dropout rates, in contrast, are prone to under-exploration and take a longer time to collect useful information. \n\n[1] Gu et al., \"Mu-prop: Unbiased back-propagation for stochastic neural networks\", ICLR 2016.\n[2] Gu et al., \"Q-prop: Sample efficient policy gradient with an off-policy critic\", ICLR 2017.\"}", "{\"comment\": \"I enjoyed your paper, and I have two questions.\n\n1. On page 7, you 'gradually increase the difficulty' of three sparse-reward environments.\nCould you explain this sentence in detail?\nI want to know the period of increase and the corresponding threshold values for SparseDoublePendulum, HalfCheetah, and MountainCar.\n\n2. Unlike 4.1 and 4.4, in which 'large initial dropout rate may induce large variance', \nwhy is it that a large initial dropout rate was helpful for the highest performance in 4.2?\", \"title\": \"Regarding Sparsity\"}" ] }
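To make the episode-level dropout idea discussed in this thread concrete, here is a minimal NumPy sketch of episode-consistent multiplicative Gaussian noise (the class and method names are hypothetical; per the discussion above, NADPEx additionally adapts the noise level alpha with end-to-end gradient updates, which this sketch omits):

import numpy as np

class EpisodeConsistentDropout:
    # One multiplicative Gaussian noise mask per episode: sampled at reset,
    # reused at every step, giving temporally consistent exploration.
    def __init__(self, weight_shapes, alpha=0.1, seed=0):
        self.weight_shapes = weight_shapes
        self.alpha = alpha  # variance of the multiplicative noise
        self.rng = np.random.default_rng(seed)
        self.masks = None

    def reset(self):
        # Resample the subnetwork once, at the start of an episode.
        self.masks = [self.rng.normal(1.0, np.sqrt(self.alpha), size=shape)
                      for shape in self.weight_shapes]

    def modulate(self, weights):
        # Elementwise modulation of the deterministic policy weights.
        return [w * m for w, m in zip(weights, self.masks)]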
SkEYojRqtm
Representation Degeneration Problem in Training Natural Language Generation Models
[ "Jun Gao", "Di He", "Xu Tan", "Tao Qin", "Liwei Wang", "Tieyan Liu" ]
We study an interesting problem in training neural network-based models for natural language generation tasks, which we call the \emph{representation degeneration problem}. We observe that when training a model for natural language generation tasks through likelihood maximization with the weight tying trick, especially with big training datasets, most of the learnt word embeddings tend to degenerate and be distributed into a narrow cone, which largely limits the representation power of word embeddings. We analyze the conditions and causes of this problem and propose a novel regularization method to address it. Experiments on language modeling and machine translation show that our method can largely mitigate the representation degeneration problem and achieve better performance than baseline algorithms.
[ "Natural Language Processing", "Representation Learning" ]
https://openreview.net/pdf?id=SkEYojRqtm
https://openreview.net/forum?id=SkEYojRqtm
ICLR.cc/2019/Conference
2019
{ "note_id": [ "r1lpfU9elE", "SygJPRi_yN", "Sye8DWvvJN", "r1gmPQReRQ", "B1xVDem6TQ", "BkxfbJmpTm", "HJgSCjMYh7", "ryeWCxFd3X", "rJgdnzqDnX" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544754708810, 1544236630667, 1544151389994, 1542673242638, 1542430811917, 1542430457956, 1541118925169, 1541079240641, 1541018288296 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper644/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper644/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper644/Authors" ], [ "ICLR.cc/2019/Conference/Paper644/Authors" ], [ "ICLR.cc/2019/Conference/Paper644/Authors" ], [ "ICLR.cc/2019/Conference/Paper644/Authors" ], [ "ICLR.cc/2019/Conference/Paper644/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper644/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper644/AnonReviewer2" ] ], "structured_content_str": [ "{\"metareview\": \"although i (ac) believe the contribution is fairly limited (e.g., (1) only looking at the word embedding which goes through many nonlinear layers, in which case it's not even clear whether how word vectors are distributed matters much, (2) only considering the case of tied embeddings, which is not necessarily the most common setting, ...), all the reviewers found the execution of the submission (motivation, analysis and experimentation) to be done well, and i'll go with the reviewers' opinion.\", \"confidence\": \"3: The area chair is somewhat confident\", \"recommendation\": \"Accept (Poster)\", \"title\": \"limited contribution but well executed\"}", "{\"title\": \"update\", \"comment\": \"I just updated my scores. Thanks for your clarification and update.\"}", "{\"title\": \"Thanks for your attention.\", \"comment\": \"Dear reviewer, we believe we have addressed your concerns and clarified your points in the rebuttal. Do you have an updated assessment (or concerns) of our paper? Thanks for your consideration.\"}", "{\"title\": \"Rebuttals from authors [additional results on WMT 2014 En-Fr]\", \"comment\": \"We thanks the reviewer for the comments.\\n\\nQ1. The theory in Section 4 suggests that the degeneration problem originates from underfitting, and should be solved by feeding more data instead of regularization.\", \"answer\": \"From Figure 3, we can see that the number of rare tokens (e.g., relative frequency < 10^{-4}) is still huge, so there is little experimental difference between applying the proposed loss to the whole vocabulary and to the rare words only. Not mention that an additional parameter (threshold) is needed to define what is **rare**.\", \"q5\": \"How about applying cosine regularization to rare words only.\"}", "{\"title\": \"Rebuttal from authors\", \"comment\": \"We thank the reviewer for the insightful comments.\", \"q1\": \"On un-tied parameters and experiment.\", \"answer\": \"For translation tasks, we use sub-word tokens. However, according to our study, the sub-word frequency distribution is similar to the word level one (the statistics and figures are provided in the appendix). From Figure 3 in the appendix, we can see that with BPE, there still exists a large number of rare subwords in the training data. 
Our experiments also show that by improving the expressiveness of the embeddings for tasks with either BPE-level tokens or word-level tokens, we achieve similar gains over the baselines.\", \"q2\": \"Regarding word tokens and sub-word tokens (BPE).\"}", "{\"title\": \"Rebuttal from authors\", \"comment\": \"We thank the reviewer for the positive feedback.\", \"q\": \"why the representation degeneration problem is important in language generation\", \"answer\": \"We did discuss the problem in the second paragraph of the intro section and in Section 3.2; we clarify it here: \n\nIn language generation tasks, the word embedding parameters are tied with the softmax weight matrix in the last layer, and thus they play a dual role in the model, serving as the input in the first layer and the weights in the last layer. The representation degeneration problem is important in the following two respects:\n\n(1). Given their first role as input word embeddings, they should be widely distributed to represent different semantic meanings that will be further used for different tasks. However, we observe that most of the trained word embeddings in language generation tasks are positively correlated and spread in a narrow cone, which limits the expressiveness of the semantic word representations. \n(2). Given their role as the output softmax matrix, to achieve good prediction of the next word in a target sentence, a more diverse distribution of word embeddings in the space is expected, so as to obtain a large-margin result with good generalization.\n\nAccording to the discussion above, we think the currently learnt model needs improving. \n\nWe are not exactly targeting **a more uniform spectral density distribution**, but there are works showing that more uniformly distributed singular values of the embedding matrix can bring better performance. [1] shows that by using simple post-processing approaches (removing the first several principal components of learnt word embeddings), we can get better performance on several downstream classification tasks.\n\n[1]. Jiaqi Mu, Suma Bhat, and Pramod Viswanath. All-but-the-top: simple and effective postprocessing for word representations. ICLR 2018.\"}", "{\"title\": \"A new understanding of word embedding in LM and NMT\", \"review\": \"The authors propose a new understanding of word embeddings in natural language generation tasks like language modeling and neural machine translation.\nThe paper is clear and original. The experimental results support their argument. \n\nThe problem they raised is quite interesting; however, it is not clear why the representation degeneration problem is important for language generation performance. In Figure 1, the classification is from MNIST, which is very different from words. The authors might want to explain more clearly why uniformly distributed singular values are helpful in language generation tasks.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"A simple regularization to solve a representation degeneration problem\", \"review\": \"This work proposes a simple regularization term which penalizes the cosine similarity of word embedding parameters in the loss function. The motivation comes from empirical studies of word embedding parameters in three tasks (translation, word2vec and classification), which show that the parameters for the translation task are not as widely distributed as those of the other tasks. 
The problem is hypothesized to stem from rare words, especially when parameters are tied between the softmax and the input embedding, and a cosine similarity regularization is proposed. Experiments on English/German show consistent gains over the non-regularized loss.\", \"pros\": [\"The proposed method is well motivated from empirical studies by visualizing parameters of three tasks, and the analysis on rare words is convincing.\", \"Good performance in language modeling and translation tasks by incorporating the proposed regularization.\"], \"cons\": [\"The visualization might be slightly misleading in that the number of classes, e.g., the vocabulary size, differs: BPE units for translation, words for word2vec, and categories for MNIST. I'd also like to see visualization for comparable experiments, e.g., language modeling with or without tied parameters.\", \"Given that BPE is used in translation, the analysis might not hold since rare tokens would not occur as frequently, and thus the gain might come from other factors, e.g., tied source/target embedding parameters in Transformer.\", \"I'd like to see experiments under untied parameters with the proposed regularization.\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"review\", \"review\": \"The paper presents and discusses a new phenomenon that infrequent words tend to learn degenerate embeddings. A cosine regularization term is proposed to address this issue.\n\nPros\n1. The degenerate embedding problem is novel and interesting.\n2. Some positive empirical results.\n\nCons and questions\n1. The theory in Section 4 suggests that the degeneration problem originates from underfitting; i.e., there's not enough data to fit the embeddings of the infrequent words, when epsilon is small. However, the solution in Section 5 is based on a regularization term. This seems contradictory to me because adding regularization to an underfit model would not make it better. In other words, if there's not enough data to fit the word embeddings, one should feed more data. It seems that a cosine regularization term could only make the embeddings different from each other, but not better.\n2. Since this is an underfitting problem (as described in Section 4), I'm wondering what would happen on larger datasets. The claims in the paper could be better substantiated if there are results on larger datasets like WT103 for LM and en-fr for MT. Intuitively, by increasing the amount of total data, the same word gets more data to fit, and thus epsilon gets large enough so that degeneration might not happen.\n3. \"Discussion on whether the condition happens in real practice\" below Theorem 2 seems incorrect to me. Even when layer normalization is employed and the bias is not zero, the convex hull can still contain the origin as long as the length of the bias vector is less than 1. In fact, this condition seems fairly strong, and surely it will not hold \"almost for sure in practice\".\n4. The cosine regularization term seems expensive, especially when the vocab size is large. Any results in terms of computational costs? Did you employ tricks to speed it up?\n5. What would happen if we only apply the cosine term to infrequent words? An ablation study might make it clear why it improves performance.\n\nUpdate: I think the rebuttal addresses some of my concerns. I am especially glad to see improvement on en-fr, too. 
Thus I raised my score from 5 to 7.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
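For concreteness, a minimal PyTorch-style sketch of a cosine-similarity regularizer of the kind discussed in these reviews (the exact weighting and normalization in the paper may differ; `lam` and the names are illustrative). Note the V x V similarity matrix makes this expensive for large vocabularies, which is what question 4 above is probing:

import torch
import torch.nn.functional as F

def cosine_regularizer(embedding_weight, lam=1.0):
    # Penalize the average pairwise cosine similarity between embedding rows,
    # pushing vectors apart instead of letting them collapse into a narrow cone.
    w = F.normalize(embedding_weight, dim=1)  # unit-norm rows
    sims = w @ w.t()                          # pairwise cosine similarities
    n = w.size(0)
    off_diagonal = sims.sum() - n             # drop the self-similarity terms
    return lam * off_diagonal / (n * (n - 1))

# usage sketch: total_loss = nll_loss + cosine_regularizer(model.embedding.weight)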
HyEtjoCqFX
Soft Q-Learning with Mutual-Information Regularization
[ "Jordi Grau-Moya", "Felix Leibfried", "Peter Vrancx" ]
We propose a reinforcement learning (RL) algorithm that uses mutual-information regularization to optimize a prior action distribution for better performance and exploration. Entropy-based regularization has previously been shown to improve both exploration and robustness in challenging sequential decision-making tasks. It does so by encouraging policies to put probability mass on all actions. However, entropy regularization might be undesirable when actions have significantly different importance. In this paper, we propose a theoretically motivated framework that dynamically weights the importance of actions by using the mutual-information. In particular, we express the RL problem as an inference problem where the prior probability distribution over actions is subject to optimization. We show that the prior optimization introduces a mutual-information regularizer in the RL objective. This regularizer encourages the policy to be close to a non-uniform distribution that assigns higher probability mass to more important actions. We empirically demonstrate that our method significantly improves over entropy regularization methods and unregularized methods.
[ "reinforcement learning", "regularization", "entropy", "mutual information" ]
https://openreview.net/pdf?id=HyEtjoCqFX
https://openreview.net/forum?id=HyEtjoCqFX
ICLR.cc/2019/Conference
2019
{ "note_id": [ "H1e6IzcbxV", "HJe0z7jRJE", "Hkg8ZQo0k4", "ryea6zsRy4", "rkeC9MsAkE", "SkeEX3Q6JV", "SkgnPit3k4", "SyejfjKhkN", "H1l9u2InyN", "HJepxiJ507", "SJe43cJcR7", "BJgtvCRFCQ", "rJl62p0F0X", "ryl__Ymjh7", "B1lIeTW537", "Byx_Yunv37", "Skgno2oEhX", "rJx6sNdCom", "rkxW2SgCo7", "HJxlECDns7" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review", "official_comment", "comment", "official_comment", "comment" ], "note_created": [ 1544819284544, 1544626966495, 1544626942037, 1544626885002, 1544626838488, 1544530972297, 1544489828134, 1544489747361, 1544477809564, 1543269108839, 1543269036050, 1543265888814, 1543265717250, 1541253487558, 1541180654265, 1541027967782, 1540828324244, 1540420772532, 1540388265055, 1540288040226 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper643/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper643/Authors" ], [ "ICLR.cc/2019/Conference/Paper643/Authors" ], [ "ICLR.cc/2019/Conference/Paper643/Authors" ], [ "ICLR.cc/2019/Conference/Paper643/Authors" ], [ "ICLR.cc/2019/Conference/Paper643/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper643/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper643/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper643/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper643/Authors" ], [ "ICLR.cc/2019/Conference/Paper643/Authors" ], [ "ICLR.cc/2019/Conference/Paper643/Authors" ], [ "ICLR.cc/2019/Conference/Paper643/Authors" ], [ "ICLR.cc/2019/Conference/Paper643/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper643/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper643/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper643/Authors" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper643/Authors" ], [ "(anonymous)" ] ], "structured_content_str": [ "{\"metareview\": \"The paper proposes a new RL algorithm (MIRL) in the control-as-inference framework that learns a state-independent action prior. A connection is provided to mutual information regularization. Compared to entropic regularization, this approach is expected to work better when actions have significantly different importance. The algorithm is shown to beat baselines in 11 out of 19 Atari games.\\n\\nThe paper is well written. The derivation is novel, and the resulting algorithm is interesting and has good empirical results. A few concerns were raised in initial reviews, including certain questions about experiments and potential negative impacts of the use of nonuniform action priors in MIRL. 
The author responses and the new version were quite helpful, and all reviewers agree the paper is an interesting contribution.\n\nIn a revised version, the authors are encouraged to\n (1) include a discussion of when MIRL might fail, and\n (2) improve the related work section to compare the proposed method to other entropy regularized RL (sometimes under a different name in the literature), for example the following recent works and the references therein: https://arxiv.org/abs/1705.07798 and http://proceedings.mlr.press/v80/dai18c.html\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Interesting contribution that improves on the widely used entropy regularized algorithms\"}", "{\"title\": \"reply\", \"comment\": \"We are thankful to the reviewer for noticing the improvements and raising the score.\"}", "{\"title\": \"reply\", \"comment\": \"We thank the reviewer for appreciating the improvements of the paper.\n\n\nThe attached link indeed shows a different epsilon value for evaluation (and other hyperparameters) used in this particular DQN implementation. An epsilon value for evaluation that differs from 0.05 was used in some of the previous literature (e.g. distributed DQN in Bellemare et al. 2017, prioritized double DQN in Schaul et al. 2016). However, earlier DQN papers do report an epsilon value of 0.05 for evaluation (original DQN in Mnih et al. 2015, double DQN in van Hasselt et al. 2016, prioritized DQN in Schaul et al. 2016). While an epsilon value of 0.01 might improve evaluation results, we feel a value of 0.05 is not unreasonable since we compare all methods under the same evaluation procedure. Additionally, we chose the other hyperparameters following the original DQN paper (Mnih et al. 2015).\"}", "{\"title\": \"reply\", \"comment\": \"We thank the reviewer for the feedback leading to improvements of the paper.\n\nIn the final version, we will add a couple of additional sentences clarifying why a limit on information rate might be beneficial at the initial stages of learning. In short, in prior work, it has been shown that the rate-distortion framework improves generalization in a supervised learning setting (Leibfried and Braun 2016). The intuition is that limits in transmission rate prevent overfitting on the training set. Similarly, in our work for the RL setting, limits in transmission rate prevent the agents from bootstrapping with a 'harsh' max-operator that would lead to overestimation and sample inefficiency, and instead have them use a softened version less prone to overestimation, with an adaptive prior that additionally improves exploration.\"}", "{\"title\": \"reply\", \"comment\": \"We thank the reviewer for raising the score and for the additional suggestions on analyzing potential limitations and drawbacks of our method. 
We will include a paragraph clarifying where our method might fail according to our pilot experiments, and perform additional experiments with a reward structure discouraging an infrequent action that is required to eventually succeed.\"}", "{\"title\": \"Raised score to 7\", \"comment\": \"I would like to thank the authors for their comments (both to mine and to others' reviews) and the updated paper.\n\nThe changes improve the paper; correspondingly, I raised my score from 6 to 7.\n\nHowever, I still believe that more informative experiments about the limitations and drawbacks of the proposed method would highly increase the value to the community, as it would allow readers to better judge whether the method should be incorporated in their work and, more importantly, it could point towards further research opportunities to improve on the presented work.\nConsequently, I would strongly encourage the authors to incorporate such experiments in their CRC version if the paper gets accepted. \n(I don't believe the current gridworld experiment actually shows the limitations, as its reward structure only discourages the infrequent action _after_ the first and only reward has already been found.)\"}", "{\"title\": \"RE:\", \"comment\": \"These answers address my questions.\"}", "{\"title\": \"RE:\", \"comment\": \"1. Great, the changes have improved clarity.\n\n2. The values are substantially different from previous work. See here for a summary of previous settings (https://github.com/google/dopamine/tree/master/baselines). This raises a red flag for the experiments.\n\n3. Great, appreciate the new experiments.\"}", "{\"title\": \"Response to authors\", \"comment\": \"Thank you for your response.\n\nI think this mostly addresses the concerns I raised.\n\nI appreciate the additional information regarding the rate-distortion view, although I'm not sure that this view is adding much over the more usual view (why limit the rate of information encoded by the policy?).\n\nOverall, I think this is interesting work and it now better addresses prior work.\n\nMy score was marginally positive, and I remain at this mostly due to the idea being relatively straightforward and the gains being fairly marginal.\"}", "{\"title\": \"Continuation\", \"comment\": \"Comments:\nThe abstract claims state-of-the-art performance; however, what is actually shown is that MIRL outperforms DQN and SQL.\n\n---------->[Attenuated wording] We have adjusted the formulation regarding the performance in the paper. We outperform DQN and SQL, both recent and high-performing algorithms (though not the best algorithms on ATARI). Our normalized scores are also close to those reported in the recent state-of-the-art RAINBOW paper, but we cannot make a direct comparison over different implementations and subsets of games.\n\n\n With a fixed prior, the action prior can be absorbed into the reward (e.g., Levine 2018), so it is of no loss of generality to assume a uniform prior.\n\n--------------->[Absorbing prior into reward] In the case of a uniform prior that is unaffected in the course of training, this is possible. In our algorithm, the prior is adapted in the course of training. 
In this case, keeping the prior separate allows for overcoming the problem of non-stationarity in the reward function.\n\nCould state that the stationary distribution is assumed to exist and be unique.\n\n------------>[Unique stationary state distribution] We now state in the paper that the stationary distribution is assumed to exist and be unique.\n\n\nIn Sec 3.1, why is the prior state independent?\n---------->[State-independent prior] We base our formulation on the rate-distortion framework that generalizes entropy regularization by having optimal state-independent priors. We provide some intuition for the one-step decision-making case in the background section.\n\n\nIn Sec 3.1, p(R = 1|\\tau) is defined to be proportional to exp(\\beta \\sum_t r_t). Is this well-specified? How would we compute the normalizing constant since p(R = 0 | \\tau) is not defined?\n\n----------->[Normalization constant] It is not required to compute the normalization constant explicitly since it would appear in Equation 5 as a constant that is unaffected by the optimization. More explicitly, the expectation of the log of the normalization constant of p(R=1|\\tau) w.r.t. q(\\tau) is just the log of the normalization constant of p(R=1|\\tau) without the expectation.\n\nThroughout, I suggest that the authors not use the phrases \"closed form\" and \"analytic\" for expressions that are in terms of intractable quantities.\n\n----------->[Wording] We modified the wording accordingly in the current version of the paper.\n\nIt should be noted that Sec 3.2 Optimal policy for a fixed prior \\rho follows from Levine 2018 and others by transforming the fixed prior into a reward bonus.\n\nIn Sec 3.2, the last statement does not appear to be necessary for the next subsection. Remove or clarify?\n---------->[Clarity] We added some clarifications to this section.\n\nI believe that the connection to MI can be simplified. Plugging in the optimal \\rho into Eq 3, we can see that Eq 3 simplifies to \\max_\\pi E_q[ \\sum_t \\gamma^t r_t] - (1 - \\gamma)/\\beta MI_p(s, a) where p(s, a) = d^\\pi(s) * \\pi(a | s) and d^\\pi is the discounted state visitation distribution. Thus Eq 3 can be thought of as a lower bound on the MI regularized objective.\n----------->[On simplified connection to MI] We moved the connection to mutual information for the case of \\gamma -> 1 to the appendix, and adopted another way to show this connection, similar to what the reviewer has proposed.\n\nIn Sec 4, the authors state the main difference between their soft operator and the typical soft operator. What other differences are there? Is that the only one?\n------------>The two main differences are an adaptive prior and an adaptive beta.\n\nSec 5 references the wrong Haarnoja reference in the first paragraph. In Sec 5, alpha_beta = 3 * 10^5. Is that correct?\n----------->We corrected this typo. It should be 3*10^-5.\"}", "{\"title\": \"Reply\", \"comment\": \"We are sorry for the delayed reply (the deadline was extended to the end of 26th November Anywhere on Earth time). We restate the reviewer's comments and denote our replies with arrows ( ---------> ).\n\nThe authors take the control-as-inference viewpoint and learn a state-independent prior (which is typically held fixed). They claim that this leads to better exploration when actions have different importance. They relate this objective to a mutual information constrained RL objective in a limiting case. 
They then propose a practical algorithm, MIRL, and compare their algorithm against DQN and Soft Q-learning (SQL) on 19 Atari games and demonstrate improvements over both.\n\nGenerally I found the idea interesting and at a high level the deficiency of entropy regularization makes sense. However, I had great trouble understanding the reasoning behind their method and did not find the connection to mutual information helpful. Furthermore, I had a number of questions about the experiments. If the authors can clarify their motivation and reasoning and strengthen the experiments, I'd be happy to raise my score.\n\nIn Sec 3.1, why is it sensible to optimize the prior? Can the authors give intuition for maximizing \\log p(R = 1) with respect to the prior? This is critical for justifying their approach. Currently, the authors provide a connection to MI, but don't explain why this matters. Does it justify the method? What insight are we supposed to take away from that?\n\n-------------> [On prior optimization and mutual-information] We extended the paper with an explanation of mutual information and rate-distortion theory, in order to help with an intuitive understanding of why this prior can help learning. We also added a related work section to note that other algorithms have considered optimizing the ELBO with respect to both variational and prior policy. However, these approaches do not use the marginal prior or have any connection to mutual information but instead optimise the policy while staying close to the previous policy. Additionally, we moved the connection to mutual information for the case of \\gamma -> 1 to the appendix, and adopted another way to show this connection, similar to what the reviewer has proposed.\n\nThe experiments could be strengthened by addressing the following:\n\n* What was epsilon during training? Why was epsilon = 0.05 in evaluation? This is quite high compared to previous work, and it makes sense that this would degrade MIRL's performance less than DQN's and SQL's.\n\n----------->[Epsilon in training and evaluation] Epsilon during training was decayed from 1.0 to 0.1 over the first 10^6 steps of the experiment. We used a fixed evaluation epsilon of 0.05. This procedure is standard in the literature for ATARI, as introduced by the DQN paper (see, e.g., Mnih et al., 2015). We understand that in later DQN papers (e.g. Rainbow) different values for these hyperparameters have been used, but we feel our choice is not unreasonable.\n\n* What is the performance of SQL if we use \\rho as the action selector in \\epsilon-greedy? This would help understand if the performance gains are due to the impact on the policy or due to the changes in the behavior policy.\n\n----------->[On marginal exploration] We have run additional experiments combining SQL with marginal exploration. Using the marginal exploration helps SQL, but MIRL still achieves the best performance.\n\n* Plotting beta over time\n\n----------->[Plotting beta] We include the beta values evolving over time in the appendix. Additionally, we also include a more relevant term (beta x Q-values).\n\n* Comparing the action distributions for SQL and MIRL to understand the impact of the penalty. In general, a deeper analysis of the impact on the policy is important.\n\n* Are there environments where we would expect MIRL to outperform SQL based on your theoretical understanding? Does it? 
* How many seeds were run per game?\n\n----------->[Policy and grid world] Responding to the previous two questions: We have added additional experiments and plots to the paper in an effort to provide more insight into the behavior of our method. These experiments include a simple grid world in which we expect MIRL to outperform SQL and a grid world in which we expect the prior to have negative effects (as suggested by another reviewer).\n\n* How and why were the 19 games selected from the full set?\n\n------------->[On other aspects] Due to computational constraints we were not able to run experiments on the full set of ATARI games. Therefore, we selected a subset of 20 random games, without prior experimentation on any of the games. We then evaluated our method using a single seed for every game. Data for experiments on one game were lost because of a cloud instance failure.\"}", "{\"title\": \"Added Grid World experiments, Related Work section and better connection to Mutual Information\", \"comment\": \"We thank the reviewer for the comments. Below we attempt to address each of the points raised by the reviewer.\n\nBackground and related work:\nWe have expanded the paper with a section highlighting the connection between the rate-distortion framework and the mutual information constraint. We hope that this connection can help provide some intuitive insight into why our method can improve performance.\n\nWe have also added a related work section more clearly positioning our work with respect to existing algorithms (such as MPO and DistRL).\n\nExperiments:\nWe have included a new set of experiments on a small tabular domain. While simple, we hope that this domain can provide more insight into the performance of the algorithm.\n\n\nDue to computational constraints we were not able to perform a complete search for optimal hyperparameter combinations in the Atari domain. Hyperparameter values were chosen by using values reported in the literature. Values for the new parameters introduced by MIRL were fixed by running a small number of exploratory experiments. Overall, we found the algorithm to be robust to changes in these values. All other hyperparameters were kept the same for all algorithms. \n\n\nWhile it is true that the prior does not converge in all of our ATARI experiments, we note that during the later stages of learning the plots do show a higher probability for subsets of actions. We have empirically observed that convergence of the prior can take a very long time, especially when the learner is still improving. We expect that, given enough time, the probabilities of the marginal policy will eventually settle. Additionally, in these experiments we used a non-decaying learning rate for the marginal policy. This means that we can expect some oscillation due to the tracking behaviour of our approximation, while the policy and state distribution still change.\"}", "{\"title\": \"Added results on Grid World and added Related Work section\", \"comment\": \"We thank the reviewer for the comments.\n\nWe have updated the manuscript with additional experiments in a grid-world domain aimed at answering the reviewer's concerns. The additional experiments are aimed at better understanding the behaviour of our mutual-information constraint. We demonstrate that our method clearly improves learning speed when there is a strong preference for a single action in the optimal policy. 
We also examine an example in which the optimal policy crucially depends on an action with low probability in the marginal distribution. While MIRL does not improve performance in this case, it does not exhibit negative effects. We show that the learnt policy overcomes the prior when necessary for performance. \n\nAdditionally, we have added a related work section that positions and compares our work with the existing literature on inference-based RL and maximum entropy RL in particular.\"}", "{\"title\": \"Interesting idea, more experimental results needed\", \"review\": \"** Summary: **\n\nThe authors use the reformulation of RL as inference and propose to learn the prior policy. The novelty lies in learning a state-independent prior (instead of a state-dependent one) that can help exploration in the presence of universally unnecessary actions. They derive an equivalence to regularizing the mutual information between states and actions.\n\n** Quality: **\nThe paper is mathematically detailed and correct.\n\n** Clarity: **\nThe paper is sufficiently easy to follow and explains all the necessary background.\n\n** Originality & Significance: **\nThe paper proposes a novel idea: Using a learned state-independent prior as opposed to using a learned state-dependent prior. While not a big change in terms of mathematical theory, this could lead to positive and interesting results empirically for exploration. Indeed, they show promising results on Atari games: It is easy to see how Atari games could benefit as they have up to 18 different actions, many of which are redundant.\n\nMy two main points where I think the paper could improve are:\n* More experimental results, in particular, how strong are the negative effects of MIRL if we have actions that are important, but have a lower probability in the stationary action distribution?\n* A related work section comparing their approach to the many recent similar papers in Maximum Entropy RL\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Simple approach that appears to work well\", \"review\": \"This work introduces SoftQ with a learned, state-independent prior. One derivation of this objective follows standard approaches from RL-as-inference to derive the ELBO objective.\n\nA more novel view derived here connects this objective with the rate-distortion problem to view the objective as an RL objective subject to a constraint on the mutual information between the state and action distribution.\n\nThey also outline a practical off-policy algorithm for optimizing this objective and compare it with Soft Q Learning (essentially, the same method but with a flat prior) and DQN. They find that this results in small gains across most Atari games, with big gains for a few games.\n\nThis work is well-explained except in one aspect. The rate-distortion view of the objective is not well-justified. In particular, why is it desirable in the context of RL to constrain this mutual information?\n\nEmpirical Deep RL performance is notoriously difficult to test (e.g. Henderson et al., 2017). The hyper-parameters are simply stated here, but no justification is given for how they are chosen / whether the baselines perform better under different choices. 
Given that the gains compared with SoftQ are not that large, this information is important for understanding how much weight to place on the empirical result.\\n\\nThe fact that the prior does not converge in some environments (e.g. Seaquest) is noted, but it seems this bears further discussion.\", \"overall_it_appears_this_work_provides\": [\"An algorithm for Soft Q learning with a learned independent prior\", \"Moderate evidence for gains compared with a flat prior on Atari.\", \"A connection between this approach and regularization by constraining the mutual information between state and action distributions.\", \"It could be made a stronger piece of work by showing improvements in domains other than Atari and by justifying the choice of regularization more. It would also benefit from positioning this work more clearly in relation to related approaches such as MPO (non-parametric state-dependent prior) and DistRL (state-dependent prior but shared across all games).\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Interesting, but motivation and experiments need improvements\", \"review\": \"The authors take the control-as-inference viewpoint and learn a state-independent prior (which is typically held fixed). They claim that this leads to better exploration when actions have different importance. They relate this objective to a mutual information constrained RL objective in a limiting case. They then propose a practical algorithm, MIRL, and compare their algorithm against DQN and Soft Q-learning (SQL) on 19 Atari games, demonstrating improvements over both.\\n\\nGenerally I found the idea interesting, and at a high level the deficiency of entropy regularization makes sense. However, I had great trouble understanding the reasoning behind their method and did not find the connection to mutual information helpful. Furthermore, I had a number of questions about the experiments. If the authors can clarify their motivation and reasoning and strengthen the experiments, I'd be happy to raise my score.\\n\\nIn Sec 3.1, why is it sensible to optimize the prior? Can the authors give intuition for maximizing \\\\log p(R = 1) wrt the prior? This is critical for justifying their approach. Currently, the authors provide a connection to MI, but don't explain why this matters. Does it justify the method? What insight are we supposed to take away from that?\", \"the_experiments_could_be_strengthened_by_addressing_the_following\": [\"What was epsilon during training? Why was epsilon = 0.05 in evaluation? This is quite high compared to previous work, and it makes sense that this would degrade MIRL's performance less than DQN and SQL.\", \"What is the performance of SQL if we use \\\\rho as the action selector in \\\\epsilon-greedy? This would help understand if the performance gains are due to the impact on the policy or due to the changes in the behavior policy.\", \"Plotting beta over time\", \"Comparing the action distributions for SQL and MIRL to understand the impact of the penalty. In general, a deeper analysis of the impact on the policy is important.\", \"Are there environments in which we would expect MIRL to outperform SQL based on your theoretical understanding? 
Does it?\", \"How many seeds were run per game?\", \"How and why were the 19 games selected from the full set?\"], \"comments\": \"The abstract claims state-of-the-art performance; however, what is actually shown is that MIRL outperforms DQN and SQL.\\n\\nWith a fixed prior, the action prior can be absorbed into the reward (e.g., Levine 2018), so there is no loss of generality in assuming a uniform prior.\\n\\nCould state that the stationary distribution is assumed to exist and be unique.\\n\\nIn Sec 3.1, why is the prior state independent?\\n\\nIn Sec 3.1, p(R = 1|\\\\tau) is defined to be proportional to exp(\\\\beta \\\\sum_t r_t). Is this well-specified? How would we compute the normalizing constant since p(R = 0 | \\\\tau) is not defined?\\n\\nThroughout, I suggest that the authors not use the phrases \\\"closed form\\\" and \\\"analytic\\\" for expressions that are in terms of intractable quantities. \\n\\nIt should be noted that Sec 3.2 Optimal policy for a fixed prior \\\\rho follows from Levine 2018 and others by transforming the fixed prior into a reward bonus.\\n\\nIn Sec 3.2, the last statement does not appear to be necessary for the next subsection. Remove or clarify?\\n\\nI believe that the connection to MI can be simplified. Plugging in the optimal \\\\rho into Eq 3, we can see that Eq 3 simplifies to \\\\max_\\\\pi E_q[ \\\\sum_t \\\\gamma^t r_t] - (1 - \\\\gamma)/\\\\beta MI_p(s, a) where p(s, a) = d^\\\\pi(s) * \\\\pi(a | s) and d^\\\\pi is the discounted state visitation distribution. Thus Eq 3 can be thought of as a lower bound on the MI regularized objective.\\n\\nIn Sec 4, the authors state the main difference between their soft operator and the typical soft operator. What other differences are there? Is that the only one?\\n\\nSec 5 references the wrong Haarnoja reference in the first paragraph.\\n\\nIn Sec 5, alpha_beta = 3 * 10^5. Is that correct?\\n\\n=====\\n11/26\\nAt this time, the authors have not responded to the reviews. I have read the other reviews and comments, and I'm not inclined to change my score.\\n\\n====\\n12/7\\nThe authors have addressed most of my concerns, so I have raised my score. I'm still concerned that the exploration epsilon is quite different from existing work (e.g., https://github.com/google/dopamine/tree/master/baselines).\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Reply\", \"comment\": \"Both our algorithm and MPO can be seen as optimizing the same evidence lower bound (ELBO). MPO proposes a general coordinate-ascent-type optimization in which the ELBO is updated in alternating steps, either with respect to the variational policy or the prior policy (while the other policy is kept fixed). Different design choices for the policies and optimization procedures give rise to different, but related, algorithms. This approach is also common in variational-inference-based policy search and describes a large family of related policy search algorithms (see Deisenroth et al., 2013 for an overview).\\n\\nOur algorithm follows recent soft Q-learning algorithms (e.g. Fox et al., 2016, Haarnoja et al., 2017). These algorithms consider the same ELBO, but omit the optimization with respect to the prior policy and only optimize the variational policy pi. This can be seen as an entropy-regularized version of standard Q-learning algorithms. 
When the prior is fixed to be a constant uninformative policy, this procedure reduces to max-entropy policy learning. The algorithm replaces the classic Bellman operator with a soft Bellman operator to prevent deviations from a state-independent fixed prior policy. Several papers (e.g. Haarnoja et al. 2017, Schulman et al. 2017) have shown that these \\u201csoftened\\u201d algorithms offer advantages over their unsoftened counterparts, in terms of exploration, generalization and composability. Our approach further improves on soft Q-learning (as shown in our Atari experiments) by allowing for optimizing the prior (while still being state-independent). As shown in the paper, this results in a mutual information constraint (rather than a max entropy constraint) on the resulting policy.\\n\\nSo while we follow the same general scheme as soft Q-learning, we do update our prior policy as in the MPO algorithm. However, contrary to MPO, we do not consider the alternating, coordinate descent style optimization. Rather than executing a separate prior maximization step, we solve the ELBO for the optimal prior in the special case of state-independent priors. We then directly estimate this optimal prior in our algorithm, instead of performing a gradient style update on the ELBO. While it is possible to consider the same class of state-independent priors with MPO, the way in which both algorithms optimize the ELBO will still be different. \\n\\nA modified MPO that uses a state-independent generative policy would converge to a solution that is penalized by an optimal marginal policy. However, since the parameter epsilon (that determines the deviation between the variational and the generative policy) is fixed and not scheduled in the course of training, the final solution is still constrained by the marginal policy, which is sub-optimal because it is state-independent. This constraint would essentially limit the asymptotic performance of such a modified MPO. Of course, this could be alleviated by setting epsilon to a large value, but this would correspond to an ordinary actor-critic approach without any regularization in the policy.\\n\\nIf the prior policy in our algorithm is replaced by a state-dependent prior, the optimal solution for such a prior is the variational policy (i.e. pi) itself. This would essentially eliminate the KL-constraint and reduce our algorithm to standard Q-learning. Q-learning is known to suffer from sample-inefficiency caused by the hard max-operator in the target (this leads to overestimated q-values). This is exactly the problem that has been addressed by soft Q-learning with entropy regularization. \\n\\nDeisenroth, M. P., Neumann, G., & Peters, J. (2013). A survey on policy search for robotics. Foundations and Trends\\u00ae in Robotics, 2(1\\u20132), 1-142.\\n\\nSchulman, J., Chen, X., & Abbeel, P. (2017). Equivalence between policy gradients and soft q-learning. arXiv preprint arXiv:1704.06440.\\n\\nHaarnoja, T., Tang, H., Abbeel, P., & Levine, S. (2017). Reinforcement learning with deep energy-based policies. arXiv preprint arXiv:1702.08165.\\n\\nFox, R., Pakman, A., & Tishby, N. (2016). Taming the noise in reinforcement learning via soft updates. UAI.\"}", "{\"comment\": \"Thank you very much for the reply.\\n\\nThen if MPO uses a state-independent generative policy, will it reduce to the proposed algorithm?\\nI understand that a learned state-independent generative policy is better than a uniform one. 
My question is: why should a state-independent generative policy be better than the state-dependent generative policy used by MPO?\", \"title\": \"Question\"}", "{\"title\": \"Differences between MPO and our approach\", \"comment\": \"Thank you for your comment.\\n\\nFraming RL as an inference problem has been addressed before in the literature [1,2] and can be done in different ways. The difference between the variational inference formulation in MPO and our variational inference formulation is the following:\\n- The policy of the generative model in our case is state-independent (similar to [1]), with the optimal solution being the marginal distribution over actions ([1] does not consider an optimal marginal distribution though). In contrast, in MPO the generative policy is state-dependent and given by the previous-round behavioural policy. \\n\\nImportantly, our specific choice of state-dependent variational policy and state-independent generative policy directly leads to a mutual information regularizer. Note that the mutual information is not any expected KL, but a specific expected KL under the assumption of an optimal marginal policy (which is exactly what we model). MPO does not have the notion of an optimal marginal policy (in the sense of a state-independent marginal policy) and therefore the expected KL in MPO is not a mutual information.\\n\\nIn our experimental section we empirically validate that our mutual information regularized objective leads to improvements over soft Q-learning (see [1]) where the generative policy is also state-independent but not subject to optimization (but instead given by a uniform distribution). \\n\\nWe will clarify this point in a revised version of the manuscript.\\n\\n[1] Levine, S. Reinforcement Learning and Control as Probabilistic Inference: Tutorial and Review. arXiv 2018.\\n[2] Neumann, G. Variational Inference for Policy Search in changing Situations. ICML 2011.\"}", "{\"comment\": \"Hello,\\n\\nThanks for the paper. I would like to point out a paper from ICLR 2018 that shares similarities in both \\n\\n1- The derivation of the RL objective from an inference perspective \\n2- The resulting objective function for learning the prior \\n\\nplease see:\\n\\nMaximum a-Posteriori Policy Optimisation\\nhttps://arxiv.org/pdf/1806.06920.pdf\\n\\nIn the paper above, the mutual information (or expected KL) regularized objective is derived in the E-step (see equation 7). And the optimal solution is given in (8) when a non-parametric variational distribution is used. \\n\\nIt would be useful if the authors discussed the connections and differences.\\n\\nThank you,\", \"title\": \"Connection to prior work\"}" ] }
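To make the exchange above concrete, here is a minimal tabular sketch of soft Q-learning with a learned, state-independent action prior, the scheme the thread debates. This is an illustration of the idea, not the authors' implementation: the step sizes and the running-average update for the prior are assumptions, while the soft value V(s) = (1/beta) log sum_a rho(a) exp(beta Q(s,a)) and the policy pi(a|s) proportional to rho(a) exp(beta Q(s,a)) follow the RL-as-inference formulation cited in the discussion (e.g. Fox et al. 2016).

```python
import numpy as np

def policy(Q, rho, s, beta):
    # pi(a|s) is proportional to rho(a) * exp(beta * Q(s, a)).
    logits = beta * Q[s] + np.log(rho + 1e-8)
    p = np.exp(logits - logits.max())
    return p / p.sum()

def soft_value(Q, rho, s, beta):
    # V(s) = (1/beta) * log sum_a rho(a) * exp(beta * Q(s, a)),
    # computed with a max-shift for numerical stability.
    x = beta * Q[s] + np.log(rho + 1e-8)
    m = x.max()
    return (m + np.log(np.exp(x - m).sum())) / beta

def soft_q_update(Q, rho, s, a, r, s_next, beta,
                  lr_q=0.1, lr_rho=0.01, gamma=0.99):
    # Soft Bellman backup toward r + gamma * V(s_next).
    target = r + gamma * soft_value(Q, rho, s_next, beta)
    Q[s, a] += lr_q * (target - Q[s, a])
    # Track the (approximately optimal) state-independent prior as a
    # running average of the policy's action marginal; with a constant
    # step size this can oscillate while pi and the state distribution
    # still change, as the authors note for their Atari runs.
    rho[:] = (1.0 - lr_rho) * rho + lr_rho * policy(Q, rho, s, beta)
```

In this sketch, holding rho fixed and uniform recovers standard soft Q-learning, which is exactly the reduction the authors describe in their reply.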
HyztsoC5Y7
Learning to Adapt in Dynamic, Real-World Environments through Meta-Reinforcement Learning
[ "Anusha Nagabandi", "Ignasi Clavera", "Simin Liu", "Ronald S. Fearing", "Pieter Abbeel", "Sergey Levine", "Chelsea Finn" ]
Although reinforcement learning methods can achieve impressive results in simulation, the real world presents two major challenges: generating samples is exceedingly expensive, and unexpected perturbations or unseen situations cause proficient but specialized policies to fail at test time. Given that it is impractical to train separate policies to accommodate all situations the agent may see in the real world, this work proposes to learn how to quickly and effectively adapt online to new tasks. To enable sample-efficient learning, we consider learning online adaptation in the context of model-based reinforcement learning. Our approach uses meta-learning to train a dynamics model prior such that, when combined with recent data, this prior can be rapidly adapted to the local context. Our experiments demonstrate online adaptation for continuous control tasks on both simulated and real-world agents. We first show simulated agents adapting their behavior online to novel terrains, crippled body parts, and highly-dynamic environments. We also illustrate the importance of incorporating online adaptation into autonomous agents that operate in the real world by applying our method to a real dynamic legged millirobot: We demonstrate the agent's learned ability to quickly adapt online to a missing leg, adjust to novel terrains and slopes, account for miscalibration or errors in pose estimation, and compensate for pulling payloads.
[ "meta-learning", "reinforcement learning", "meta reinforcement learning", "online adaptation" ]
https://openreview.net/pdf?id=HyztsoC5Y7
https://openreview.net/forum?id=HyztsoC5Y7
ICLR.cc/2019/Conference
2019
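The abstract's phrase "train a dynamics model prior such that, when combined with recent data, this prior can be rapidly adapted" can be written compactly. One way to express the gradient-based variant (GrBAL) that the threads below refer to — adapt on the past M steps, evaluate on the next K — is the following; the inner step size alpha and the squared-error form of the model loss are illustrative assumptions, not quotes from the paper:

```latex
\min_{\theta}\; \mathbb{E}_{\tau}\Big[\, \mathcal{L}\big(\tau_{t:t+K},\, \theta'\big) \Big]
\quad\text{with}\quad
\theta' \;=\; \theta \;-\; \alpha\, \nabla_{\theta}\, \mathcal{L}\big(\tau_{t-M:t},\, \theta\big),
\qquad
\mathcal{L}(\tau,\theta) \;=\; \sum_{(s_i,\, a_i,\, s_{i+1}) \in \tau} \big\lVert \hat{f}_{\theta}(s_i, a_i) - s_{i+1} \big\rVert_2^2 .
```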
{ "note_id": [ "H1gmGW1-g4", "ByxoVJewyN", "S1lzTCLUJN", "r1eBKWeYCX", "SJeiiK_NAm", "rkxmhBF10Q", "ByeKOSYJRm", "H1x4xrK1Rm", "SJeknkUiTQ", "Skgo5yLsT7", "BJxo8zPU67", "ryx-_D6z67", "rJlYYIcC2X", "rkl7x9ea3X", "BJeoWBEghm", "S1gLnA52oQ" ], "note_type": [ "meta_review", "official_comment", "comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_review", "official_comment", "comment" ], "note_created": [ 1544773899475, 1544122163106, 1544085177933, 1543205244657, 1542912418728, 1542587818885, 1542587760757, 1542587628002, 1542311847161, 1542311827059, 1541988946774, 1541752681435, 1541478017514, 1541372395506, 1540535555233, 1540300461557 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper642/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper642/Authors" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper642/Authors" ], [ "ICLR.cc/2019/Conference/Paper642/Authors" ], [ "ICLR.cc/2019/Conference/Paper642/Authors" ], [ "ICLR.cc/2019/Conference/Paper642/Authors" ], [ "ICLR.cc/2019/Conference/Paper642/Authors" ], [ "ICLR.cc/2019/Conference/Paper642/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper642/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper642/AnonReviewer4" ], [ "ICLR.cc/2019/Conference/Paper642/Authors" ], [ "ICLR.cc/2019/Conference/Paper642/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper642/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper642/Authors" ], [ "(anonymous)" ] ], "structured_content_str": [ "{\"metareview\": \"The authors consider the use of MAML with model based RL and applied this to robotics tasks with very encouraging results. There was definite interest in the paper, but also some concerns over how the results were situated, particularly with respect to the related research in the robotics community. The authors are strongly encouraged to carefully consider this feedback, as they have been doing in their responses, and address this as well as possible in the final version.\", \"confidence\": \"3: The area chair is somewhat confident\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Promising work, should make sure final version carefully references robotics literature\"}", "{\"title\": \"Thank you for your comment\", \"comment\": \"In the real-robot data collection is expensive, training a single model for the different terrains and conditions allows us to make a more efficient use of the data. Instead, in simulation we have separate experiments to have a more controlled comparison.\\n\\nThe task distribution during training and testing does indeed need to match in theory. However, generating sufficiently diverse tasks requires a considerable engineering effort. It's often hard to get the kind of task diversity needed to evaluate things effectively with distributions that truly match. For instance, in the disabled half-cheetah task the agent just has 6 joints, so if we want one held-out joint there will be inevitably some distribution mismatch. While this is certainly a shortcoming in our experiments, we believe this setup overall is reasonable, and the comparison to prior methods is informative about overall performance of each method.\\n\\nRegarding the neural network size, we used a 3 layer NN with 512 units per layer and ReLU activations for all the feed-forward models, and a LSTM with 512 hidden units for the recurrent models. 
We will incorporate this into the paper.\\n\\nPlease let us know if you have further doubts or questions. And thanks for pointing out the typo!\"}", "{\"comment\": \"This paper proposes a model-based meta-reinforcement learning method that achieves good results and enables fast adaptation in dynamic environments. While I understand the paper better with the reviewers' comments and the recent improvements to the paper, there are still a few things that are not so clear to me. It is without question that the assumption on whether the meta-test tasks are drawn from the same task distribution as the meta-training tasks is subjective, so I would like to know how the authors decided on the task distribution for training and testing. For example, for the half-cheetah (HC) experiments, the authors presented the results separately for HC disabled joint, HC slope terrains and HC pier, and I suppose that three different models are trained for each of these experiments. Is this because of the differences in the task distributions between these three experiments? However, in the experiments with the millirobot, the authors meta-trained the agent on three different terrains with random trajectories, but tested the agent on various meta-test tasks such as a missing leg, a slope, an added payload, etc., and it does not seem obvious to me that these are from the same task distribution as the meta-training tasks. Did the authors make the assumption that these tasks are supposed to be in the same task distribution as the meta-training tasks? If so, why didn't the authors make the assumption that HC disabled joint, HC slope terrains and HC pier come from the same task distribution and just train one model for these three experiments, just like the experiments with the millirobot?\\n\\nMy second concern is that the deep neural network architectures for the experiments are not mentioned at all. Since the expressive power of neural networks is limited by their size, I wonder if the architecture and size of the deep neural networks will also limit the adaptation capability of the agent to different tasks.\\n\\nPlease correct me if I have some misunderstandings about the paper. Thank you.\\n\\nIf I am not mistaken, there is a typo on page 8, in section 6.3: \\\"in comparison to the aforementioned methods.\\\"\", \"title\": \"Subjective assumption on whether the meta-test tasks are drawn from the same task distribution as the meta-training tasks\"}", "{\"title\": \"Paper updated to address reviewer feedback\", \"comment\": [\"We believe that we have addressed all of the reviewers' concerns. 
We would appreciate it if the reviewers could take a look at our changes and let us know if they would like to revise their rating or request additional changes that would alleviate their concerns.\", \"In summary, here are the main changes that we made to the paper:\", \"Ran a sensitivity analysis over the parameters K and M, and added a discussion section in the appendix regarding the selection of these values (R3)\", \"Edited the experiments section to clarify our main empirical insights (R4)\", \"Fixed the notational discrepancy in section 3 and added an explanation in section 4 regarding environments and rollouts (R4)\", \"Edited and added to the plot in section 6.2 to now include all experiments/comparisons of interest (R1, R4)\", \"Edited the related work to clarify the technical contributions of our method over MAML and prior work (R1, R4)\", \"Extended the related work section to incorporate citations for model-based RL, adapting inverse dynamics models, and the suggested recent model-based RL citation (R1)\", \"Edited the introduction to scope the claims more carefully (R1)\", \"Edited the experiments section's text and citations to clarify the misunderstanding regarding our choice of MPC controllers for each method of our comparisons (R1)\", \"Edited and clarified the methods and experiments to address all 4 of R1\\u2019s requested experimental comparisons (explicitly including #1/2/4, and running experiments to confirm that #3 does indeed fail) (R1)\", \"Edited the text to make it clear that we do already perform the suggested model-bootstrapping (R1)\", \"Edited the experiments to clearly differentiate meta-training time from test time (R1)\"]}", "{\"title\": \"Regarding related work\", \"comment\": \"Regarding prior work, as requested, we have extended the related work section by incorporating prior work on model-based control. In particular, we have added references on adaptive control methods [1-7] and online system identification [8]. Please let us know if we should include any specific paper; we will be happy to include and discuss it.\\n\\n\\n[1] Sastry, Sosale Shankara and Isidori, Alberto. Adaptive control of linearizable systems. IEEE Transactions on Automatic Control, 1989.\\n[2] Meier, Franziska and Schaal, Stefan. Drifting Gaussian Processes with Varying Neighborhood Sizes for Online Model Learning. ICRA 2016.\\n[3] Meier, Franziska and Kappler, Daniel and Ratliff, Nathan and Schaal, Stefan. Towards Robust Online Inverse Dynamics Learning. IROS 2016.\\n[4] Pastor, Peter and Righetti, Ludovic and Kalakrishnan, Mrinal and Schaal, Stefan. Online movement adaptation based on previous sensor experiences. IROS 2011.\\n[5] Underwood, Samuel J. and Husain, Iqbal. Online parameter estimation and adaptive control of permanent-magnet synchronous machines. Transactions on Industrial Electronics 2010.\\n[6] Kelouwani, Sousso and Adegnon, Kokou and Agbossou, Kodjo and Dube, Yves. Online system identification and adaptive control for PEM fuel cell maximum efficiency tracking. Transactions on Energy Conversion 2012.\\n[7] Rai, Akshara and Sutanto, Giovanni and Schaal, Stefan and Meier, Franziska. Learning Feedback Terms for Reactive Planning and Control. ICRA 2017. \\n[8] Manganiello, Patrizio and Ricco, Mattia and Petrone, Giovanni and Monmasson, Eric and Spagnuolo, Giovanni. Optimization of Perturbative PV MPPT Methods Through Online System Identification. 
Transactions on Industrial Electronics 2014.\"}", "{\"title\": \"Thank you for your feedback!\", \"comment\": \"We thank the reviewer for their valuable feedback and agree that the strength of our approach comes from being able to adapt the dynamics model to the local dynamics. We do include a model-free RL algorithm in our experiments, but this is a prior method that is included only for comparison: we clarify that both of our approaches are model-based, and neither is model-free.\\n\\nWe also clarify that we do not choose \\\"M steps within a K-long horizon.\\\" We have edited Section 4 of the paper to properly specify this. We use information from the past M steps to adapt the meta-learned model and predict the future K steps; this is done at every time-step of the rollout. In this setup, K and M are simply hyperparameters.\\n\\nWe have added to Appendix D a sensitivity analysis of the values K and M for GrBAL. The results show that our approach is not particularly sensitive to those values. We also added a discussion in Appendix D of how the values can be determined -- the optimal values depend on various task details, such as the amount of information present in the state (a fully-informed state variable precludes the need for additional past timesteps) and the duration of a single timestep (a longer timestep duration makes it harder to predict more steps into the future).\\n\\nLastly, given our clarifications, it would really help us if the reviewer could clarify what they meant by \\\"optimal selection of the recovery points\\\" -- what does \\\"recovery points\\\" mean in this context?\"}", "{\"title\": \"Thanks again for the feedback\", \"comment\": \"Thank you for taking the time to respond; we really appreciate your detailed feedback. We believe that we can address all of your concerns; please let us know if the revisions and modifications we describe below have addressed these issues. Thanks again for helping us improve the paper!\\n\\n(i) Meta-learning for online adaptation to dynamics has not been proposed in prior work. [1] trains for episodic adaptation, rather than online adaptation, showing good adaptation performance after several trials rather than several timesteps. We have clarified the introduction to scope the claims more carefully, but we do believe this is a novel contribution. However, if there is any other citation that covers this approach, we would be happy to reference and discuss it. We have made a best-faith attempt to cover all topics you referenced in your comment.\\nRegarding prior work on model-based meta-RL, to our knowledge [1] is the only prior work that uses both meta-learning and model-based RL together. If there are any others, we would be happy to cite and discuss them as well. While we agree that [1] makes a valuable contribution, the technique is very different from ours and is specific to non-parametric latent variable models, while our method addresses parametric models. Further, we explicitly train for online adaptation (i.e., using only M timesteps of data for adaptation). Instead, their approach trains for episodic adaptation (i.e. using around M trajectories). Finally, we evaluate our approach on a real-world robotic system, while the prior paper evaluates on cart-pole and pendulum.\\n\\n(ii) We have edited sections 2 and 4 to highlight the technical contributions over MAML. Our method is not a straightforward application of MAML to model-based RL. MAML requires a distribution over tasks to be hand-specified in advance. 
Our method removes this assumption by developing an online formulation of meta-learning where \\u201ctasks\\u201d correspond to segments of time and are provided implicitly by the environment. In addition to the empirical contributions, we believe that this does constitute a novel conceptual contribution.\\n\\n(iii)\\n- Regarding section 6.2 & MPPI: we added learning curves of MB and MB+DE to the plots. We edited all of the plot legends to clarify which planner is used for each method. All simulated comparisons use MPPI for all methods. We have fixed the citation of Nagabandi et al. 2017a by replacing it with [2].\\n\\n- Regarding model bootstrapping: Sorry for the misunderstanding on our end. We edited the paper to clarify the following --- at training time, we iteratively collect and aggregate data using MPPI-based MPC for all model-based methods (our method, MB, and MB+DE), collecting data in the loop of training. As a result, our \\u201cMB\\u201d and \\u201cMB+DE\\u201d comparisons correspond to MPPI with model-bootstrapping [2], with and without adaptation (respectively), when collecting roll-outs during training & run-time. For our method, we also use bootstrapping, meta-learning the dynamics models iteratively (see lines 5 and 6 of Alg. 1). We therefore believe that our comparison is set up properly and that the paper adequately communicates this, but we would appreciate any feedback you might have here, and we would be happy to alter the comparison if needed.\\n\\n- \\u201cmake it very explicit\\u201d: We edited the paper to make it clear that there is a training and a run-time phase (which we refer to as meta-training and testing).\\n\\nFinally, we would emphasize that results on difficult problems, including substantial performance gains on 5 distinct tasks and including real-world robotic control problems, are also a contribution of our work. Algorithms that improve on prior work in terms of efficiency and generalization are of interest to the community, even when they build on ideas that were presented in prior work. If this were not the case, then most papers on model-based RL (a very old idea in itself) and RL for robotics would not be publishable. Therefore, we do not think that the criticism that there are other model-based RL papers, other meta-learning papers, or even other meta-learning model-based RL papers by itself precludes publication. We do, however, strongly agree with the reviewer that citing and discussing all relevant prior work, and appropriately scoping the claims, is critical, and we have endeavored to do so. We are grateful for any help and advice to do this better.\\n\\n[1] Meta Reinforcement Learning with Latent Variable Gaussian Processes, UAI 2018\\n[2] MPPI with model-bootstrapping: Information Theoretic MPC for Model-Based Reinforcement Learning, ICRA 2017\"}", "{\"title\": \"Thank you for the feedback!\", \"comment\": \"We thank the reviewer for the valuable feedback, and we clarify the individual points below. We have edited the paper to address each of the concerns raised in the review, and we would appreciate additional feedback regarding whether we have addressed the reviewer's concerns about the paper or if the reviewer has anything else they would like us to improve.\\n\\n\\\"Results not unexpected\\\":\\nWe agree that the sample efficiency of model-based RL is generally known, and we have revised Section 6.2 to explicitly state it. 
Our intent is not to claim that model-based RL is more sample-efficient than model-free RL (which, as the reviewer stated, is well known), but rather to show that meta-training for fast adaptation can improve over directly running online model updates with a model trained with standard model-based RL. Note that the comparison to \\\"MB-DE\\\" in Section 6.3 is precisely this comparison: adapting our meta-trained models outperforms adapting these standard model-based RL models by a large margin.\\n\\nThe takeaway of this work is fast adaptation of expressive dynamics models. For instance, a real robot adapting online (in milliseconds) to unseen and drastic dynamics changes has not been shown in prior work that we know of. We emphasize that our meta-trained model can adapt in less than a second, whereas model-based RL from scratch takes minutes or hours.\\n\\nRelation to MAML/prior work:\\nWe have edited section 2 of the paper to clarify the relation to MAML. In summary: MAML assumes access to a hand-designed distribution of tasks. Instead, one of our primary contributions is the online formulation of meta-learning, where tasks correspond to temporal segments, enabling \\u201ctasks\\u201d to be constructed automatically from the experience in the environment; MAML is a very general algorithm, but it has not been previously demonstrated on online learning problems.\\n\\nEquation 3:\\nWe have fixed the discrepancy and added several clarifications in Section 4. Our method uses M consecutive steps to predict the next K steps, which makes the assumption that the environment is constant for M+K timesteps. As a result, only a fraction of the roll-out, i.e. M+K timesteps, has to correspond to the same environment. The underlying assumption is that the subsequence of data does indeed come from the same environment. In our experiments, M+K is 0.5 seconds, making this assumption true most of the time. The fast adaptation (F.A.) environments in Section 6.3 show this adaptation occurring as the environment keeps changing within the rollouts.\\n\\nSection 6.2:\", \"we_fixed_the_typo_that_was_originally_in_the_caption\": \"GrBAL and ReBAL are our proposed meta-learning algorithms, so there is indeed meta-learning in this experiment. In this plot, we aim to show that our model-based meta-learning approaches achieve high performance while using 1000x less data than the two model-free approaches. Finally, we edited this plot by adding two more comparisons to further clarify the benefit of our model-based meta-learning approach over standard non-meta-learned model-based approaches.\\n\\nThe reviewer's comment about the asymptotic performance is very relevant, so we added it to the text in section 6.2. We agree that the development of some model-based/model-free hybrid would be great, and we plan to do this in future work.\"}", "{\"title\": \"continued\", \"comment\": \"Some general comments:\\n-----------------------\\nThe presentation of your approach would benefit from making it very explicit that you have a training-time phase (model-based RL to learn models) and a run-time phase (model-predictive control with model adaptation). This is a particularly confusing component of your evaluations, so please be clear about what phase you're in, and if your evaluations are meant to evaluate run-time adaptation then you should explain how all methods were initialized.\\n\\nDo not cite work as a baseline that you do not actually use as a baseline. 
MPPI with neural networks for dynamics models exists.\", \"to_mppi_with_model_bootstrapping_you_say\": \"\\\"The difference between the requested point (4) and our existing point (2) is the collection of expert data for initializing the training data set. Being able to collect expert samples is a strong assumption, requiring either human demonstrations or knowledge of the ground truth model, and does not fall under the assumptions of our problem setting.\\\"\\n\\n\\n\\nI don't understand this comment. MPPI with model-bootstrapping does not require an initial training data set, but it would help of course. What I meant is that at run time, you could continue to update the model (so essentially you continue with the model-based RL setup) - the difference is that you're not resetting the model at each time step. You could argue that this is exactly not what you want to do: you don't want to update your model continuously. But then you should argue why you wouldn't want to do this, in your introduction.\"}", "{\"title\": \"major concerns remain\", \"comment\": \"Thank you for your response and for addressing my concerns (at least partially). I'd like to re-iterate what my main concerns with this manuscript are. To summarize:\\n\\ni) work is not put in the context of existing relevant related work (not really addressed)\\nii) minor/questionable technical contribution (not really addressed)\\niii) evaluations are not designed to evaluate fast model adaptation (was partially addressed)\", \"in_more_detail\": \"Before going into detail on my concerns, I'd like to quickly summarize your approach:\\n\\n1. at train time you use a model-based RL algorithm to learn a dynamics model. You utilize existing meta-learning methods/ideas to learn representations that can be utilized to adapt the dynamics model fast at test time. Specifically, you present a) GrBAL, where at training time you use MAML to learn dynamics model parameters that can quickly be adapted to changes in the dynamics, and b) ReBAL, where you learn a recurrent-based update policy that can update the dynamics model parameters effectively online.\\n2. at test time you use a model-predictive controller with the learned dynamics model and adapt it online based on recent observations. At each controller time step you reset the dynamics model to the dynamics model learned in phase 1.\", \"in_that_context_my_concerns_are\": \"i) utilizing meta-learning in model-based RL is not a novel idea, yet you write most of your manuscript as if it were. Utilizing meta-learning to quickly adapt dynamics models online is also not novel, yet your writing makes readers believe that it is. While you've added the references I've mentioned, you have not really discussed how your proposed methods improve over other relevant work. Your introduction should highlight where current methods fall short, and how your proposed work improves over existing work. Furthermore, you have not added any references for model-based control. There is a ton of related work that uses model-based controllers and adapts dynamics models online, with and without meta-learning. This needs to be acknowledged.\\n\\nii) I'm still not clear on what your work exactly addresses. You're using 2 very different meta-learning approaches to learn models/update policies such that adaptation is fast at test time. Neither of them involves a significant contribution. Using MAML in this model-based RL context reduces to using MAML in a regression problem (no technical advances here). 
Learning the recurrent-based update policy is something that has been extensively explored in the learning-to-learn community. It's not clear what you're adding here. You cite relevant work in the related work section, but you don't explain how your work differs from it. If there are technical issues that arise from applying these methods in the model-based RL framework, you do not describe them. Maybe there is a technical contribution here - but if there is, you are not highlighting it. \\n\\niii) when evaluating your methods, you want to highlight that your meta-learning approach leads to models/policies that adapt faster. However, I can still not infer that this is true; here is why:\\n\\n1. Section 6.2: It's not clear whether this evaluation evaluates sample efficiency at training (meta-learning) time or at test time (how many samples you need to adapt online). In either case, if you want to highlight sample efficiency of your proposed approach (meta-learning to learn models that adapt fast at run time), you need to compare to model-based RL methods that do not use your meta-learning approach. And you need to use the same model-predictive controller. There is no point in comparing to model-free methods here.\\n\\n2. Section 6.3. You say you use the same model-predictive controller in your experiments for model-based RL (MPPI); however, you cite other papers that do not use MPPI. For instance, you say \\n\\\"a non-adaptive model-based method (\\u201cMB\\u201d), which employs a feedforward neural network as the dynamics model and selects actions using MPC (Nagabandi et al., 2017a), \\\"\\nyour non-adaptive model-based method should be MPPI with a fixed neural network model (ideally the same that you use to initialize your methods). This is particularly problematic, because recent model-based RL methods have by far outperformed the work you cite (Nagabandi et al., 2017a). \\n\\nI want to re-iterate that you need to present the ablation study I suggested in my earlier review, and also present it as such (if you're already doing the experiments that I suggest, then change the plots and experiment description to make this clear). \\n\\nto be cont'd\"}", "{\"title\": \"Shows superior sample complexity of model-based Meta RL, but not much further insight\", \"review\": \"The paper proposes using meta-learning and fast, online adaptation of models to overcome the mismatch between simulation and the real world, as well as unexpected changes and dynamics. This paper proposes two model-based meta-reinforcement learning algorithms, one based on MAML and the other based on recurrence, and experimentally shows how they are more sample efficient and faster at adapting to test scenarios than prior approaches, including prior model-free meta-learning approaches.\\n\\nI do have an issue with the way this paper labels prior work as model-free meta-learning algorithms, since, for example, MAML is a general algorithm that can be applied to model-free and model-based algorithms alike. It would be more accurate in my opinion to label the contributions of this paper as model-based instantiations of prior existing algorithms, rather than new algorithms outright.\\n\\nI\\u2019m a bit confused with equation 3, as the expectation is over a single environment, and the trajectory of data is also sampled from a single environment. But in the writing, the paper describes the setting as a potentially different environment at every timestep. 
Equation 3 seems to assume that the subsequence of data comes from a single environment, which contradicts what you say in the text. As described, equation 3 is then not really much different from previous episodic or task-based formulations.\\n\\nThe results themselves are not unexpected, as there has already been prior work, which this paper also mentions, showing that model-based RL algorithms are more sample efficient than model-free ones.\\n\\nSection 6.1: I like this comparison and the demonstration of how the errors improve.\\n\\nFor section 6.2, judging from the plots, it doesn\\u2019t seem you are doing any meta-learning in this experiment, so then are you just basically running a model-based RL algorithm? I\\u2019m very confused about what you are trying to show. Are you trying to show the benefit of model-based vs model-free? Prior work has already done that. Are you trying to show that even just using a meta-learning algorithm in an online setting results in good online performance? Then you should be comparing your algorithm to just a model-based online RL algorithm. You also mention that the asymptotic performance falls behind; is this because your model capacity is low, or maybe your MPC method is insufficient? If so, then wouldn\\u2019t it be more compelling to, like prior work, combine this with a model-free algorithm and get the best of both worlds?\\n\\nSection 6.3 results look good.\\n\\nSection 6.4: I really like the fact that you have results on a real robot.\\n\\nOverall I think the paper does successfully show the sample complexity benefits and fast adaptation of model-based meta-RL methods. The inclusion of a real-world robot experiment is a plus. However, the result is not particularly surprising or insightful, as prior work has already shown the massive sample complexity improvement of model-based RL methods.\\n\\nUPDATE (Dec 4, 2018):\\n\\nI have read the author response and they have addressed the specific concerns I have brought up. I am overall positive about this paper and the new changes and additions, so I will slightly increase my score, though I am still concerned about the significance of the results themselves.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Thank you for the feedback!\", \"comment\": \"We thank the reviewer for the feedback.\\n\\nThe main concern of the reviewer is that we did not control for the choice of controller. This is a misunderstanding. We implemented the same controller for all of the model-based comparisons; hence, all comparisons reported in the paper are fair. To be precise, we used MPPI for all simulation experiments, and random-shooting MPC for all real-world experiments (since the action spaces were of lower dimension and did not need iterations of refinement). We updated the paper to clarify this.\", \"related_work\": \"We thank the reviewer for pointing out these recent works. We updated the paper to incorporate citations for model-based RL, adapting inverse dynamics models, and the suggested recent model-based RL citation.\", \"sample_efficiency\": \"In this section, we do both of the things the reviewer mentions: we compare our MB meta-learning method against a state-of-the-art MF meta-learning method to show the benefit of model-based over model-free, and against a state-of-the-art MF method to show the benefit of meta-learning.\", \"evaluation\": \"The reviewer suggested 4 points to evaluate. 
Points (1) and (2) are exactly the results we show in Section 6.3: (1) corresponds to our full GrBAL/ReBAL, meta-learning with adaptation, and (2) corresponds to our MB baseline, which has neither meta-learning nor adaptation. We further have a DE baseline, which addresses the combination of adaptation without meta-learning.\\n\\nPoint (3) suggests meta-learning the prior with the adaptation objective, but then not adapting it at test time. We ran this experiment on the real robot, and it performed worse than (1) and (2), failing to solve the task. This is expected; the meta-learned model parameters (theta*) were optimized to be used only after adapting them. We can add these numerical results to the paper. \\n\\nThe difference between the requested point (4) and our existing point (2) is the collection of expert data for initializing the training data set. Being able to collect expert samples is a strong assumption, requiring either human demonstrations or knowledge of the ground truth model, and does not fall under the assumptions of our problem setting.\", \"contribution\": \"Our contribution is a new model-based meta-RL algorithm that incorporates elements of meta-learning and model-based RL. While our method is relatively simple, we are not aware of prior works that show that meta-learning can be used to enable online adaptation to varying dynamics in the context of model-based RL. Further, our experiments, which include domains that are more complex than the cartpole and double pendulum in [1], demonstrate the effectiveness of the approach. If we are mistaken regarding prior works, please let us know!\\n\\nWe would like to emphasize that our work presents an extensive comparative evaluation, and we believe that these results should be taken into consideration in evaluating our work. We compare multiple approaches across more than 6 simulated tasks as well as 4 tasks on a real-world robotic platform. Hopefully our clarifications are convincing in terms of explaining why the evaluation is fair and rigorous, and we would of course be happy to modify it as needed.\"}", "{\"title\": \"Important problem, minor technical contribution, missing related work and poor evaluation.\", \"review\": \"This work addresses the problem of adapting dynamics models online in the context of model-based RL. Learning a globally accurate dynamics model is impossible if we consider that environments are dynamic and we can't observe every possible environment state at initial training time. Thus, learning dynamics models that can be adapted online quickly, to deal with unexpected and never-before-seen events, is an important research problem.\\n\\nThis paper proposes to use meta-learning to train an update policy that can update the dynamics model at test time in a sample-efficient manner. Two methods are proposed:\\n- GrBAL: this method uses MAML for meta-learning\\n- ReBAL: this method trains a recurrent network during meta-training such that it can update the dynamics effectively at test time when the dynamics change\\n\\nBoth methods are evaluated on several simulation environments, which show that GrBAL outperforms ReBAL (on average). GrBAL is then evaluated on a real system.\", \"the_strengths_of_this_paper_are\": [\"this work addresses an important problem and is well motivated\", \"experiments on both simulated and real systems are performed\"], \"the_weaknesses\": [\"the related work section is biased towards the ML community. 
There is a ton of work on adapting (inverse) dynamics models in the robotics community. This line of work is almost entirely ignored in this paper. Furthermore, some important recent references for model-based RL are not provided in the related work section (PETS [3] and MPPI [2]), although MPPI is the controller that is used in this work as a framework for model-based RL. Additionally, existing work on model-based RL with meta-learning [1] has not been cited. This is unacceptable.\", \"There is no significant technical contribution - the \\\"contribution\\\" is that existing meta-learning methods have been applied to the model-based RL setting. Even if no one had had that idea before, it would be a minor contribution, but given that there is prior work on meta-learning in the context of model-based RL, this idea itself is not novel anymore.\", \"Two methods are provided, without much analysis. Often the authors refer to \\\"our approach\\\" - but it's actually not clear what they mean by our approach. The authors can't claim \\\"model-based meta RL\\\" as their approach.\", \"While I commend the authors for performing both simulation and real-world experiments, I find that the experiments lack a principled evaluation. More details below.\"], \"feedback_on_experiments\": \"Section 6.2 (sample efficiency)\\n\\nYou compare apples to oranges here. I have no idea whether your improvements in terms of sample-efficiency are due to using a model-based RL approach or because you're deploying meta-learning. It is well known that model-based RL is more sample efficient, but often cannot achieve the same asymptotic performance as model-free RL. Since MPPI is your choice of model-based RL framework, you would have to include an evaluation that shows results on MPPI with model bootstrapping (as presented in [2]) to give us an idea of how much more sample-efficient your approach is.\\n\\nSection 6.3 (fast adaptation and generalization)\\n\\nWhile in theory one can choose the meta-learning approach independently from the choice of model-based controller, in practice the choice of the MPC method is very important. MPPI can handle model inaccuracies very well - almost to the point where sometimes adaptation is not necessary. You CANNOT compare MPPI with online adaptation against another MPC approach with another model-learning approach. This does not give me any information about how your meta-learning improves model adaptation. In essence these comparisons are meaningless. To make your results more meaningful you need to use the same controller setup (let's say MPPI) and then compare the following:\\n1. MPPI with your meta-trained online adaptation\\n2. MPPI results with a fixed learned dynamics model - this shows us whether online adaptation helps\\n3. results of MPPI with the initial dynamics model (trained in the meta-training phase) - without online adaptation. This will tell us whether the meta-training phase provides a dynamics model that generalizes better (even without online adaptation)\\n4. MPPI with model bootstrapping (as presented in [2]). This will show whether your meta-trained online adaptation actually outperforms simple online model bootstrapping in terms of sample-efficiency\\n\\nThe key here is that you need to use the same model-based control setup (whether it's MPPI or some other method). 
Otherwise you cannot disentangle the effect of controller choice from your meta-learned online adaptation.\\n\\n6.4 Real-world: same comments as above; the comparisons are not meaningful\\n\\n[1] Meta Reinforcement Learning with Latent Variable Gaussian Processes, UAI 2018\\n[2] MPPI with model-bootstrapping: Information Theoretic MPC for Model-Based Reinforcement Learning, ICRA 2017\\n[3] Deep Reinforcement Learning in a Handful of Trials using Probabilistic Dynamics Models, NIPS 2018\", \"rating\": \"2: Strong rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"This paper proposes a novel algorithm for online adaptation of a model-based RL approach, showing significant improvements in terms of speed, and also in terms of performance compared to standard approaches such as MAML-RL and non-adaptive model-based RL.\", \"review\": \"The authors introduce an algorithm that addresses the problem of online policy adaptation for model-based RL. The main novelty of the proposed approach is that it defines an effective algorithm that can easily and quickly adapt to changing contexts/environments. It borrows ideas from model-free RL (MAML) to define the gradient/recursive updates of their approach, and it incorporates them efficiently into their model-based RL framework. The paper is well written, and the experimental results on synthetic and real-world data show that the algorithm can quickly adapt its policy and achieve good results in the tasks, when compared to related approaches.\\n\\nWhile applying the gradient-based adaptation to model-free RL is trivial and has previously been proposed, in this work the authors do so by also focusing on the \\\"local\\\" context (M steps within a K-long horizon), allowing the method to recover quickly when learning from contaminated data and/or when its global policy cannot generalize well to the local contexts. Although this extension is trivial, it seems that it has not been applied and measured in terms of the adaptation \\\"speed\\\" in previous works. Theoretically, I see more value in their second approach, where they investigate the application of fast parameter updates within model-based RL, showing that it does improve over the MAML-RL and non-adaptive model-based RL approaches. This is expected but to my knowledge has not been investigated to this extent before. \\n\\nWhat I find lacking in this paper is insight into how sensitive the algorithm is in terms of the K/M ratio, and also how it affects the adaptation speed vs performance (tables 3-5 show an analysis but those are for different tasks); no theoretical analysis was performed to provide a deeper understanding of it. The model does solve a practical problem (reducing the learning time and having a more robust model); however, it would add more value to the current state of the art in RL if the authors proposed a method for optimal selection of the recovery points and also the window ratio R/L depending on the target task. This would make a significant theoretical contribution and the method could be easily applicable to a variety of tasks where the gains in the adaptation speed are important.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Thank you for your suggestions. 
Our work considers the problem of online adaptation (in less than a second) to various disturbances, rather than a trial-and-error method of adaptation (in minutes/hours) to damage.\", \"comment\": \"Thank you for your suggestions. The biggest clarification that we would like to offer is that our method adapts online (in less than a second), not in minutes/hours. For example, when the agent sees a new terrain, when it encounters a slope, or when the system's pose estimation becomes miscalibrated, we don't need to run multiple trials in this new setting (and we don't need an external reward signal like \\\"distance travelled\\\" to trigger/guide the adaptation). Instead, the agent constantly uses its past few data points, in a self-supervised way, to perform online adaptation of its dynamics model. Note that it successfully does this even when it encounters tasks that it did not see during training. This ability to adapt model parameters using so few data points is crucial, and we achieve it through meta-learning.\\n\\nWe will add discussion of these works in the next version of our paper, to be thorough. We would, however, like to emphasize that the purpose of our work is not adapting to damage. The purpose of our work is an algorithm that uses meta-learning to enable *online* model adaptation. Although recovering from damage is included in our experiments, it is merely one example in this category of experiencing unexpected disturbances at test time: we also evaluate other tasks such as a pier of differing buoyancy from that seen during training, slopes that were never seen during training, and pulling an unknown payload. A big difference between our work and the suggested work is that we are not performing trial-and-error learning. The problem statement itself is very different, and thus it does not make sense to perform such a comparison.\"}", "{\"comment\": \"I think this paper might be the first to apply meta-learning ideas to adaptation with a real robot. However, this is far from being the first paper to demonstrate that data-efficient reinforcement learning can be used for adapting to damage.\\n\\nUnfortunately, the authors of the submitted paper do not compare their results to this state of the art... and do not even cite any of the previous papers on the topic (see below). This is worrisome because the previous papers require only 1-2 minutes of interaction time for adapting in similar tasks (legged robot with a blocked joint, a lost leg, etc.), compared to 1.5-3 hours in the submitted paper.\", \"a_few_relevant_papers_about_adaptation_and_damage_recovery_with_data_efficient_rl\": \"Active learning of a model / model-identification + direct policy search: \\nBongard J, Zykov V, Lipson H. Resilient machines through continuous self-modeling. Science. 2006 Nov 17;314(5802):1118-21. http://www.cs.uvm.edu/~jbongard/papers/2006_Science_Bongard_Zykov_Lipson.pdf\\n\\nPrior from simulation + Bayesian optimization:\\nCully A, Clune J, Tarapore D, Mouret JB. Robots that can adapt like animals. Nature. 2015 May;521(7553):503. https://arxiv.org/pdf/1407.3501\", \"see_also\": \"https://arxiv.org/pdf/1709.06919\", \"model_based_policy_search_with_priors\": \"Chatzilygeroudis K, Mouret JB. Using Parameterized Black-Box Priors to Scale Up Model-Based Policy Search for Robotics. 2018. Proc. of IEEE ICRA. https://arxiv.org/pdf/1709.06917\", \"repertoire_of_policies_high_level_model\": \"Chatzilygeroudis K, Vassiliades V, Mouret JB. 
Reset-free trial-and-error learning for robot damage recovery. Robotics and Autonomous Systems. 2018 Feb 28;100:236-50. https://arxiv.org/abs/1610.04213\", \"bio_inspired_approach\": \"Ren G, Chen W, Dasgupta S, Kolodziejski C, W\\u00f6rg\\u00f6tter F, Manoonpong P. Multiple chaotic central pattern generators with learning for legged locomotion and malfunction compensation. Information Sciences. 2015 Feb 10;294:666-82. https://arxiv.org/abs/1407.3269\\n\\n\\\"Classic\\\" RL:\\nErden MS, Leblebicio\\u011flu K. Free gait generation with reinforcement learning for a six-legged robot. Robotics and Autonomous Systems. 2008 Mar 31;56(3):199-212.\", \"title\": \"The literature about adaptation/damage compensation with data-efficient RL is ignored\"}" ] }
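The adaptation mechanism debated in the exchange above can be made concrete with a short sketch. This is a minimal illustration under assumptions, not the submission's code: it assumes a linear dynamics model and a plain mean-squared-error loss, and every name in it is invented for the example. The point is only that adapting on the last few transitions is a single cheap gradient step, which is why it can run online at control rate rather than over minutes or hours of trials.

import numpy as np

def adapt_dynamics(theta, states, actions, next_states, lr=1e-2):
    # One gradient step of 0.5 * MSE on the M most recent transitions.
    # theta: (ds + da, ds) weights of a linear model s' = [s, a] @ theta.
    x = np.concatenate([states, actions], axis=1)   # (M, ds + da)
    err = x @ theta - next_states                   # prediction error, (M, ds)
    grad = x.T @ err / len(x)                       # d(0.5 * MSE) / d(theta)
    return theta - lr * grad                        # adapted parameters

# Usage: before every planning call, adapt on the last M transitions, then
# plan with the adapted model (the planner itself is omitted here).
rng = np.random.default_rng(0)
true_w = rng.normal(size=(6, 4)) * 0.1              # ds = 4, da = 2
s, a = rng.normal(size=(64, 4)), rng.normal(size=(64, 2))
s_next = np.concatenate([s, a], axis=1) @ true_w
theta = adapt_dynamics(np.zeros((6, 4)), s[-5:], a[-5:], s_next[-5:])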
SkGtjjR5t7
Learning to Drive by Observing the Best and Synthesizing the Worst
[ "Mayank Bansal", "Alex Krizhevsky", "Abhijit Ogale" ]
Our goal is to train a policy for autonomous driving via imitation learning that is robust enough to drive a real vehicle. We find that standard behavior cloning is insufficient for handling complex driving scenarios, even when we leverage a perception system for preprocessing the input and a controller for executing the output on the car: 30 million examples are still not enough. We propose exposing the learner to synthesized data in the form of perturbations to the expert's driving, which creates interesting situations such as collisions and/or going off the road. Rather than purely imitating all data, we augment the imitation loss with additional losses that penalize undesirable events and encourage progress -- the perturbations then provide an important signal for these losses and lead to robustness of the learned model. We show that the model can handle complex situations in simulation, and present ablation experiments that emphasize the importance of each of our proposed changes and show that the model is responding to the appropriate causal factors. Finally, we demonstrate the model driving a car in the real world ( https://sites.google.com/view/learn-to-drive ).
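As a rough illustration of the perturbation idea in this abstract: shift one waypoint of a logged expert trajectory and blend the offset out over its neighbors, which yields off-road and near-collision states the expert never visits. This is a sketch under simplifying assumptions (the paper instead fits perturbed endpoints with a kinematically constrained optimizer); the triangular blending window and all names here are illustrative.

import numpy as np

def perturb_trajectory(traj, k, offset, width=5):
    # traj: (T, 2) expert waypoints; k: index to perturb; offset: lateral shift.
    out = traj.astype(float)
    for t in range(len(traj)):
        w = max(0.0, 1.0 - abs(t - k) / width)  # triangular blending window
        out[t] += w * np.asarray(offset, dtype=float)
    return out

expert = np.stack([np.linspace(0.0, 50.0, 26), np.zeros(26)], axis=1)  # straight lane
perturbed = perturb_trajectory(expert, k=12, offset=(0.0, 2.0))        # swerve and return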
[ "Imitation Learning", "End-to-End Driving", "Learning to drive", "Autonomous Driving" ]
https://openreview.net/pdf?id=SkGtjjR5t7
https://openreview.net/forum?id=SkGtjjR5t7
ICLR.cc/2019/Conference
2019
{ "note_id": [ "r1l_1dGglN", "HJlI5WecCm", "Hkg_TlxqAQ", "HkeRsL0QTm", "rJl5LICQa7", "rkxhXU07aX", "ryx7irA7aX", "HJgYFS07pQ", "HJlaL4CX6Q", "Hkl6CDSR3Q", "rJl-iWUTnX", "rJefh7l5hm" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544722399536, 1543270798241, 1543270591901, 1541822118401, 1541822033670, 1541821987564, 1541821850998, 1541821825181, 1541821525465, 1541457876736, 1541394840652, 1541174185642 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper641/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper641/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper641/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper641/Authors" ], [ "ICLR.cc/2019/Conference/Paper641/Authors" ], [ "ICLR.cc/2019/Conference/Paper641/Authors" ], [ "ICLR.cc/2019/Conference/Paper641/Authors" ], [ "ICLR.cc/2019/Conference/Paper641/Authors" ], [ "ICLR.cc/2019/Conference/Paper641/Authors" ], [ "ICLR.cc/2019/Conference/Paper641/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper641/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper641/AnonReviewer1" ] ], "structured_content_str": [ "{\"metareview\": \"The authors present a method for training a policy for a self-driving car. The inputs to the policy are map-based perceptual features and the outputs are waypoints on a trajectory, and the method is an augmented imitation learning framework that uses perturbations and additional losses to make the policy more robust and effective in rare events. The paper is clear and well-written and the authors do demonstrate that it can be used to control a real vehicle. However, the reviewers all had concerns about the oracle feature representation which is the input and also concerns about the lack of baselines such as optimization based methods. They also felt that the approach was limited to self-driving cars and thus would have limited interest for the community.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"meta-review\"}", "{\"title\": \"R3 Acknowledgement\", \"comment\": \"I have posted my response to the authors. As mentioned below, I feel that the paper would be interesting to the community, however with certain flaws as pointed out in my review and especially by other reviewers. This prompted me to remain with my original recommendation of weak accept.\"}", "{\"title\": \"Reviewer response\", \"comment\": \"I would like to thank the authors for their feedback and the various updates to the paper, the text is definitely clearer now.\\n\\nWhen it comes to evaluation, I am still not convinced that the proposed baselines would not be useful. 
E.g., Figure 5 results could have these extra baselines included and their displacement error could help quantify the proposed method.\\nNevertheless, I would like to stay with my recommendation, as I feel that the paper would be interesting to the community, however with certain flaws as pointed out in my review and especially by other reviewers.\"}", "{\"title\": \"Author Response\", \"comment\": \"We have responded to these concerns in our feedback to your review but we paste the relevant responses again below for the benefit of AnonReviewer1.\\n\\n\\u201cFeasible paths\\u201d: We train this network on time-sampled expert driving trajectories and hence the network has seen only valid trajectories that satisfy the kinematic and dynamic constraints of the non-holonomic agent. In addition, to generate synthesized trajectories from the perturbed waypoints, we also employ a non-linear optimizer that obeys the same constraints to ensure that the generated synthesized trajectories are feasible to drive as well. Therefore, the network output is constrained to obey these constraints implicitly. Furthermore, we do not use the output trajectory waypoints directly but rather pass them through the same non-linear optimizer to generate driving controls and to smooth out any pixel noise that may push the output trajectory into the infeasible zone. We have not found an instance in practice where the network has produced an infeasible trajectory. The reviewer is correct that the more structure you impose the more you can do with your data, but the comment is somewhat surprising in light of many contemporary published works [1,2] that try to learn everything end-to-end. For the driving task, the hardest decisions often are mid-level decisions -- where to drive, how to pass / brake / accelerate -- we make these via coarsely sampled predictions -- the main goal of our net. Controllers are good for refining those.\\n\\n[1] Codevilla, Felipe, et al. \\\"End-to-end driving via conditional imitation learning.\\\" 2018 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2018.\\n[2] Hecker, Simon, Dengxin Dai, and Luc Van Gool. \\\"End-to-end learning of driving models with surround-view cameras and route planners.\\\" European Conference on Computer Vision (ECCV). 2018.\\n\\n\\u201cInput Image Extents\\u201d: The input image only defines a region where the network sees the current context around the agent. Since this region moves with the agent at 5Hz, the network always sees an updated context with a range of 64 meters ahead of the agent (please see supplemental videos that illustrate this). We do not see any fundamental flaw in this approach since it is akin to having a self-driving car with sensors that see only up to 64 meters. In fact, most teams on the DARPA Urban Challenge employed the Velodyne HDL-64E as their primary sensor, which had an effective vehicle detection range of only 60 meters [3]. Additionally, our approach allows flexibility in choosing a larger rendering range at the expense of a more computationally heavy network, but the range is not a fundamental limitation of our approach.\\n\\n[3] Montemerlo, Michael, et al.
\\\"Junior: The stanford entry in the urban challenge.\\\" Journal of field Robotics 25.9 (2008): 569-597.\\n\\n\\u201cEvaluation\\u201d: As noted in the paper, we found that simple driving scenarios are handled well by our model and the ablation experiments thus focus on complex driving behaviors, which illustrate the gains from the techniques introduced in the paper. Specifically for the nudging experiment, we generate several variations by changing the starting speed of the agent relative to the parked car. This creates situations of increasing difficulty, where the agent approaches the parked car at very high relative speed and thus does not have enough time to nudge around the car given the dynamic constraints. A 10% collision rate in this case is thus not a measure of the absolute performance of the model since we don\\u2019t have a perfect driver which could have performed well at all the scenarios here.\"}", "{\"title\": \"Author Response (Part 2 of 2)\", \"comment\": \"\\u201cAblation experiments\\u201d: As noted in the paper, we found that simple driving scenarios are handled well by our model and the ablation experiments thus focus on complex driving behaviors, which illustrate the gains from the techniques introduced in the paper. Specifically for the nudging experiment, we generate several variations by changing the starting speed of the agent relative to the parked car. This creates situations of increasing difficulty, where the agent approaches the parked car at very high relative speed and thus does not have enough time to nudge around the car given the dynamic constraints. A 10% collision rate in this case is thus not a measure of the absolute performance of the model since we don\\u2019t have a perfect driver which could have performed well at all the scenarios here. The focus of our experiments is on the relative improvement in performance across models. M3 and M4 only differ in the use of either a weighted or a dropout strategy for combining imitation vs environment losses. In the case of the slowing down experiment, M4 collides with the curb (not the vehicle) in one scenario in trying to pass the slow vehicle (again for the variation at the extreme initialization of approaching at the highest relative speed) but we find that it does much better in all other scenarios and is thus our proposed model. The fact that M3 collides 45% of the time in the nudging experiments points to its inability to deal with real collisions.\\n\\n\\u201cReal driving experiments\\u201d: Being able to drive a real vehicle with this approach is the most exciting aspect of this work -- no other imitation learning system has ever accomplished this. Most recent works demonstrate the performance of their setup either in an open-loop setting or in a closed-loop simulation setup like CARLA which does not suffer from challenges like controller errors, actuator delays and perception errors. Our video results from the real-drive illustrate not only the smoothness of the network\\u2019s driving ability, but also its ability to deal with stop-signs and turns and to drive for long durations in full closed-loop control without deviating from the trajectory (as would be the case if one were to use pure behavior cloning). 
Performing quantitative evaluations in the real world safely, and developing the right metrics to compare against other planners including classical planners, remains future work.\\n\\n\\u201cEquation 3\\u201d: The ground truth distribution P_k^{gt} is simply a Dirac delta function with a value of 1 at the spatial coordinate of the target waypoint and zero everywhere else. The loss L_p = H(P_k, P_k^{gt}) thus measures the cross-entropy between the two probability distributions, where P_k is the predicted distribution. We have clarified this in the revised version.\"}", "{\"title\": \"Author Response (Part 1 of 2)\", \"comment\": \"Thanks for reviewing the paper and for your valuable feedback. We have uploaded a revised version which clarifies several technical details. Responses to your concerns follow:\\n\\n\\u201cNo-regret formulation\\u201d: As suggested, this would require interaction with an expert. In a real-world setting, the car would have to start driving, perform out-of-distribution maneuvers, and have the human driver correct it at every step. This is not a safe setup and instead, one would need to rely on offline data or accurate simulation. The novelty of our approach is to demonstrate that we can learn to drive well in complex scenarios beyond simple lane following from offline expert data without requiring a simulator and without doing RL.\\n\\n\\u201cPath feasibility\\u201d: We train this network on time-sampled expert driving trajectories and hence the network has seen only valid trajectories that satisfy the kinematic and dynamic constraints of the non-holonomic agent. In addition, to generate synthesized trajectories from the perturbed waypoints, we also employ a non-linear optimizer that obeys the same constraints to ensure that the generated synthesized trajectories are feasible to drive as well. Therefore, the network output is constrained to obey these constraints implicitly. Furthermore, we do not use the output trajectory waypoints directly but rather pass them through the same non-linear optimizer to generate driving controls and to smooth out any pixel noise that may push the output trajectory into the infeasible zone. We have not found an instance in practice where the network has produced an infeasible trajectory. The reviewer is correct that the more structure you impose the more you can do with your data, but the comment is somewhat surprising in light of many contemporary published works [1,2] that try to learn everything end-to-end. For the driving task, the hardest decisions often are mid-level decisions -- where to drive, how to pass / brake / accelerate -- we make these via coarsely sampled predictions -- the main goal of our net. Controllers are good for refining those.\\n\\n[1] Codevilla, Felipe, et al. \\\"End-to-end driving via conditional imitation learning.\\\" 2018 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2018.\\n[2] Hecker, Simon, Dengxin Dai, and Luc Van Gool. \\\"End-to-end learning of driving models with surround-view cameras and route planners.\\\" European Conference on Computer Vision (ECCV). 2018.\\n\\n\\u201cTrajectory perturbations\\u201d: We have presented ablation experiments that clearly demonstrate the benefit of introducing these perturbations along with the corresponding loss terms.
Specifically, we have shown that these help in (a) handling the covariate shift during closed loop control by teaching the network the idea of course correction, and (b) teaching the network the idea of collision avoidance without having expert demonstrations of the same in the training data. Furthermore, we introduce the general idea of synthesizing perturbations and if required, more complex kinds of perturbations can be introduced to the network in a similar fashion. We show that perturbations help, but expecting them to handle the long tail is too high of a bar that no imitation learning based system would meet.\\n\\n\\u201cEnvironment extent\\u201d: We run the network at 5Hz and since the coordinate system of the top-down view moves with the agent, we found it to be sufficient for control on surface streets. Our predicted trajectory goes to 2s into the future and is sampled every 0.2s. We see 64m ahead of the agent and at 25 mph this gives us a time horizon of 5.72s. We do not see any fundamental flaw in this approach since it is akin to having a self-driving car with sensors that see only up to 64 meters. In fact, most teams on the DARPA Urban Challenge employed the Velodyne HDL-64E as their primary sensor which had an effective vehicle detection range of only 60m [3]. The only practical limits we have found from this range are in cases like T-junctions where we have limited visibility on the sides. However, this is not a limitation of the core concepts presented in the paper -- we can easily increase the extent of the input image at the cost of additional computation.\\n\\n[3] Montemerlo, Michael, et al. \\\"Junior: The stanford entry in the urban challenge.\\\" Journal of field Robotics 25.9 (2008): 569-597.\\n\\n\\u201cSelf-driving\\u201d: Our work is about getting imitation learning to the level where it has a shot at driving a real vehicle; although the same insights may apply to other domains, these domains might have different constraints and opportunities, so we do not want to claim contributions there. We have revised the paper to reflect this. Furthermore, we believe that the self-driving domain is an area of broad interest for the machine learning research community as is evident from the surge in papers on using machine learning techniques for planning and prediction in this domain. \\n\\n<response continued below>\"}", "{\"title\": \"Author Response (Part 2 of 2)\", \"comment\": [\"\\u201cEvaluation\\u201d: As mentioned in the paper, we evaluate on specific complex situations to illustrate the difference in performance between the different models. 
This also makes it easy to see how specific non-learned baselines would perform in the said situations.\", \"For example, a baseline predicting the route:\", \"[Nudging] Would collide with the parked vehicle in all cases (100% collision) since the route intersects with the parked vehicle, and there is no speed modulation to bring it to a stop.\", \"[Traj Perturb] Would not be able to recover from the trajectory perturbation as the route is no longer under the agent at the starting point and the controller would not be able to execute a point jump (100% stuck).\", \"[Slow Car] Would collide with the slow car since there is no speed modulation information in the route (100% collision).\", \"Similarly, a baseline continuing to drive the agent as before:\", \"[Nudging] Would collide with the parked vehicle as it would not slow down at all (100% collision).\", \"[Traj Perturb] Would continue along the perturbed trajectory and go offroad (100% stuck).\", \"[Slow Car] Would collide with the slow car again since it would not slow down at all (100% collision).\", \"We are not aware of any other baselines that would help the understanding of these results but are open to adding them if the reviewer finds them useful.\", \"\\u201cRelated Work\\u201d: These works relate more to the prediction of other agent\\u2019s trajectories to aid decision making for the self-driving car. Our focus in this paper is on the prediction of a drivable trajectory for the self-driving car and then using this trajectory in a closed-loop setting to actually drive the car. However, we agree that these are important references to discuss in the related work to highlight this distinction and we have added them in the revised version.\"]}", "{\"title\": \"Author Response (Part 1 of 2)\", \"comment\": \"Thank you for reviewing the paper and for your valuable feedback. We have uploaded a revised version that clarifies the technical details but to answer your questions more directly:\\n\\n\\u201cPixel sizes, time resolution, vehicle position etc.\\u201d: As suggested in the author guidelines, we listed these details in Table 3 in the Appendix as we considered them important for reproducibility but not for the core understanding of the paper. We have moved this table into the main text in the revised version. We will appreciate your feedback on any other key details that we might have missed.\\n\\n\\u201cTraffic lights\\u201d: The sequence dimension represents past timesteps like the other channels. Each frame in the sequence is a gray-scale image with a specific intensity representing a specific traffic light state for each lane. For example, we use an intensity of 96 to represent a green signal and 224 to represent a red signal. We have revised the description in the new paper version.\\n\\n\\u201cVideos\\u201d: Each frame of the video represents the current state of the environment as follows: video_t = max(roadmap_t, route_t, current_agent_box_t, past_agent_poses_t, predicted_future_poses_{1,10}, traffic_lights_t, dynamic_boxes_past), with specific mapping of specific channels to individual colors in the output for ease of viewing. To aid visualization of the movement of dynamic objects, we combine the sequence of dynamic_boxes_{t-1,...,t} using a weighted sum to generate the dynamic_boxes_past channel above.\\n\\n\\u201cDashed arrows\\u201d: The dashed arrows represent the recurrent feedback of predictions from iteration k as inputs into iteration k+1. 
We have clarified this in the paper now.\\n\\n\\u201cRegression tower\\u201d: We use a shallow convolutional network with a fully-connected layer at the end for this. We use \\u201ctowers\\u201d to imply any building block consisting of a sequence of basic convolutional, space-to-depth, depth-to-space, fully-connected and activation layers. We have clarified this in the paper.\\n\\n\\u201cEqn 3\\u201d: The logits P_k represent a probability distribution and the cross-entropy function H thus computes a single cross-entropy value for the input P_k and P_k^gt without the need for a summation over pixels.\\n\\n\\u201cSection 4.1.2\\u201d: We have added further details to the revised version. The prediction P_k is a probability distribution over the next predicted waypoint, and we pick the arg-max over this distribution to obtain an integer coordinate. To refine this to sub-pixel resolution, the agent-meta prediction network produces subpixel values \\\\delta p_k = (\\\\delta_x, \\\\delta_y), which are then compared with the ground-truth values using the L1 norm as in equation 6. Similarly, it produces a speed value s_k which is compared with the target value using the L1 norm as well. These losses are included as part of the imitation losses during the training loop, similar to the other losses as described in section 5.3.\\n\\n\\u201cPast motion dropout\\u201d: We keep only the current position of the agent in the past_agent_poses channel. This is a fixed point (u_0, v_0) in the top-down view. The other remaining input channels remain unchanged.\\n\\n\\u201cFigure 6\\u201d: Thanks for pointing this out! We have removed this reference from the main text.\\n\\n\\u201cPerturbation about vertical axis\\u201d: This is only done during training as a data augmentation mechanism. Clarified in section 6.2.\\n\\n\\u201cOpen- and closed-loop\\u201d: Consider a sequence of logged data obtained from an expert driving the car. At each timestep in this sequence, we have information about the agent\\u2019s past trajectory and we can use this (along with the other features) to create an input for the network. The network then predicts a set of waypoint coordinates which can then be compared to the actual future trajectory of the agent from the log. The L2 distance between the predicted and ground-truth waypoints is plotted in Fig. 5 as the open-loop evaluation. Note that at each time step in this evaluation, the agent follows the logged pose exactly and we are just evaluating the network predictions against the ground truth. In a closed-loop setting, we would like to actually drive the agent using the predicted trajectory and for this, we first convert the predicted trajectory to a set of controls and then use a vehicle simulator to drive the agent forward. This will drive the agent independently of the log from this point forward, ultimately generating poses which were never seen in the log. This can thus put the input to the network outside the training distribution, leading to poor predictions and hence poor driving behavior. However, this is what we care about and hence the evaluations in closed-loop settings are crucial. We have also clarified this in the revised paper.\\n\\n<response continued below>\"}", "{\"title\": \"Author Response\", \"comment\": \"Thanks for reviewing the paper and for your valuable feedback.
Motion planning is really useful in cases where the cost function, constraints, and dynamics are clear; none of that is true for driving -- writing down the true cost we want optimized is hard and contextual, and the dynamics are hard especially because they involve other people in the environment and what they will do; we thus think it's useful to see how far imitation learning can be pushed as an alternative. We would also like to emphasize that this is not only useful for driving the car, but also can be integrated within a motion planner as a model of how other people will act.\\n\\n\\u201cData collection details\\u201d: Data about the prior map of the environment (roadmap) and the speed-limits along the lanes is collected apriori. For the dynamic scene entities like objects and traffic-lights, we employ a separate perception system based on laser and camera data similar to existing works in the literature [1,2]. We have clarified this in the revised version.\\n\\n[1] Fairfield, Nathaniel, and Chris Urmson. \\\"Traffic light mapping and detection.\\\" Robotics and Automation (ICRA), 2011 IEEE International Conference on. IEEE, 2011.\\n[2] Yang, Bin, Ming Liang, and Raquel Urtasun. \\\"HDNET: Exploiting HD Maps for 3D Object Detection.\\\" Conference on Robot Learning. 2018.\\n\\n\\u201cOther references\\u201d: Our paper includes recent references to works on autonomous driving using the mid-level input/output representations. In the revised version, we have also included the suggested references which specifically target use-cases like predicting other agents\\u2019 intent.\\n\\n\\u201cEvaluation details\\u201d: The training data is collected from real-world driving. We perform testing on these kinds of data:\", \"simulated_data\": \"For this evaluation, we create specific scenarios as described in section 6.3.1 within our simulator to allow us to test specific conditions without introducing the complexities of real world all at the same time and the quantitative results in Fig. 4 point to these results [ https://sites.google.com/view/learn-to-drive#h.p_XLYMjRiONt1e ].\", \"logged_data\": \"For this evaluation, we take logs from our real-driving test data (separate from our training data), and use our trained network to drive the car using the vehicle simulator keeping everything else the same i.e. the dynamic objects, traffic-light states etc. are all kept the same as in the logs. These drives are shown in the supplemental website [ https://sites.google.com/view/learn-to-drive#h.p_cxFQRIZYOQ7o ].\", \"logged_ablation_data\": \"This is the same as above, except that we modify some of the rendered inputs like removing the stop-signs or other dynamic objects to generate the input ablation results in section 6.3.2 and the videos on the supplemental site [ https://sites.google.com/view/learn-to-drive#h.p_WjtxfxJsNmJT ].\", \"real_drive\": \"This is where we let the network drive a real car [ https://sites.google.com/view/learn-to-drive#h.p_zId3Ux6DONGv ].\\n\\n\\u201cContribution Summary\\u201d: We have updated the introduction to clarify the contributions.\"}", "{\"title\": \"Flawed Approach with Poor Results\", \"review\": [\"The paper describes a framework for training a self-driving policy by augmenting imitation loss with additional loss terms that penalize undesired behaviors and that encourage progress. The policy takes as input a parsed representation of the scene (rather than raw images) and outputs pose trajectories for a down-stream controller. 
The method is trained on simulated data that includes perturbations to improve generalizability. The framework is evaluated in simulation through a series of ablations to better understand the contribution of the different loss terms.\", \"STRENGTHS\", \"Paper acknowledges the difficulty of end-to-end (pixels-to-torque) learning for autonomous driving and instead reasons over pre-processed inputs in the form of lower-dimensional images (and image sequences) that capture obstacles as bounding boxes and simple lines for routes, grayscale intensities, etc. Similarly, the output is a trajectory that is then fed to a controller responsible for tracking this trajectory.\", \"WEAKNESSES\", \"The insufficiency of behavioral cloning is not surprising, as noted, given the covariate shift. It would be interesting to consider a no-regret formulation analogous to Ross et al., 2011, even though it would require interaction with a human.\", \"The limitation of producing paths in this way is that the network does not explicitly reason over the feasibility of the path, which is important for non-holonomic vehicles. Instead, the network must learn the kinematic and dynamic constraints.\", \"Perturbations of the simulated trajectories are used to expose the model to collisions and other rare events, but is not clear that simple trajectory perturbations such as those used here provide a sufficient exposure to these rare events.\", \"The fact that the 2D image that expresses the vehicle's position is absolute limits the environment in which the network is valid. The experiments are conducted on images corresponding to an 80m x 80m environment, which is trivially small.\", \"The proposed framework is highly specific to self-driving and the extent to which it provides insights for other domains is not clear.\", \"The ablation experiments are not very compelling. In the case of the nudging experiment, all models result in collisions with M4 being the best model with a 10% collision rate. The trajectory perturbation results are better. In the case of the slowing experiment, M3 is the only version to not result in collision, whereas M4 collides 5% of the time. It isn't clear than which model is preferable since, while M3 never collides in the case of the slowing down experiment, it collides 45% of the time in the nudging experiment, almost as frequently as the M0 baseline.\", \"The paper claims that the model was run on a real robot, but there is no experimental evaluation of the results, only a reference to videos. The results of these experiments should be quantified and discussed or the reference to running on a real vehicle should be toned down, if not removed.\", \"Equation 3 requires knowledge of the ground-truth distribution. How is this determined?\"], \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Interesting examples, somewhat weaker evaluation\", \"review\": [\"The authors present a very interesting work on predicting future motion of a self-driving vehicle given image inputs that represent its surrounding and history. The authors use RNNs for this task, and data augmentation to make their model more robust. They also present a number of very interesting videos showcasing the performance.\", \"Several key aspects of the work are not well explained. E.g., what are pixel sizes, time resolution, where is the vehicle positioned within the image? 
All this is missing.\", \"Traffic lights are represented as \\\"a sequence of grayscale images\\\", how exactly, one for each state? Or some other way.\", \"How were videos generated, how were various channels collapsed?\", \"Dashed arrows not explained in Fig 2.\", \"\\\"a small regression tower\\\", this needs to be elaborated. As well as other mentions of \\\"towers\\\".\", \"In (3), is the sum over all pixels missing?\", \"Section 4.1.2 is not clear, this needs to be expanded. It is not well explained how exactly these losses are computed and used.\", \"For past-motion dropout, then you simply give blank input?\", \"Figure 6 is referenced in the regular text although it is located in the appendix.\", \"Orienting vertical axis with delta of +-25deg (as explained in Section 6.2) is not observed in the given videos, seems that there is no delta there. Is that done only during training?\", \"What is the exact difference between open- and closed-loop experiments? Given that a number of other key aspects are missing, I am not sure I fully understand a difference here as well.\", \"One of major issues in the evaluation section is that other baselines are missing (especially in the context of Fig 5). Even the more obvious ones would help a lot with understanding the performances, such as vehicle continuing to do what it was doing, or baseline predicting the route). This is a major flaw of the paper.\", \"Some recent related work missing, see [1], [2], and related work.\", \"[1] Fast and Furious: Real Time End-to-End 3D Detection, Tracking and Motion Forecasting With a Single Convolutional Net, Luo, Wenjie, Bin Yang, and Raquel Urtasun, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018.\", \"[2] Short-Term Motion Prediction of Traffic Actors for Autonomous Driving using Deep Convolutional Networks, Djuric, N., Radosavljevic, V., Cui, H., Nguyen, T., Chou, F.-C., Lin, T.-H., Schneider, J., arXiv preprint:1808.05819, 2018.\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"A reasonable approach for self-driving vehicle control.\", \"review\": \"Summary.\\nThe paper proposes a vehicle\\u2019s trajectory planner that iteratively predict next-step (longitudinal and latitudinal) position of an ego-vehicle. Instead of using a raw image, a set of handcrafted features (i.e., the status of traffic lights, route, roadmap, etc) are mapped onto a fixed-size of bird-eye view map, which is then fed into the recurrent neural network. Additional regularizing loss terms are explored for the robustness of the model. The effectiveness of the method is demonstrated in simulation and real-world experiment.\\n\\nStrengths.\\n- Impressive demonstrations in simulation and real-world experiments.\\n- The paper is generally well-written and easy to follow.\\n\\nvs. Existing motion planning approaches.\\nThere exists a large volume of papers on vehicle motion planning, which has largely been explored for controlling self-driving vehicles. Some of them successfully demonstrated their effectiveness for navigating a vehicle in typical driving scenarios, including \\u201cslowing down for a slow car\\u201d.\", \"a_notable_survey_may_include\": \"[1] Paden et al., \\u201cA survey of motion planning and control techniques for self-driving urban vehicles,\\u201d IEEE Transactions on intelligent vehicles, 2016. 
\\n\\nHowever, the paper provides neither any works of literature on existing motion planners nor any type of comparison with them. This makes it hard to judge whether the proposed learning-based motion planner outperforms others, including conventional optimization-based methods.\\n\\nMissing data collection details.\\nThis work depends hugely on its own human-designated oracle-like map, which provides driving-related features, such as lanes, the status of traffic lights, speed limits, desired route, dynamic objects, etc. Generating this map would not be a trivial task, but details are missing on (1) how this data was collected and (2) how this data can be collected during the testing time (especially for dynamic objects/traffic light status). Section 6.2 should be explained in more detail.\\n\\nA weak novelty of using intermediate-level input/output representation.\\nThere exist similar approaches that utilized similar representations to determine a vehicle\\u2019s behaviour; examples may include:\\n\\n[1] Lee et al., \\u201cConvolution Neural Network-based Lane Change Intention Prediction of Surrounding Vehicles for ACC,\\u201d IEEE ITSC 2017.\\n[2] Wu et al., \\u201cModeling trajectories with recurrent neural networks,\\u201d IJCAI, 2017.\\n\\nMissing evaluation details.\\nIn Section 6.2 (though not mentioned), it seems that a training dataset is collected from 60 days of real-world driving (given the context). But, in the testing phase, it seems that the authors used a simulator to evaluate different driving scenarios with various initial conditions (i.e., speed, heading angle, position, etc.). Can the authors clarify details of the evaluation environment?\\n\\nMinor concerns.\\nA paragraph of contribution summary (in the Introduction section) would help.\\nTypos (e.g., Section 2 line 17: \\u2018off of\\u2019)\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
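The "Equation 3" clarification in the author responses above reduces to a standard trick: with a Dirac-delta (one-hot) ground truth over spatial coordinates, the cross-entropy H(P_k, P_k^gt) is just the negative log-probability that the predicted heatmap assigns to the target pixel. A minimal sketch of that computation follows; shapes and names are illustrative assumptions, not the paper's code.

import numpy as np

def waypoint_cross_entropy(logits, target_uv):
    # logits: (H, W) unnormalized scores; target_uv: (row, col) of the GT waypoint.
    z = logits - logits.max()                  # shift for numerical stability
    log_probs = z - np.log(np.exp(z).sum())    # log-softmax over all H*W pixels
    return -log_probs[target_uv]               # H(P, delta_target) = -log P[target]

loss = waypoint_cross_entropy(np.random.randn(64, 64), (10, 20))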
Hyxtso0qtX
Adversarial Exploration Strategy for Self-Supervised Imitation Learning
[ "Zhang-Wei Hong", "Tsu-Jui Fu", "Tzu-Yun Shann", "Yi-Hsiang Chang", "Chun-Yi Lee" ]
We present an adversarial exploration strategy, a simple yet effective imitation learning scheme that incentivizes exploration of an environment without any extrinsic reward or human demonstration. Our framework consists of a deep reinforcement learning (DRL) agent and an inverse dynamics model contesting with each other. The former collects training samples for the latter, and its objective is to maximize the error of the latter. The latter is trained with samples collected by the former, and generates rewards for the former when it fails to predict the actual action taken by the former. In such a competitive setting, the DRL agent learns to generate samples that the inverse dynamics model fails to predict correctly, and the inverse dynamics model learns to adapt to the challenging samples. We further propose a reward structure that ensures the DRL agent collects only moderately hard samples and not overly hard ones that prevent the inverse model from imitating effectively. We evaluate the effectiveness of our method on several OpenAI gym robotic arm and hand manipulation tasks against a number of baseline models. Experimental results show that our method is comparable to that directly trained with expert demonstrations, and superior to the other baselines even without any human priors.
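A minimal sketch of the training loop this abstract describes, with every interface (env, policy, inverse_model, fit_inverse_model) invented for illustration: the DRL agent is rewarded by the inverse dynamics model's prediction error, and the inverse model is then fit on the transitions the agent collected. The cap at delta below merely stands in for the paper's reward structure that discourages overly hard samples; it is not the paper's Eq. (7).

import numpy as np

def collect_and_train(env, policy, inverse_model, fit_inverse_model,
                      delta=1.5, horizon=100):
    s = env.reset()
    transitions, rewards = [], []
    for _ in range(horizon):
        a = policy(s)
        s_next = env.step(a)
        pred_a = inverse_model(s, s_next)        # action predicted from the pair
        err = float(np.mean((pred_a - a) ** 2))  # inverse model's prediction error
        rewards.append(min(err, delta))          # reward hard-to-predict samples, capped
        transitions.append((s, a, s_next))
        s = s_next
    fit_inverse_model(transitions)               # the model adapts to the hard samples
    return rewards                               # used to update the DRL policy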
[ "adversarial exploration", "self-supervised", "imitation learning" ]
https://openreview.net/pdf?id=Hyxtso0qtX
https://openreview.net/forum?id=Hyxtso0qtX
ICLR.cc/2019/Conference
2019
{ "note_id": [ "SJeg8MrQgN", "r1l6EvQS14", "SygTzvXHJV", "rkxrJw7HkV", "HkeGILmSkV", "HJgWrZ2u0X", "SkxwoWYnaQ", "HJgAY-KnaX", "SJgi2xY26Q", "SJx05eY3TQ", "BkxL-Cdha7", "SkltsaunaQ", "S1xCH6unTX", "Syl-u4YshX", "H1l5QTm5nQ", "rkxzS_7qnm" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544929864170, 1544005429278, 1544005397352, 1544005341407, 1544005194397, 1543188793050, 1542390175117, 1542390149992, 1542389938902, 1542389910119, 1542389245862, 1542389153223, 1542389062502, 1541276777327, 1541188898127, 1541187642203 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper640/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper640/Authors" ], [ "ICLR.cc/2019/Conference/Paper640/Authors" ], [ "ICLR.cc/2019/Conference/Paper640/Authors" ], [ "ICLR.cc/2019/Conference/Paper640/Authors" ], [ "ICLR.cc/2019/Conference/Paper640/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper640/Authors" ], [ "ICLR.cc/2019/Conference/Paper640/Authors" ], [ "ICLR.cc/2019/Conference/Paper640/Authors" ], [ "ICLR.cc/2019/Conference/Paper640/Authors" ], [ "ICLR.cc/2019/Conference/Paper640/Authors" ], [ "ICLR.cc/2019/Conference/Paper640/Authors" ], [ "ICLR.cc/2019/Conference/Paper640/Authors" ], [ "ICLR.cc/2019/Conference/Paper640/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper640/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper640/AnonReviewer3" ] ], "structured_content_str": [ "{\"metareview\": \"This paper proposes a method for incentivizing exploration in self-supervised learning using an inverse model, and then uses the learned inverse model for imitating an expert demonstration. The approach of incentivizing the agent to visit transitions where a learned model performs poorly. This relates to prior work (e.g. [1]), but using an inverse model instead of a forward model. The results are promising on challenging problem domains, and the method is simple. The authors have addressed several of the reviewer concerns throughout the discussion period.\\nHowever, three primary concerns remain:\\n(A) First and foremost: There has been confusion about the problem setting and the comparisons. I think these confusions have stemmed from the writing in the paper not being sufficiently clear. First, it should be made clear in the plots that the \\\"Demos\\\" comparison is akin to an oracle. Second, the difference between self-supervised imitation learning (IL) and traditional IL needs to be spelled out more clearly in the paper. Given that self-supervised imitation learning is not a previously established term, the problem statement needs to be clearly and formally described (and without relying heavily on prior papers). Further, the term self-supervised imitation learning does not seem to be an appropriate term, since imitation learning from an expert is, by definition, not self-supervised, as it involves supervisory information from an expert. Changing this term and clearly defining the problem would likely lead to less confusion about the method and the relevant comparisons.\\n(B) The \\\"Demos\\\" comparison is meant as an upper bound on the performance of this particular approach. 
However, it is also important to understand what the upper bound is on these problems in general, irrespective of whether or not an inverse model is used. Training a policy with behavior cloning on demonstrations with many (s,a) pairs would be able to better provide such a comparison.\\n(C) Inverse models inherently model the part of the environment that is directly controllable (e.g. the robot arm), and often do not effectively model other aspects of the environment that are only indirectly controllable (e.g. the objects). If the method overcomes this issue, then that should be discussed in the paper. Otherwise, the limitation should be outlined and discussed in more detail, including text that outlines which forms of problems and environments this approach is expected to be able to handle, and which of those it cannot handle.\\n\\nGenerally, this paper is quite borderline, as indicated by the reviewers' scores. After going through the reviews and parts of the paper in detail, I am inclined to recommend reject as I think the above concerns do not outweigh the pros.\\n\\nOne more minor comment is that the paper should consider mentioning the related work by Torabi et al. [2], which considers a similar approach in a slightly different problem setting.\\n\\n[1] Stadie et al. https://arxiv.org/abs/1507.00814\\n[2] Torabi et al. IJCAI '18 (https://arxiv.org/abs/1805.01954)\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"meta review\"}", "{\"title\": \"Response to reviewer 3 (Part 4/4)\", \"comment\": \"Comment: The fact that on the majority of tasks the \\u201cDemos\\u201d baseline is so far from 100% leaves me puzzled. Was it not enough training iterations? Would GAIL perform a lot better?\", \"response\": \"We would like to address the reviewer\\u2019s questions in three respects.\\n\\nFirst, we would like to bring to the reviewer\\u2019s kind attention that more iterations do not necessarily result in a higher success rate for the \\u201cDemo\\u201d baseline. In the responses and the revised manuscript we posted on OpenReview last time, we included an additional experiment in Section S.10 comparing the performance of the Demo baseline trained with different numbers of iterations. Fig. 7 illustrates the performance of Demo with different numbers of expert demonstrations. Demo(100), Demo(1,000), and Demo(10,000) correspond to the Demo baselines with 100, 1,000, and 10,000 episodes of demonstrations, respectively. It is observed that the three curves in Fig. 7 saturate to similar performance for most of the tasks. This experiment can therefore serve as evidence that the performance of the Demo baseline is not fully determined by the number of iterations.\\n\\nThe gap between the real success rate and 100% for the Demo baseline is attributable to the model architecture, which limits the final performance of the imitator. Different model architectures do lead to different success rates of the self-supervised IL models. However, as the main contribution of this paper is an adversarial exploration strategy for self-supervised imitation learning, we fixed the model architecture for all of our experiments. Discussion of the most effective model architecture for IL is beyond the scope of this paper.\\n\\nFinally, we hope the reviewer could understand that GAIL and self-supervised IL are different in their problem formulations.
GAIL trains the imitator with expert demonstrations during the training phase, and uses the learned policy to perform predefined tasks in the evaluation phase without any additional demonstrations. On the other hand, self-supervised IL aims at training an effective inverse dynamics model, and executes it in the evaluation phase by following an expert\\u2019s observations online. GAIL is unable to perform this kind of task if the corresponding expert demonstrations are not provided during the training phase. As a result, these two streams of methods are not comparable because of their distinct problem formulations.\"}", "{\"title\": \"Response to reviewer 3 (Part 3/4)\", \"comment\": \"Comment: I guess the main disagreement between myself and the authors of the paper is the question of how such work should be positioned and evaluated. The authors suggest that their method should be compared only to other methods from the narrow subfield of learning general-purpose inverse models without any supervision and then using them for what effectively is one-shot imitation learning (at least that\\u2019s my understanding). They argue that any statistically significant improvement upon baselines from this narrow subfield is sufficient for publication. With all due respect to the hard work that the authors have conducted, I am inclined to disagree.\\n\\nMy opinion is that these methods should be considered in a broader context of imitation learning in general. Yes, methods such as behavioral cloning, GAIL and IRL are less generic than the proposed one and use more expert data, but on the other hand they come with a clear guarantee that the more data you collect the better your performance is, and for the time being are probably superior to fully unsupervised methods. I believe that research such as this paper should make it clear how much of the gap between fully unsupervised imitation learning and conventional imitation learning is bridged by the proposed method. I believe conventional imitation learning baselines should be established with better diligence than what this work does.\", \"response\": \"We respectfully disagree with the reviewer\\u2019s opinion, as self-supervised imitation learning (IL) and traditional IL belong to two different branches of research. In addition, the primary target of this paper is to enhance the training efficiency of self-supervised IL, rather than proposing a method to compete with existing IL methods. We illustrate the difference between self-supervised imitation learning (IL) and traditional IL (e.g., BC, GAIL, etc.) as follows (Traditional IL vs. Self-Supervised IL):\\nhttps://www.dropbox.com/s/g6bycuzefa9u4k3/Self-Supervised_IL.jpeg?dl=0\\n\\nWe summarize our perspectives in the following.\\n\\nFirst, as described in Section 1, self-supervised IL allows an imitator to collect training data by itself instead of using pre-defined extrinsic reward functions or expert supervision during training. It only needs demonstrations during inference, drastically decreasing the time and effort required from human experts. In complex environments such as Mujoco [5], Roboschool [6], the OpenAI robotic arm and hand tasks [7], and real robotic tasks, it is extremely difficult to collect sufficient expert demonstrations in a reasonable amount of time. Traditional methods of data collection are usually inefficient and time-consuming.
Inefficient data collection results in poor exploration, giving rise to a degradation in robustness to varying environmental conditions (e.g., noise in motor control) and generalizability to difficult tasks. The proposed method enables an IL model to learn from the data prepared by the DRL agent, significantly removing the need for human intervention.\\n\\nThe second difference is that during the training phase, self-supervised IL motivates an inverse-dynamics-model-based imitator to learn by interacting with the environment, while traditional IL requires a DRL-agent-based imitator to learn from expert demonstrations.\\n\\nThe third difference is that during the evaluation phase, self-supervised IL requires an imitator to infer the transitional action (a_t) between the current observation (x_t) and the given observation (\\\\hat{x}_{t+1}) from the expert\\u2019s demonstration, while traditional IL does not use any expert demonstration during the evaluation phase. The imitator trained by traditional IL simply follows the policy learned during the training phase, and is unable to adapt to tasks not included in the training data.\\n\\nFinally, in terms of objectives, self-supervised IL aims at training an effective inverse dynamics model, and executes it in the evaluation phase by following an expert\\u2019s observations online. On the other hand, traditional IL methods focus on learning a policy and performing predefined tasks only.\\n\\nIn summary, self-supervised IL differs from traditional IL in several aspects, including their methods of training data preparation, training and evaluation procedures, as well as their objectives. Hence, we consider that these two methods are not directly comparable for the above reasons.\\n\\n[5] E. Todorov, T. Erez, and Y. Tassa, \\u201cMujoco: A physics engine for model-based control,\\u201d in Proc. IEEE/RSJ Int. Conf. Intelligent Robots and Systems (IROS), pp. 5026-5033, Oct. 2012.\\n[6] Roboschool, available at https://blog.openai.com/roboschool/.\\n[7] Ingredients for Robotics Research environments, available at https://blog.openai.com/ingredients-for-robotics-research/.\"}", "{\"title\": \"Response to reviewer 3 (Part 2/4)\", \"comment\": \"Comment: Thank you for commenting on the number of demonstrations. I am still not fully satisfied though: my understanding is that given enough examples, behavioural cloning should eventually perform perfectly, which is not the case, according to Fig. 7. Do you train for not long enough? Not having a clear baseline makes it hard to put your results in context and understand how useful they actually are.\", \"response\": \"We would like to bring to the reviewer\\u2019s kind attention that the \\u201cDemo\\u201d baseline is different from behavior cloning. For the \\u201cDemo\\u201d baseline presented in this paper, the inverse dynamics model is trained with the data obtained from expert demonstrations. Behavior cloning (BC), on the other hand, does not train an inverse dynamics model. Instead, BC requires a model to learn a policy directly from expert demonstrations.
We illustrate the difference between self-supervised imitation learning (IL) and traditional IL (e.g., BC, GAIL, etc.) as follows (Traditional IL vs. Self-Supervised IL):\\nhttps://www.dropbox.com/s/g6bycuzefa9u4k3/Self-Supervised_IL.jpeg?dl=0\\n\\nThe reason why the performance of the \\u201cDemo\\u201d baseline is not perfectly 100% is attributable to the model architecture, which limits the final performance of the imitator. Different model architectures do lead to different success rates of the self-supervised IL models. As the main contribution of this paper is the adversarial exploration strategy for self-supervised imitation learning, we fixed the model architecture for all of our experiments. Discussion of the most effective model architecture for IL is beyond the scope of this paper.\\n\\nFig. 7 illustrates the performance of Demo with different numbers of expert demonstrations. Demo(100), Demo(1,000), and Demo(10,000) correspond to the Demo baselines with 100, 1,000, and 10,000 episodes of demonstrations, respectively. It is observed that the three curves in Fig. 7 saturate to similar performance for most of the tasks. This experiment can therefore serve as evidence that the performance of the Demo baseline is not fully determined by the number of iterations.\"}", "{\"title\": \"Response to reviewer 3 (Part 1/4)\", \"comment\": \"Lastly, your plots in Figure 4 seem to be showing that the transformation performed in Eq. 7 decreases first and foremost the frequency of easy examples, and not the difficult ones (I\\u2019m comparing w/o stab and w. stab). I don\\u2019t think you discuss this discrepancy in the paper or in your response.\", \"response\": \"We are afraid that there seems to be some misunderstanding. In Section 4.5, we discussed that for each of the five cases, the mode of Ours (w stab) is close to the value of \\u03b4 (plotted in a dotted line), indicating that our reward structure presented in Eq. (7) does help to regulate L_I (and thus r_t) to be around \\u03b4. In other words, the DRL agent is not motivated to collect easy samples (L_I < \\u03b4) or hard samples (L_I > \\u03b4). It is only encouraged to collect moderately hard samples (L_I \\u2248 \\u03b4) to train the inverse dynamics model. We illustrate the effect of Eq. (7) in the following figure:\\nhttps://www.dropbox.com/s/mhivrt9a1eoxhyd/fig_rebuttal_reward_shaping.png?dl=0\\n\\nWe would also like to bring to the reviewer\\u2019s attention that in the revised manuscript we posted on OpenReview last time, we have included the above figure in the manuscript as Fig. 8 along with an additional Section S.12 to visualize how rewards are shaped.\\n\\nIn addition, in Section 4.5 of the original manuscript we have discussed that from Fig. 4, it can be observed that the modes of Ours (w stab) are lower than those of Ours (w/o stab) in most cases, implying that the stabilization technique does motivate the DRL agents to favor those moderately hard samples.\\n\\nWe sincerely hope that we have adequately addressed your concerns.\"}", "{\"title\": \"Feedback from Reviewer 3\", \"comment\": \"Dear Authors,\\n\\nThank you for your clarifications, they were quite helpful. I think you should consider editing the spots of the paper that caused confusion; in particular, you could explain the gist of how self-supervised IL works as early as in the intro (1 sentence), and elaborate more on the sample complexity differences between discrete and continuous action spaces.
\\n\\nIn Eq. 7 you are abusing the notation, by using r_t both on the right-hand side and the left-hand side. This is confusing. You could use e.g. \\\\hat{r}_t on the left-hand side. Also, this transformation is not really reward shaping in the sense of (Ng et al., 1999) (https://goo.gl/t68wpH), which is the sense in which most of the community understands this term. Lastly, your plots in Figure 4 seem to be showing that the transformation performed in Eq. 7 decreases first and foremost the frequency of easy examples, and not the difficult ones (I\\u2019m comparing w/o stab and w. stab). I don\\u2019t think you discuss this discrepancy in the paper or in your response.\\n\\nThank you for commenting on the number of demonstrations. I am still not fully satisfied though: my understanding is that given enough examples, behavioural cloning should eventually perform perfectly, which is not the case, according to Fig. 7. Do you train for not long enough? Not having a clear baseline makes it hard to put your results in context and understand how useful they actually are.\\n\\nI guess the main disagreement between myself and the authors of the paper is the question of how such work should be positioned and evaluated. The authors suggest that their method should be compared only to other methods from the narrow subfield of learning general-purpose inverse models without any supervision and then using them for what effectively is one-shot imitation learning (at least that\\u2019s my understanding). They argue that any statistically significant improvement upon baselines from this narrow subfield is sufficient for publication. With all due respect to the hard work that the authors have conducted, I am inclined to disagree. My opinion is that these methods should be considered in a broader context of imitation learning in general. Yes, methods such as behavioral cloning, GAIL and IRL are less generic than the proposed one and use more expert data, but on the other hand they come with a clear guarantee that the more data you collect the better your performance is, and for the time being are probably superior to fully unsupervised methods. I believe that research such as this paper should make it clear how much of the gap between fully unsupervised imitation learning and conventional imitation learning is bridged by the proposed method. I believe conventional imitation learning baselines should be established with better diligence than what this work does. The fact that on the majority of tasks the \\u201cDemos\\u201d baseline is so far from 100% leaves me puzzled. Was it not enough training iterations? Would GAIL perform a lot better?\\n\\nYou may want to consider studying in your future research how helpful your method is when combined with behavioral cloning or GAIL. That could be a compelling argument. You may try to explain better why all methods (including \\u201cDemos\\u201d) perform so poorly on the majority of tasks. But as is, I am not convinced that the paper is ready to be presented. This is my opinion, and the AC will have the last word, of course.\"}", "{\"title\": \"Response to reviewer 2 (Part 2/2)\", \"comment\": \"Q4: Figs. 4, 5, and 6 all relate to the stabilizer value delta, and I have a couple questions here: (i) for what delta does performance start to degrade? At delta=inf, I think it should be the same as no stabilizer, while at delta=0 it is the exact opposite reward (i.e. negative loss, easy samples). (ii) delta=3 is evaluated, and performance looks decent for this in fig 6 --- but fig 4 shows that the peak PDF of \\\"no stabilizer\\\" is around 3 as well, yet \\\"no stabilizer\\\" performs poorly in Fig 5. Why is this, if it tends to produce actions with loss around 3 in both cases?
(ii) delta=3 is evaluated, and performance looks decent for this in fig 6 --- but fig 4 shows that the peak PDF of \"no stabilizer\" is around 3 as well, yet \"no stabilizer\" performs poorly in Fig 5. Why is this, if it tends to produce actions with loss around 3 in both cases?\", \"response\": \"(i) Many thanks for raising this interesting question. We have conducted additional experiments to investigate this issue, and have analyzed the results in the updated version of our manuscript. In Fig. 6, we compare the learning curves of the imitator for different values of \u03b4. For instance, Ours(0.1) corresponds to \u03b4 = 0.1. It is observed that for most of the tasks, the success rates drop when \u03b4 is set to an overly high or low value (e.g., 100.0 or 0.0), suggesting that a moderate value of \u03b4 is necessary for the stabilization technique. The value of \u03b4 can be adjusted dynamically by the adaptive scaling technique presented in [2], which we leave as a future direction.\n\n(ii) We would like to clarify this issue as follows. Although the peaks of \u201cOurs(w/o stab)\u201d in Fig. 4 are around 3, this does not suggest that their final success rates are comparable to those of \u201cOurs(3.0)\u201d (with our stabilization technique) in Fig. 6. Please note that Fig. 4 only plots the first 2K training batches of the inverse dynamics model in the entire training process. After 2K, the peaks of \u201cOurs(3.0)\u201d and \u201cOurs(w stab)\u201d still stay close to their \u03b4 values for the rest of the training process, while that of \u201cOurs(w/o stab)\u201d gradually grows to around 1K, which is prohibitively higher than reasonable values. Such a high training loss could cause the exploding gradient problem, which typically leads to a severe performance drop [3]. This also explains why \u201cOurs (w/o stab)\u201d performs poorly in Fig. 5. As a result, \u201cOurs(3.0)\u201d is superior to \u201cOurs (w/o stab)\u201d due to its relatively more stable gradients. \n\nPlease also note that plotting the training losses of only the first 2K training batches in Fig. 4 is intended to enhance the visualization and readability of the results. As the training losses of \u201cOurs(w/o stab)\u201d span from low values to extraordinarily high values, it is not feasible to directly compare the PDF of \u201cOurs(w/o stab)\u201d with that of \u201cOurs(w stab)\u201d. Therefore, we selected a range of data (i.e., the first 2K batches) in which the training losses of \u201cOurs(w/o stab)\u201d are still under 10. We have incorporated additional figures in our supplementary material to illustrate the above observations.\n\n[1] D. Pathak et al., \"Zero-shot visual imitation,\" in Proc. Int. Conf. Learning Representations (ICLR), Apr.-May 2018.\n\n[2] M. Plappert et al., \"Parameter space noise for exploration,\" in Proc. Int. Conf. Learning Representations (ICLR), Apr.-May 2018.\n\n[3] R. Pascanu, T. Mikolov, and Y. Bengio, \"On the difficulty of training recurrent neural networks,\" in Proc. Int. Conf. Machine Learning (ICML), pp. 1310-1318, Jun. 
2013.\"}", "{\"title\": \"Response to reviewer 1 (Part 2/2)\", \"comment\": \"Q3: Only accounts for the immediately controllable aspects of the environment, which doesn't seem to be the hard part. Understanding the rest of the environment and its relationship to the controllable part of the state seems beyond the scope of this model.\", \"response\": \"Although most DRL works suggest that the rewards should be re-scaled or clipped within a range (e.g., from -1 to 1), the unbounded rewards do not introduce any issue during the training process of our experiments. The empirical rationale is that the rewards received by the DRL agent are regulated by Eq. (7) to be around \u03b4, as described in Section 4.5 and depicted in Fig. 4. Without the stabilization technique, however, the learning curves of the inverse dynamics model degrade drastically (as illustrated in Fig. 5), even if the reward clipping technique is applied.\n\n[1] M. Fortunato et al., \"Noisy networks for exploration,\" in Proc. Int. Conf. Learning Representations (ICLR), Apr.-May 2018. \n\n[2] \u0141. Kidzi\u0144ski et al., \"Learning to run challenge solutions: Adapting reinforcement learning methods for neuromusculoskeletal environments,\" arXiv:1804.00361, Apr. 2018.\", \"q4\": \"Nonetheless I can imagine it helping with initial random motions.\", \"q5\": \"From Eq. (6) the bonus seems to be unbounded and Eq. (7) doesn't seem to fix that. Is that not an issue in general? 
Any intuition about that?\"}", "{\"title\": \"Response to reviewer 1 (Part 1/2)\", \"comment\": \"Here is the PDF version of our responses: https://www.dropbox.com/s/0r2hztg7af87934/Response_1_ICLR_2019.pdf?dl=0 (anonymous link)\n\nThe authors appreciate the reviewer\u2019s time and effort in reviewing this paper, and would like to respond to the questions in the following paragraphs.\", \"q1\": \"The paper proposes an exploration strategy for deep reinforcement learning agents in continuous action spaces.\", \"response\": \"We would like to address the reviewer\u2019s concerns in the following two paragraphs.\n\nFirst, with regard to the \u201cinstability\u201d issue, we are not quite sure which aspect the reviewer refers to, and would appreciate it if the reviewer could kindly share some more information with us. We assume that the reviewer could be referring to either the variance of the training losses, or the variance in the learning curves of Fig. 3. For the former case, a stabilization technique is presented in Section 3.3, and its effectiveness is analyzed and validated in Section 4.5. For the latter case, the variance in the learning curves is mainly caused by the complexity of the high-dimensional observation spaces. As contemporary DRL techniques also suffer from the same problems when training with raw images (i.e., high-dimensional observation spaces) [1], the high variance in the learning curves can similarly be alleviated by enhancing the model architecture or the training algorithm. Please note that this issue also occurs in the learning curves of the other baseline methods. Discussion of model architectures and specific training techniques for high-dimensional observation spaces, however, is beyond the scope of this paper.\n\nSecond, we agree with the reviewer that learning a policy in an environment with a large state space has been a challenging research topic. However, in the past few years, a number of DRL methods have been proposed and achieved remarkable successes in such environments, including humanoid robotic control [2]. The successes of these methods indicate that even in an environment with a large state space, it is still possible to discover an effective policy that maximizes the expected return. In the proposed adversarial exploration strategy, we train a DRL agent to maximize the expected losses of the inverse dynamics model (Eq. (6)). As DRL methods have been shown effective in exploiting arbitrary reward functions in large state spaces in the literature, we consider that our method can be extended to environments with large state spaces, and exploit the losses of the inverse dynamics model to collect training samples accordingly.\", \"q2\": \"Seems unstable and not clear how it would scale in a large state space where most states are going to be very difficult to learn about in the beginning, like a humanoid body.\"}", "{\"title\": \"Response to reviewer 3 (Part 3/3)\", \"comment\": \"Q10: \"Demos\" baseline doesn't perform much better, but what would happen with 10,000 demonstrations?\", \"response\": \"We would like to bring to the reviewer\u2019s kind attention that the primary focus of this work is self-supervised imitation learning (IL), rather than traditional IL (e.g., GAIL and IRL). The formulations of these two branches are fundamentally different from each other. Self-supervised IL takes demonstrations in the testing phase only. 
As a result, it allows the tasks to be altered online by changing the contents (i.e., trajectories) of the demonstrations. On the other hand, traditional IL uses demonstrations in the training phase, and does not allow online changes to the tasks in the testing phase. Therefore, we consider that these two branches should not be compared directly due to their distinct problem formulations.\n\n[1] M. Fortunato et al., \"Noisy networks for exploration,\" in Proc. Int. Conf. Learning Representations (ICLR), Apr.-May 2018.\", \"q11\": \"There is no comparison to behavioral cloning, GAIL, IRL. Would these methods perform better than learning IDM like \"Demos\" does?\"}", "{\"title\": \"Response to reviewer 3 (Part 2/3)\", \"comment\": \"Q6: It is commendable that 20 repetitions of each experiment were performed, but I am not sure if it is ever explained in the paper what exactly the upper and lower boundaries in the figures mean. Is it the standard deviation? A confidence interval?\", \"response\": \"We understand the reviewer\u2019s concerns. However, we do have some reservations about the perspective of the comments, and would like to discuss our points of view with the reviewer in two different aspects. First, the main scope of this work is self-supervised data-collection strategies for training inverse dynamics models. The baseline methods selected for comparison were mostly published this year. As demonstrated in Section 4, our method outperforms them significantly for most of the tasks in both low- and high-dimensional observation spaces, as well as in high-dimensional action spaces. Second, to the best of our knowledge, almost none of the previous works in this domain have investigated adversarial exploration strategies for training inverse dynamics models. Our work is the first proof of concept to demonstrate that the proposed adversarial strategy does improve the training efficiency and the performance of inverse dynamics models. We hope that the reviewer could kindly correct us if we are wrong, and consider the points discussed above.\", \"q7\": \"Can you comment on the variance of the proposed approach, which seems to be very high, especially when I am looking at the high-dimensional fetch-reach results?\", \"q8\": \"The results of the \u201cHandReach\u201d experiments, where the proposed method works much worse than \u201cDemos\u201d, are not discussed in the text at all.\", \"q9\": \"Overall, there is no example of the proposed method making a difference between a \u201cworking\u201d and \u201cnon-working\u201d system, compared to \u201cCuriosity\u201d and \u201cRandom\u201d. I am wondering if improvements from 40% to 60% in such cases are really important. In 7 out of 9 plots the performance of the proposed method is less than 80% - not very impressive.\"}", "{\"title\": \"Response to reviewer 3 (Part 1/3)\", \"comment\": \"Here is the PDF version of our responses: https://www.dropbox.com/s/mxhetdyy7m4nkp6/Response_3_ICLR_2019.pdf?dl=0 (anonymous link)\n\nThe authors appreciate the reviewer\u2019s time and effort in reviewing this paper, and would like to respond to the questions in the following paragraphs.\", \"q1\": \"The introduction has not made it crystal clear that the considered paradigm is different from e.g. DAGGER and GAIL in that expert demonstrations are used at inference time. 
A much wider audience is familiar with the former methods, and this distinction should have been explained more clearly.\", \"response\": \"We fully agree with your comments regarding the number of demonstrations. To address your concerns, we have incorporated an additional figure illustrating the learning curves of the Demo baseline with various numbers of demonstrations in the supplementary material. According to Fig. 7, we do not observe any significant difference in performance when the number of demonstrations is set to 100, 1,000, and 10,000. This is the reason why we used 1,000 demonstrations for training this baseline method in our experiments.\", \"q2\": \"Section 4.2.: \u201cAs opposed to discrete control domains, these tasks are especially challenging, as the sample complexity grows in continuous control domains.\u201d - this sentence did not make sense to me. It basically says continuous control is challenging because it is challenging.\", \"q3\": \"I did not understand the stabilization approach. How exactly does Eq. (7) force the policy to produce \u201cnot too hard\u201d training examples for IDM? Fig. 4 shows that, on the contrary, it is examples with small L_I that are avoided by using \u03b4 > 0.\", \"q4\": \"Table 1 - it is a bit counterintuitive that negative numbers are better than positive numbers here. Perhaps instead of the policy\u2019s deterioration you could report the relative change, negative when the performance goes down and positive otherwise?\", \"q5\": \"The \u201cDemos\u201d baseline approach should be explained in the main text! In Appendix S.7 I see that 1000 human demonstrations were used for training. Why 1000, and not 100 and not 10000? How would the results change? This needs to be discussed. Without discussing this it is really unclear how the proposed method can outperform \u201cDemos\u201d, which it does pretty often.\"}", "{\"title\": \"good training exploration, somewhat limited scope of experimental conditions\", \"review\": \"This paper presents a system for self-supervised imitation learning using an RL agent that is rewarded for finding actions that the system does not yet predict well given the current state. More precisely, an imitation learner I is trained to predict an action A given a desired observation state transition xt->xt+1; the training samples for I are generated using an RL policy that yields an action A to train given xt (a physics engine evaluates xt+1 from xt and A). The RL policy is rewarded using the loss incurred by I's prediction of A, so that moderately high loss values produce the highest reward. In this way, the RL agent learns to produce effective training samples that are not too easy or hard for the learner. The method is evaluated on five block manipulation tasks, comparing to training samples generated by other recent self-supervised methods, as well as those found using a pretrained expert model for each task.\n\nOverall, this exploration method seems quite effective on the tasks evaluated. I'd be curious to know more about the limits and failures of the method, e.g. in other types of environments.\", \"additional_questions\": [\"p.2 mentions that the environments \"are intentionally selected by us for evaluating the performance of inverse dynamics model, as each of them allows only a very limited set of chained actions\". What sort of environments would be less well fit? 
Are there any failure cases of this method where other baselines perform better?\", \"sec 4.3 notes that the self-supervised methods are pre-trained using 30k random samples before switching to the exploration policy, but in Fig 2, the success rates do not coincide between the systems and the random baseline, at either samples=0 or samples=30k --- should they? if not, what differences caused this?\", \"figs. 4, 5 and 6 all relate to the stabilizer value delta, and I have a couple of questions here: (i) for what delta does performance start to degrade? At delta=inf, I think it should be the same as no stabilizer, while at delta=0 it is the exact opposite reward (i.e. negative loss, easy samples). (ii) delta=3 is evaluated, and performance looks decent for this in fig 6 --- but fig 4 shows that the peak PDF of \"no stabilizer\" is around 3 as well, yet \"no stabilizer\" performs poorly in Fig 5. Why is this, if it tends to produce actions with loss around 3 in both cases?\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"interesting and potentially useful paper\", \"review\": \"The paper proposes an exploration strategy for deep reinforcement learning agents in continuous action spaces. The core of the method is to train an inverse local model (a model that predicts the action that was taken from a pair of states) and to use its errors as an exploration bonus for a policy gradient agent. The intuition is that it's a good self-regulating strategy, similar to curiosity, that leads the agents towards states that are less known by the inverse model. Seeing these states improves the inverse model. There are experiments run on the OpenAI gym comparing to other models of curiosity. The paper is well written and clear for the most part.\", \"pros\": [\"the paper seems novel and results are promising\", \"easy to implement\"], \"cons\": [\"seems unstable and not clear how it would scale in a large state space where most states are going to be very difficult to learn about in the beginning, like a humanoid body.\", \"only accounts for the immediately controllable aspects of the environment, which doesn't seem to be the hard part. Understanding the rest of the environment and its relationship to the controllable part of the state seems beyond the scope of this model. Nonetheless I can imagine it helping with initial random motions.\", \"from (6) the bonus seems to be unbounded and (7) doesn't seem to fix that. Is that not an issue in general? Any intuition about that?\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Cool idea, but there are issues in evaluation and results are weak overall\", \"review\": [\"The paper proposes a novel exploration strategy for self-supervised imitation learning. An inverse dynamics model is trained on the trajectories collected from an RL-trained policy. The policy is rewarded for generating trajectories on which the inverse dynamics model (IDM) currently works poorly, i.e. on which the IDM predicts actions that are far (in terms of mean square error) from the actions performed by the policy. This adversarial training is performed in a purely self-supervised way. The evaluation is performed by one-shot imitation of an expert trajectory using the IDM: the action is predicted from the current state of the environment and the next state in the expert\u2019s trajectory. 
Experimental evaluation shows that the proposed method is superior to baseline exploration strategies for self-supervised imitation learning, including random and curiosity-based exploration.\", \"Overall, I find the idea quite appealing. I am not an expert in the domain and cannot make comments on the novelty of the approach. I found the writing mostly clear, except for the following issues:\", \"the introduction has not made it crystal clear that the considered paradigm is different from e.g. DAGGER and GAIL in that expert demonstrations are used at inference time. A much wider audience is familiar with the former methods, and this distinction should have been explained more clearly.\", \"Section 4.2.: \u201cAs opposed to discrete control domains, these tasks are especially challenging, as the sample complexity grows in continuous control domains.\u201d - this sentence did not make sense to me. It basically says continuous control is challenging because it is challenging.\", \"I did not understand the stabilization approach. How exactly does Equation (7) force the policy to produce \u201cnot too hard\u201d training examples for IDM? Figure 4 shows that, on the contrary, it is examples with small L_I that are avoided by using \\delta > 0.\", \"Table 1 - it is a bit counterintuitive that negative numbers are better than positive numbers here. Perhaps instead of the policy\u2019s deterioration you could report the relative change, negative when the performance goes down and positive otherwise?\"], \"i_do_have_concerns_regarding_the_experimental_evaluation\": [\"the \u201cDemos\u201d baseline approach should be explained in the main text! In Appendix S.7 I see that 1000 human demonstrations were used for training. Why 1000, and not 100 and not 10000? How would the results change? This needs to be discussed. Without discussing this it is really unclear how the proposed method can outperform \u201cDemos\u201d, which it does pretty often.\", \"it is commendable that 20 repetitions of each experiment were performed, but I am not sure if it is ever explained in the paper what exactly the upper and lower boundaries in the figures mean. Is it the standard deviation? A confidence interval? Can you comment on the variance of the proposed approach, which seems to be very high, especially when I am looking at the high-dimensional fetch-reach results?\", \"the results of the \u201cHandReach\u201d experiments, where the proposed method works much worse than \u201cDemos\u201d, are not discussed in the text at all.\", \"overall, there is no example of the proposed method making a difference between a \u201cworking\u201d and \u201cnon-working\u201d system, compared to \u201cCuriosity\u201d and \u201cRandom\u201d. I am wondering if improvements from 40% to 60% in such cases are really important. In 7 out of 9 plots the performance of the proposed method is less than 80% - not very impressive. \"Demos\" baseline doesn't perform much better, but what would happen with 10000 demonstrations?\", \"there is no comparison to behavioral cloning, GAIL, IRL. Would these methods perform better than learning IDM like \"Demos\" does?\", \"I think that currently the paper is slightly below the threshold, due to the evaluation issues discussed above and the overall low performance of the proposed algorithm. 
I am willing to reconsider my decision if these issues are addressed.\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}" ] }
SylKoo0cKm
How Important is a Neuron
[ "Kedar Dhamdhere", "Mukund Sundararajan", "Qiqi Yan" ]
The problem of attributing a deep network’s prediction to its input/base features is well-studied (cf. Simonyan et al. (2013)). We introduce the notion of conductance to extend the notion of attribution to understanding the importance of hidden units. Informally, the conductance of a hidden unit of a deep network is the flow of attribution via this hidden unit. We can use conductance to understand the importance of a hidden unit to the prediction for a specific input, or over a set of inputs. We justify conductance in multiple ways via a qualitative comparison with other methods, via some axiomatic results, and via an empirical evaluation based on a feature selection task. The empirical evaluations are done using the Inception network over ImageNet data, and a convolutional network over text data. In both cases, we demonstrate the effectiveness of conductance in identifying interesting insights about the internal workings of these networks.
[ "attribution", "saliency", "influence" ]
https://openreview.net/pdf?id=SylKoo0cKm
https://openreview.net/forum?id=SylKoo0cKm
ICLR.cc/2019/Conference
2019
{ "note_id": [ "r1xERX6GeE", "HyeEiW9FR7", "Hye5Y6SQR7", "SJeVPpHQA7", "SyxMr6SmRQ", "r1eKZaBmAX", "H1xmAiWo6Q", "H1lipsyqp7", "BkeBNXOJT7", "HJxwToa927", "BkxH95gchQ", "SJe-HDDIn7", "BJgv0O6Snm" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "comment", "official_review", "official_review", "official_review", "official_comment", "comment" ], "note_created": [ 1544897484329, 1543246236080, 1542835585576, 1542835547943, 1542835514162, 1542835457121, 1542294475420, 1542220739167, 1541534508623, 1541229503231, 1541175948538, 1540941625316, 1540901070554 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper639/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper639/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper639/Authors" ], [ "ICLR.cc/2019/Conference/Paper639/Authors" ], [ "ICLR.cc/2019/Conference/Paper639/Authors" ], [ "ICLR.cc/2019/Conference/Paper639/Authors" ], [ "ICLR.cc/2019/Conference/Paper639/AnonReviewer3" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper639/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper639/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper639/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper639/Authors" ], [ "~Avanti_Shrikumar1" ] ], "structured_content_str": [ "{\"metareview\": \"This paper proposes a new measure to quantify the contribution of an individual neuron within a deep neural network. Interpretability and better understanding of the inner workings of neural networks are important questions, and all reviewers agree that this work is contributing an interesting approach and results.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Interesting work on how to measure the importance of an individual neuron in a network\"}", "{\"title\": \"Confirming point (b)\", \"comment\": \"I confirm that section 5.2 and 6.2 both have quantitative analyses. I missed the table 2 while responding to the anonymous comment.\"}", "{\"title\": \"Response to reviewer 1\", \"comment\": \"We thank the reviewer for their review. The reviewer notes the need to emphasize how and why to use this approach. In the new revision, we have added a discussion section to make a case for this. We will publish the code to compute conductance after the blind-review phase.\\n\\nThe reviewer also mentioned that the paper doesn\\u2019t compare directly against various attribution methods. For this, we refer the reviewer to our response for the comment by anonymous.\"}", "{\"title\": \"Response to reviewer 2\", \"comment\": \"We thank the reviewer for a detailed review. We agree with the reviewer that a uniqueness result based on the axioms is desirable, but we don\\u2019t have it. While we\\u2019re able to show that the paths at the input and at the hidden layer must be coupled (i.e. non-oblivious), we just don\\u2019t understand the space of non-oblivious methods that well. Mathematically, we don\\u2019t have a handle on how the path at the hidden layer can vary as the network below the hidden layer is changed. Partition consistency is the only axiom about the network below the hidden layer, but it is not applicable to all networks. We probably need another axiom to prove uniqueness.\\n\\nAnother key observation made by the reviewer is the interpretability vs importance of neurons. 
While those are not the same, we demonstrate that conductance can give us some insights about the network (Sections 5.1 and 6.1).\"}", "{\"title\": \"Response to reviewer 3\", \"comment\": \"We thank the reviewer for a detailed review. We have implemented the suggestions for improving clarity.\n\nRegarding our use of \u2018axioms\u2019: We follow the economics literature in using axioms as normative concepts, i.e., to denote desirable properties that a neuron importance method should satisfy. This differs from the use in the mathematical literature, which is to denote statements that are self-evidently true. We have clarified this in the submission.\"}", "{\"title\": \"Clarification on baselines\", \"comment\": \"We thank the anonymous commentator and reviewer #3 for the critiques about benchmarks.\n\nWe respond to the two types of issues with our comparisons:\n\n(a) Comparing against other methods: We chose our benchmarks to be published techniques of neuron importance. In this sense, we certainly make no effort to cherry pick. However, we do agree with the observation that methods such as LRP, DeepLift, and DeepShap could be used as measures of neuron importance, though they have not been proposed for this purpose.\n\nTo partially address this, we performed a new theoretical analysis of DeepShap/LRP (see Appendix 8.2 and Section 4.1). We find that the importance measures from these methods have an intuitively undesirable dependence on the \u201cimplementation\u201d of the network. That is, you can vary the network architecture in a way that the neuron computes the very same function, but the feature importance changes. \n\n(b) Evaluating on different tasks: First, we should point out that we do have a quantitative analysis on both the text task and the image one (see Sections 5.2 and 6.2). We also have qualitative insights on both tasks (Sections 5.1 and 6.1). It is also true that Sections 6.1 and 6.2 are not quite the same task. But this was mostly to make a quantitative evaluation of feature selection possible. Finally, note that we picked two large-scale, practically applied networks. Not MNIST and not some toy task.\n\nThat said, we do agree with Reviewer #3 that the empirical evaluations are not strong proof of generality. Hopefully our theoretical arguments compensate for this. As we all know, there is no natural ground truth for judging attribution or neuron importance. So almost every empirical evaluation of attribution has some issue. But all in all, we do agree there is room for more empirical evaluation.\"}", "{\"title\": \"Concerns about related works and baselines.\", \"comment\": \"From what I understand, SHAP/DeepSHAP and Layerwise Relevance Propagation give some form of importance measure of the input features. The experiments described in Sections 5 and 6 are designed to study how importance measures of the hidden neurons capture information such as class selectivity or negation selectivity. I believe this is why the authors did not compare with SHAP/DeepSHAP and related methods. They could be compared, however, by considering the hidden neuron activations as the input of the network. This would also be an important baseline, further showing that ignoring information prior to the selected hidden neurons is detrimental for the importance measure, assuming that their method would still have the best results.\n\nMy main concern with the experiments, however, is more about the limitations of the setups than about the limitations of baselines, as I explained in my review. 
Results on only two different problems with a single architecture for each are not strong proof of the generality of the measure. Each problem and architecture are very different, which could help show the generality, but the analyses are different in nature; one is quantitative while the other one is qualitative, which makes them hardly comparable.\"}", "{\"comment\": \"Echoing a comment made by reviewer 3: in the first line of the paper the authors mention several methods that can be applied to explain a network\u2019s predictions in terms of its inputs. Almost all of these methods can be applied to study neuron-level attribution and many satisfy conservation, including SHAP/DeepSHAP and Layerwise Relevance Propagation. The authors even mention that Layerwise Relevance Propagation satisfies the conservation principle in a footnote on page 5, but they do not compare to it. The baselines are thus very cherry-picked. If the authors wish to make a theoretical argument for their method, that is fine, but selectively leaving out some of the strongest methods in the benchmarking is very misleading because it gives the impression that the methods in the benchmark are the only methods that could be used for this purpose. SHAP/DeepSHAP in particular are quite widely used and should be compared to.\", \"title\": \"Baselines are cherry-picked\"}", "{\"title\": \"An important measure of Neuron Importance\", \"review\": \"This paper presents a new method to measure the importance of hidden neurons in deep neural networks. The method integrates notions of activation value, input influence to a neuron, and neuron influence to the network's output. They provide results confirming that this measure is able to identify neurons that are important for specific tasks.\n\nQuality\n\nThe experiments are well designed to verify their hypothesis, although there could be more to make sure those results are not particular to the few selected problems. Nevertheless, the results are consistent across those experiments.\n\nClarity\n\nThe text is well written in general, but the structure could be improved. The introduction contains too much related work, which should be moved to a separate section. Section 2 contains mostly high-level explanations of the work, which should be integrated in the introduction, and thus before the related work section, to improve readability. See minor comments for more specific suggestions.\n\nIt is difficult to understand the goal of Section 4.2. Section 2 states that section 4.2 proves that a \"path method\" must be used in order to satisfy the axioms, but why such axioms are important is not stressed enough. Also, it is not clear why those are called axioms since they are not used to build anything else. It seems to me that those are rather \"desirable properties\" than axioms.\n\nOriginality\n\nAn important number of related works are cited and compared with the current work. Although the proposed measure is close to what is proposed by Datta et al., this paper makes the distinction clear and benchmarks its results properly against it.\n\nSignificance\n\nThere is an increasing need for interpretability of deep neural networks as they get more and more applied to real-world problems. Measures such as the one proposed in this paper are a very important building block towards this.\n\nConclusion\n\nFor its original importance measure and the proper experiment benchmarks, I believe this paper should be accepted. 
There are, however, many minor issues that should be fixed for the camera-ready version. Although the recommended length is 8 pages, the strict limit is 10, so I would recommend using a bit of the remaining extra space to conclude the paper properly with a discussion on the results and their consequences, as well as a conclusion to wrap up the paper.\n\n***\n\nMinor Comments\", \"introduction\": [\"The term flow is never defined precisely; we need to infer it based on the definition of conductance and attribution.\", \"The first paragraph would be clearer with a simple word explanation rather than maths. Also, the second sentence is not a complete sentence.\", \"Work on image indicators of importance could be compared better with the current work. Indicators can be seen as a measure of importance.\", \"This sentence is not clear: \\\"[...]; the nature of correlations in the two models may differ\\\".\"], \"section_2\": [\"The last paragraph of section 2 can be true for any well-performing importance measure. The statements should be put in perspective with others.\"], \"section_3\": [\"Section 3 should be introduced by explaining the goal of the section, otherwise it breaks the flow of reading.\", \"The role of the baseline x' should be better explained when it is presented (first paragraph of section 3).\", \"The interchangeable use of the term \\\"conductance of neuron j\\\" for equations 2 and 3 is confusing. Different terms should be used, even if the context makes it possible to infer which one is being referred to.\", \"Remark 1 seems trivial, but the selection of baseline x' seems less trivial. Some explanations should be devoted to it.\", \"The second paragraph of remark 1 is not clear. Why couldn't we take another layer's neuron as the neuron of interest, bounding the conductance measure on one layer as the input and the output of the model? If we make the input to be a neuron y rather than the true input x, we could take another neuron z in a subsequent layer to be the neuron of interest, resulting in the conductance measure Cond^z_i(y).\"], \"section_4\": [\"The list of importance measures at the beginning of Section 4 should probably have citations.\", \"The backward reference to section 3 seems to be a mistake; should it be subsection 4.2?\", \"Each of the justifications to get around the issue of distinguishing strange model behavior from a bad feature importance technique should be explained briefly in the paragraph before section 4.1.\", \"Subsection 4.1:\", \"I do not understand the problem explained in the fourth paragraph of section 4.1. g(f(1 - epsilon)) = 0, why would it be 1 - epsilon?\", \"The problem explained in the fifth paragraph of the section is not clear unless it is clearly stated what the influence of the unit is. Is it simply dg/df?\", \"A short explanation of what is tested in section 6 should be given in the last paragraph of section 4.1. Although the results are favorable to the conductance metric, it is not clear how they precisely confirm the problem of incorrect signs presented in the caricature examples.\", \"Subsection 4.2:\", \"As said in my main comments, I am not convinced by the use of the term Axiom. 
They are not used as building blocks; rather, they are desirable properties which the authors prove only \"path methods\" can satisfy.\", \"Footnote 2 on page 5 is difficult to read.\", \"Although the proof does not seem to use the axioms as a building block, which is fortunate since it would make it a circular argument otherwise, the text suggests so: \\\"Given these three axioms, we can show that:\\\".\", \"The importance of section 4.2 should be clarified. More emphasis on the importance of the axioms (desirable properties) should be made.\"], \"section_5\": [\"Choices for experiments should be explained. Why choose layers mixed** rather than others? Why choose filters?\", \"Figures 1-4 are difficult to interpret on a printed version. Since this is qualitative, I suggest changing the saturation of the images to make them easier to interpret. The absolute values are not important for a qualitative interpretation.\", \"Figure 4 could be more interesting if compared with other classes, like other animal faces. Anyhow, I understand that those were chosen based on the subset of classes used for the experiments.\", \"Space should be added between figures to better divide the captions.\"], \"section_6\": [\"The difference between the experiments of Figures 5 and 6 should be made clearer.\"], \"section_7_8\": [\"Where are they? No discussion? No conclusion?\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Requested minor clarifications.\", \"review\": \"The authors propose a notion of conductance to attribute a deep neural network's prediction to its hidden units. The conductance is the flow of attribution via the hidden unit(s) in consideration. The paper proposes using conductance not only to evaluate the importance of a hidden unit to the prediction for a specific input but also over a set of inputs. The strongest part of the analysis of conductance is that conductance naturally couples the path at the base features with that of the hidden layer.\n\nThe authors position their work well within the existing approaches in the community and generalize the efficient use of measuring hidden activations with respect to a specific input or set of inputs.\n\nThe analysis makes efficient use of the mean value theorem in the context of parametrization of the loss function.\n\nConductance seems to satisfy the completeness of hidden features. Further, it also satisfies the layer-wise conservation principle, with the outputs completely redistributed to the inputs.\n\nIt would be good to see more analysis on axioms 1 through 4 for the sake of completeness, in light of the partial axiomatization of conductance.\n\nThe authors provide empirical evaluation of conductance over a variety of tasks. It would be good to see some more insight in order to relate to the interpretability of the importance of neurons, although no claims have been made on it, as it is hard to measure importance without interpretability.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}", "{\"title\": \"Could use more motivation but it is a good concept.\", \"review\": \"The idea is nice. It is well aligned with tools that are needed to understand neural networks. 
However, the experiments feel like they are missing motivation as to why this method is being used. The paper does not provide very significant evidence that this method is useful. The negation example is nice, but this doesn't seem to display the potential power of the method to understand a neural network.\n\nMore motivation for the experimental section is needed. If the authors don't discuss a motivation, then how will a reader know how to apply the tool? It seems there is no conclusion to take away from the experiments in section 5 (convolutions). \n\nThe authors should rethink the structure of the experimental section from the standpoint of convincing someone to use this method. In section 4.1 the authors have a good discussion on what is wrong with other methods in order to motivate their approach, but then they don't deliver significant evidence in the later part of the section.\n\nThe paper needs more discussion and experiments to explain how and why to use this approach. \n\nWhile the authors say \"attributing a deep network's prediction to its input is well-studied\", they don't compare directly against these methods. \n\nThere are many typos and grammar errors.\n\nWhile I think the paper could be much more impactful if the experimental section were greatly reworked, I believe the first 5 pages of the paper are a very good contribution and it should be accepted.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Clarification on the missing citation\", \"comment\": \"First, a word of caution for the reviewers: looking up the references mentioned below, or in Avanti's remark or what follows, will implicitly violate review blindness.\n\nAvanti's comment is partly about a prior version of this work on arXiv. That prior version contained a comment about the inefficiency of computing all conductances in a given layer. Our implementation in TensorFlow involved adding several gradient operators for this. Any implementation of that requires either doing a sequence of backprop/gradient operations (time inefficiency) or computing Jacobians (space inefficiency). The work by Shrikumar, Su & Kundaje did not address this inefficiency. Instead, that work proposed a method similar to Remark 1 in our current submission. We will add a citation to that effect. However, as explained in Remark 1, it lacks an analogue to Equation 2.\"}", "{\"comment\": \"\\\"Remark 1\\\" in this paper, which states that a different and more computationally efficient generalization of Integrated Gradients is possible, is directly related to the paper \\\"Computationally Efficient Measures of Internal Neuron Importance\\\" by Shrikumar, Su & Kundaje, published on arXiv on 26th July, 2018 (https://arxiv.org/abs/1807.09946). However, the paper by Shrikumar, Su & Kundaje is not cited. The original version of this paper appeared on arXiv on 30th May, 2018 and did not contain Remark 1. It also contained the statement that \\\"computing conductance in tensorflow involved adding several gradient operators and didn't scale very well\\\" in the context of calculating Total Conductance. That statement served as motivation for the work by Shrikumar, Su & Kundaje, and is absent from this version of the work. 
We therefore request that the authors add a citation to the work by Shrikumar, Su & Kundaje.\", \"title\": \"Missing citation of \\\"Computationally Efficient Measures of Internal Neuron Importance\\\" by Shrikumar, Su & Kundaje\"}" ] }
BJeOioA9Y7
Knowledge Flow: Improve Upon Your Teachers
[ "Iou-Jen Liu", "Jian Peng", "Alexander Schwing" ]
A zoo of deep nets is available these days for almost any given task, and it is increasingly unclear which net to start with when addressing a new task, or which net to use as an initialization for fine-tuning a new model. To address this issue, in this paper, we develop knowledge flow which moves ‘knowledge’ from multiple deep nets, referred to as teachers, to a new deep net model, called the student. The structure of the teachers and the student can differ arbitrarily and they can be trained on entirely different tasks with different output spaces too. Upon training with knowledge flow the student is independent of the teachers. We demonstrate our approach on a variety of supervised and reinforcement learning tasks, outperforming fine-tuning and other ‘knowledge exchange’ methods.
[ "Transfer Learning", "Reinforcement Learning" ]
https://openreview.net/pdf?id=BJeOioA9Y7
https://openreview.net/forum?id=BJeOioA9Y7
ICLR.cc/2019/Conference
2019
{ "note_id": [ "rJetJYMplr", "r1g_pSjGg4", "r1liuBG50X", "BJlDEqbKA7", "ByeCKHTOA7", "r1lPW7iI07", "SkgAysOIRm", "rklBpc_LC7", "Bkl3Lcd80m", "BkgiFx9nnQ", "rkgVy1xs2m", "Byx-vLP5hQ" ], "note_type": [ "comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1562351841120, 1544889792504, 1543279987398, 1543211566874, 1543193989602, 1543054078568, 1543043814229, 1543043773484, 1543043668419, 1541345410890, 1541238492384, 1541203544616 ], "note_signatures": [ [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper638/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper638/Authors" ], [ "ICLR.cc/2019/Conference/Paper638/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper638/Authors" ], [ "ICLR.cc/2019/Conference/Paper638/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper638/Authors" ], [ "ICLR.cc/2019/Conference/Paper638/Authors" ], [ "ICLR.cc/2019/Conference/Paper638/Authors" ], [ "ICLR.cc/2019/Conference/Paper638/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper638/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper638/AnonReviewer3" ] ], "structured_content_str": [ "{\"comment\": \"Hi, authors,\n\nThanks for this interesting work. I have a question about the size of matrix Q. One Q's dimension is (the size of a teacher's feature map) x (the size of the student's feature map). For ImageNet, one intermediate feature map might be 512x14x14 = 100352. Does Q become a 100352 x 100352 matrix? If so, it would be too large to run.\n\nIn addition, would you mind releasing the code?\n\nBest Regards,\", \"title\": \"The additional memory cost of large deep networks.\"}", "{\"metareview\": \"The authors have taken inspiration from recent publications that demonstrate transfer learning over sequential RL tasks and have proposed a method that trains individual learners from experts using layerwise connections, gradually forcing the features to distill into the student with a hard-coded annealing of coefficients. The authors have done thorough experiments and the value of the approach seems clear, especially compared against progressive nets and pathnets. The paper is well-written and interesting, and the approach is novel. The reviewers have discussed the paper in detail and agree, with the AC, that it should be accepted.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"meta-review\"}", "{\"title\": \"Response to AnonReviewer1:\", \"comment\": \"Thanks a lot for additional time and feedback.\", \"re_4\": \"We added the comment regarding space complexity to Sec. 3.2 of the main paper.\", \"re_6\": \"We moved the detailed treatment of related work to the appendix and provide a shortened version in the main paper. We also moved Fig. 7 and the corresponding text to Sec. 4.2 of the main paper.\"}", "{\"title\": \"Nice revision!\", \"comment\": \"The new additions to the paper are very welcome, and definitely make the paper stronger in my opinion.\", \"re_4\": \"I recommend the authors include this statement somewhere in the paper/appendix.\", \"re_6\": \"If the authors feel paragraphs 2 & 3 are critical to motivate this work, then I still think you can instead shorten the parts of the related work section to make the information there less redundant. 
In my opinion, the main text would be better if you made space for Fig 7 from the appendix and the relevant text description, by reducing the redundancy in the descriptions of the alternative methods and their shortcomings.\"}", "{\"title\": \"Response to AnonReviewer2:\", \"comment\": \"Thanks a lot for additional time, feedback and clarifications.\", \"re_1\": \"Multi-task learning.\nNote that the challenge we address differs from multi-task learning. In multi-task learning, multiple tasks are addressed at the same time. In contrast, `Knowledge Flow\u2019 focuses on a single task. Hence, what is common to multi-task learning and `Knowledge Flow\u2019 is a transfer of information. However, in multi-task learning, information extracted from different tasks is shared to boost performance, while, in `Knowledge Flow,\u2019 the information of multiple teachers is leveraged to help a student better learn a single, new, previously unseen task. We updated Section 5 to clarify the connection and differences.\", \"re_2\": \"Notation of Section 2.\nWe follow the notation of Mnih et al. (2016), i.e., the expectation is taken with respect to a trajectory \\tau = ({x_t, a_t, r_t}, {x_{t+1}, a_{t+1}, r_{t+1}}, ...) generated by following the policy \\pi. We clarified this and updated Sections 2 and 3.\"}", "{\"title\": \"Multiple-task learning for DNN\", \"comment\": \"2 and 3 are the same.\n\nMultiple-task learning approaches are rife in this area (see e.g. https://en.wikipedia.org/wiki/Multi-task_learning, and citations therein). This huge body of work establishes that using a proper regularisation scheme is central. The intuition in the present paper seems to align with those ideas. But since those are so standard by now, the authors can be expected to make the connection explicit. Note that the idea of 'lifelong learning' as cited does acknowledge this connection.\n\nMulti-task learning for DNNs is a standard theme (especially in this conference), and it is not clarified how this work relates to/improves over this body of work. One way to address this issue is to report empirical results on a standard benchmark (such as MNIST).\n\nThe introductory text (Ch. 2) is not quite correct (especially the RL needs care), but can be patched up by citing relevant introductory texts (what is random, etc.) and adhering to their notation.\"}", "{\"title\": \"Response to AnonReviewer2:\", \"comment\": \"We thank the reviewer for time and feedback. We think the questions aren\u2019t precise enough for us to act upon:\n1. We\u2019d appreciate it if the reviewer could point out the parts that are, in the reviewer\u2019s opinion, `semi-formal\u2019. We are more than happy to revise the text but are currently left guessing, particularly since another reviewer points out that the paper is `well written.\u2019 \n2. We compare to recent baselines, in particular state-of-the-art methods like PNN and PathNet. If the reviewer would specify which papers from before 2009 we should compare to, we are very happy to include a statement, assuming that PNN and/or PathNet or their predecessors haven\u2019t compared to those already. \n3. 
To the best of our knowledge, the two baselines (PNN and PathNet) we compare with are the state-of-the-art RL transfer frameworks.\"}", "{\"title\": \"Response to AnonReviewer1:\", \"comment\": \"Updated: Changed section numbers to fit latest revision.\n---------------------------------------------------------------------------\nWe thank the reviewer for time and feedback.\", \"re_1\": \"plot p_w values for C10/C100 dataset.\nIn the newly added Fig. 4 and the corresponding discussion (Sec. 4.2), we plot the weights (p_w) for teachers and the student in the C10/C100 experiment, where C100 and SVHN experts are teachers. As expected and intuitively, the C100 teacher should have a higher p_w value than the SVHN-based teacher, because C100 is more relevant to C10. The plot verifies this intuition: p_w of the C100 teacher is higher than that of the SVHN teacher during the entire training. Both teachers\u2019 normalized weights approach zero at the end of training.\", \"re_2\": \"verify Knowledge Flow is not just NAS.\nAs the reviewer pointed out, one key difference between NAS and Knowledge Flow is that a student in Knowledge Flow benefits from teachers\u2019 knowledge. To verify that the student really benefits from the knowledge of teachers, we conduct the ablation study suggested by the reviewer. In the newly added experiment, discussed in Sec. 7.3.1 and summarized in Fig. 8, teachers are models that haven\u2019t been trained at all. Intuitively, learning with untrained teachers should have worse performance than learning with knowledgeable teachers. Our experiments verify this intuition. Considering Fig. 8 (a), where the target task is Hero, learning with untrained teachers achieves an average reward of 15934, while learning with knowledgeable teachers (experts of Seaquest and Riverraid) achieves an average reward of 30928. Consistent with all other experiments, we average over five runs. More results are presented in Fig. 8 (b, c). The results show that Knowledge Flow achieves higher rewards than NAS in different environments and teacher-student settings.\", \"re_3\": \"training teacher networks jointly.\nWe did try to train teachers jointly with students. However, as the reviewer mentioned, the memory usage is large and training is very slow. Up until now we didn\u2019t observe any improvements.\", \"re_4\": \"memory requirement for matrices Q.\nThe upper bound on the number of Q matrices in our framework is O(L*M*T). In practice, we don\u2019t link a student\u2019s layer to every layer of a teacher network. For example, we observed that linking a teacher\u2019s bottom layer to a student\u2019s top layer generally doesn\u2019t yield improvements. Intuitively, a teacher\u2019s bottom layer features are very likely irrelevant to a student\u2019s top layer features. Therefore, in practice, we recommend linking one teacher layer to one or two student layers, in which case the space complexity is O(L*T).\", \"re_5\": \"Captions of Table 1 and Table 2.\nWe updated the captions of Table 1 and Table 2.\", \"re_6\": \"Shorten paragraph 2 and paragraph 3.\nWe felt shortening paragraphs 2 and 3 would remove the motivation of this work. Shortening the related work section wouldn\u2019t do justice to our peers. 
Therefore, at this point we prefer to maintain the current writing unless the majority of the reviewers and the AC feel strongly about shortening.\"}", "{\"title\": \"Response to AnonReviewer3:\", \"comment\": \"Updated: Changed section numbers to fit latest revision.\n---------------------------------------------------------------------------\nWe thank the reviewer for time and feedback.\", \"re_1\": \"Use teachers with different architectures from the student.\nIn additional experiments, following the suggestion of the reviewer, we use architectures for the teachers which differ from the student model. The results are summarized in Fig. 10 and discussed in Sec. 7.4. We observed learning with teachers whose architecture differs from the student\u2019s to have similar performance to learning with teachers which have the same architecture. Consider Fig. 10 (a) as an example, where the target task is KungFu Master, and the teachers are experts for Seaquest and Riverraid. At the end of training, learning with teachers of different architectures achieves an average reward of 37520, and learning with teachers of the same architecture achieves an average reward of 35012. More results are shown in Figs. 10 (b, c). The results illustrate that Knowledge Flow can benefit from the knowledge of teachers, and thus achieve higher rewards, even if the teacher and student architectures differ.\", \"re_2\": \"Importance of KL term.\nThe KL term prevents the student\u2019s output distribution over actions or labels from changing too much when the teachers\u2019 influence is decreasing. To investigate the importance of the KL term, we conduct an ablation study where the KL coefficient (\\lambda_2) is set to zero. The results are summarized in Fig. 9 and discussed in Sec. 7.3.2. Consider Fig. 9 (a) as an example, where the target task is MsPacman and the teachers are Riverraid and Seaquest experts. Without the KL term, the rewards drop drastically when the teachers\u2019 influence decreases. In contrast, we don\u2019t observe this performance drop with a KL term. At the end of training, learning with a KL term achieves an average reward of 2907 and learning without the KL term achieves an average reward of 1215. More results are presented in Figs. 9 (b, c), which show that training with the KL term achieves higher reward than training without the KL term.\", \"re_3\": \"Use of an average network as \\theta_{old}.\nAn average network, i.e., exponential averaging, can be used to obtain \\theta_{old}. To investigate how usage of an average network for \\theta_{old} affects the performance, we conduct an experiment setting \\theta_{old} to be the exponential running average of the model weights. More specifically, \\theta_{old} is updated as follows: \\theta_{old} \\leftarrow \\alpha * \\theta_{old} + (1 - \\alpha) * \\theta, where \\alpha = 0.9. The results are summarized in Fig. 11 and discussed in Sec. 7.5. We observed that using an exponential average to compute \\theta_{old} results in very similar performance to using a single model. Consider Fig. 11 (a) as an example, where the target task is Boxing and the teacher is a Riverraid expert. At the end of training, computing \\theta_{old} via an exponential average achieves an average reward of 96.2 and using a single parameter to set \\theta_{old} achieves an average reward of 96.0. More results on using an exponential average to compute \\theta_{old} are shown in Figs. 
\"}", "{\"title\": \"Multiple task learning for NNs\", \"review\": \"This paper proposes a new set of heuristics for learning a NN for generalising a set of NNs trained for more specific tasks. This particular recipe might be reasonable, but the semi-formal flavour is distracting. The issue of model selection (clearly the main issue here) is not addressed. A quite severe issue with this report is that the authors don't report relevant learning results from before (+-) 2009, and empirical comparisons are only given w.r.t. other recent heuristics. This makes it impossible for me to advise publication as is.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Intriguing idea, strong performance, but missing empirical results to validate intuition\", \"review\": \"This paper proposes to feed the representations of various external \\\"teacher\\\" neural networks of a particular example as inputs to various layers of a student network.\\nThe idea is quite intriguing and performs very well empirically, and the paper is also well written. While I view the performance experiments as extremely thorough, I believe the paper could possibly use some additional ablation-style experiments just to verify the method actually operates as one intuitively thinks it should.\", \"other_comments\": [\"Did you verify that in Table 3, the p_w values for the teachers trained on the more-relevant C10/C100 dataset are higher than the p_w value for the teacher trained on the SVHN data? It would be interesting to see the plots of these p_w over the course of training (similar to Fig 1c) to verify this method actually operates as one intuitively believes it should.\", \"Integrating the teacher-network representations into various hidden layers of the student network might also be considered some form of neural architecture search (NAS) (by including parts of the teacher network into the student architecture).\"], \"see_for_example_the_darts_paper\": \"https://arxiv.org/abs/1806.09055\\nwhich similarly employs mixtures of potential connections. \\nUnder this NAS perspective, the dependence loss subsequently distills the optimal architecture network back into the student network architecture.\\n\\nHave you verified that this method is not just doing NAS, by for example, providing a small student network with a few teacher networks that haven't been trained at all? (i.e. should not permit any knowledge flow)\\n\\n- Have the authors considered training the teacher networks jointly with the student? This could be viewed as teachers learning how to improve their knowledge flow (although might require large amounts of memory depending on the size of the teacher networks).\\n\\n- Suppose we have an L-layer student network and T M-layer teacher networks.\\nDoes this imply we have to consider O(L*M*T) additional weight matrices Q?\\nCan you comment on the memory requirements?\\n\\n- The teacher-student setup should be made clearer in the captions of Tables 1 and 2 (took me some time to comprehend).\\n\\n- The second and third paragraphs are redundant given the Related Work section that appears later on.
I would like to see these redundancies minimized and the freed-up space used to include more results from the Appendix in the main text.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"interesting approach in combining multiple trained models for transfer\", \"review\": \"This paper presents a method for distilling multiple teacher networks into a student, by linearly combining feature representations from all networks at multiple intermediate layers, and gradually forcing the student to \\\"take over\\\" the learned combination. Networks to be used as teachers are first pretrained on various initial tasks. A student network is then trained on a target task (possibly different from any teacher task), by combining corresponding hidden layers from each teacher using learned linear remappings and weighted combinations. Learning this combination allows the system to find appropriate teachers for the target task; eventually, a penalty on the combination weights forces all weight onto the student network, resulting in the distillation.\\n\\nApplications to both reinforcement learning (Atari games) and supervised image classification (CIFAR, SVHN) are evaluated. The reinforcement learning application is particularly fitting, since combining tasks together is less straightforward in this domain.\\n\\nI wonder whether any experiments were performed where the layer correspondence between teacher models was less clear --- say, using teachers with different architectures. Figure 1(a) (different teacher archs) as well as the text (\\\"candidate set\\\" on p.4) indicate this is possible, but experiment details describe combinations of same-architecture teachers only.\\n\\nIn addition, I would have liked to see some further exploration of the KL term and use of \\\"theta_old\\\". This seems potentially important, and also has ties to self-ensembling through teachers with exponential weight averaging. Could an average network also be used here? And how important is this term in linking student to teachers as the weights change?\\n\\nOverall I find this a very interesting approach. Rather than training a large joint model on multiple tasks simultaneously as a transfer initialization, this approach uses models already fully trained for different tasks. This results in a potentially advantageous trade-off: One no longer needs to carefully calibrate the different tasks and common task components in a joint model, but at the cost of requiring inference through multiple teachers when training the student.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
HkxOoiAcYX
Estimating Information Flow in DNNs
[ "Ziv Goldfeld", "Ewout van den Berg", "Kristjan Greenewald", "Brian Kingsbury", "Igor Melnyk", "Nam Nguyen", "Yury Polyanskiy" ]
We study the evolution of internal representations during deep neural network (DNN) training, aiming to demystify the compression aspect of the information bottleneck theory. The theory suggests that DNN training comprises a rapid fitting phase followed by a slower compression phase, in which the mutual information I(X;T) between the input X and internal representations T decreases. Several papers observe compression of estimated mutual information on different DNN models, but the true I(X;T) over these networks is provably either constant (discrete X) or infinite (continuous X). This work explains the discrepancy between theory and experiments, and clarifies what was actually measured by these past works. To this end, we introduce an auxiliary (noisy) DNN framework for which I(X;T) is a meaningful quantity that depends on the network's parameters. This noisy framework is shown to be a good proxy for the original (deterministic) DNN both in terms of performance and the learned representations. We then develop a rigorous estimator for I(X;T) in noisy DNNs and observe compression in various models. By relating I(X;T) in the noisy DNN to an information-theoretic communication problem, we show that compression is driven by the progressive clustering of hidden representations of inputs from the same class. Several methods to directly monitor clustering of hidden representations, both in noisy and deterministic DNNs, are used to show that meaningful clusters form in the T space. Finally, we return to the estimator of I(X;T) employed in past works, and demonstrate that while it fails to capture the true (vacuous) mutual information, it does serve as a measure for clustering. This clarifies the past observations of compression and isolates the geometric clustering of hidden representations as the true phenomenon of interest.
[ "information theory", "representation learning", "deep learning", "differential entropy estimation" ]
https://openreview.net/pdf?id=HkxOoiAcYX
https://openreview.net/forum?id=HkxOoiAcYX
ICLR.cc/2019/Conference
2019
{ "note_id": [ "H1xYYj_7xV", "S1xPxKclRm", "SyxZ8-zo6X", "HyxKvDU7TX", "r1es6aSXa7", "SkgUq6SXaX", "SylcZTH767", "S1ev16BmTm", "Hyxq33SmTm", "rylnRsBQT7", "HJxX7cGnh7", "Syx3uzKGnQ", "HJenP8pmjm" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544944512920, 1542658286681, 1542295880762, 1541789537003, 1541787074998, 1541787022296, 1541786882205, 1541786847487, 1541786802348, 1541786579954, 1541315099030, 1540686451800, 1539720803650 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper636/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper636/Authors" ], [ "ICLR.cc/2019/Conference/Paper636/Authors" ], [ "ICLR.cc/2019/Conference/Paper636/Authors" ], [ "ICLR.cc/2019/Conference/Paper636/Authors" ], [ "ICLR.cc/2019/Conference/Paper636/Authors" ], [ "ICLR.cc/2019/Conference/Paper636/Authors" ], [ "ICLR.cc/2019/Conference/Paper636/Authors" ], [ "ICLR.cc/2019/Conference/Paper636/Authors" ], [ "ICLR.cc/2019/Conference/Paper636/Authors" ], [ "ICLR.cc/2019/Conference/Paper636/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper636/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper636/AnonReviewer3" ] ], "structured_content_str": [ "{\"metareview\": \"This paper studies the compression aspect of the information bottleneck. It seeks to clarify a debate about the evolution of mutual information between inputs and representations during training in neural networks. The paper discusses numerous ideas and techniques and arrives at valuable conclusions.\\n\\nA concern is that parts of the paper (theoretical parts) are intended for a separate paper, and are included in the paper only for reference. This means that the actual contribution of the present paper is mostly on the experimental part. Nonetheless, the discussion derived from the theory and experiments seem valuable in the ongoing discussion of this topic. In any case, I encourage the authors to make efforts to obtain a transparent separation of the different pieces of work. \\n\\nA concern was raised that the current paper mainly addresses a discussion that originated in a paper that has not passed peer review. On the other hand, this discussion does occupy many researchers and justifies the analysis, even if the originating paper has not been published in a peer reviewed format. \\n\\nAll reviewers are confident in their assessment. Two of them regard the paper positively and one of them regards the paper as ok, but not good enough, with main criticism in relation to the points discussed above. \\n\\nAlthough the paper is in any case very good, unfortunately it does not reach the very high bar for acceptance at this ICLR.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Contributes to resolving debate on compression in neural networks\"}", "{\"title\": \"Uploaded revision 3\", \"comment\": \"We are grateful to Reviewer 1 for the feedback on our second revision of the paper. 
Based on the concerns that R1 has raised about Remark 1, we have slightly expanded it to more closely follow the explanation we included in our response to R1.\\n\\nWe look forward to feedback from the other reviewers on our revision, and any additional recommendations that R1 might have.\"}", "{\"title\": \"Uploaded revision 2\", \"comment\": \"We have uploaded a new revision of our paper that addresses the concerns raised by the reviewers. To make it clear what has changed, new text is highlighted in blue. The highlighting will, of course, be removed in the final revision of the paper.\\nTo be specific, we have added material in response to the following concerns:\\n1. R2 asked about the apparent disconnect between compression and generalization performance (specifically loss on the test set) seen in Fig. 5(a) and (b). We discuss this issue on page 7.\\n2. R1 asked whether Theorem 1 can be used to set the number of samples used in the sample propagation estimator of I(X;T) and R3 asked how the noise variance \\\\beta^2 is chosen. We discuss these issues in Remark 1 on page 5.\\n3. R1 was concerned that the sample-propagation estimator is simply Monte Carlo integration. We have added text clarifying the distinction between the sample-propagation estimator, which casts the estimation of differential entropy in our noisy DNNs as estimation of the differential entropy of a known Gaussian mixture model, and Monte Carlo integration, which is used to numerically evaluate the differential entropy of the GMM, on pages 4 and 5.\\n4. R1 suggested that we \\\"include a short section before the experiments stating their hypothesis and pointing to the experiment/figure number supporting their hypothesis.\\\" Instead, we have added a bit more text at the beginning of Section 5 (page 6) discussing the goals of our experiments, and added a summary of our findings like the one R1 suggested at the end of Section 5 (page 9).\\n5. We replaced Fig. 4(d) with one that shows I(X;T) as a function of epoch (instead of weight) and different values of \\\\beta, per R3's request.\\n6. Unless one of the reviewers or the AC objects, we currently plan to leave Supplement 10 intact, except that we will add a non-anonymized citation of our theory paper.\\n\\nThe additional material has increased the length of the main paper, excluding references, from 8 pages to 9, which is still within the ICLR page limits.\\n\\nWe thank the reviewers for their helpful feedback, and we hope that they will find that the revised paper addresses their concerns.\"}", "{\"title\": \"New revision\", \"comment\": \"We've uploaded a revision that changes Fig. 4(d) to show mutual information as a function of epochs, per Reviewer 3's request. We are working on additional revisions recommended by the reviewers, and will upload another revision the week of Nov. 12.\"}", "{\"title\": \"Response to Reviewer 3 (part 2)\", \"comment\": \"\\\"I think Section 3 and Theorem 1 are interesting and insightful. But I notice that in Section 10 you mentioned that this will be a separate paper. Is it OK to put them together in this paper?\\\"\\n\\nWe think the reviewer is asking why we did not include the full proof of Theorem 1 and related theory in the ICLR submission. The answer is that including all of the theory and the empirical work would require nearly 30 pages, while the ICLR page limit is 10 pages. We thus had to split the theoretical work and empirical work into two papers.
In the interest of transparency, we included the key parts of the theoretical work in the supplement and explained that there is a parallel paper (in review) on the theory. Note, however, that Section 3 and Theorem 1 will remain in the final version of the ICLR paper (if it is accepted). Our original plan was to omit Section 10 of the current supplement, replacing it with a non-anonymized citation of the companion paper. However, if the reviewer thinks that it would be better to keep Section 10 in the final version, we are happy to comply with that suggestion.\\n\\n\\\"The paper by (Shwartz-Ziv & Tishby '17) has not passed a peer-review process and it is still a preprint. This paper does little more than point out some deficiencies of (Shwartz-Ziv & Tishby '17) (except Section 3 and Theorem 1, which I think should be an independent paper). I think such a paper should not be published as a conference paper before (Shwartz-Ziv & Tishby '17) passes a peer-review process.\\\"\\n\\n\\nAccording to Google Scholar, (Shwartz-Ziv & Tishby, 2017) has 178 citations. Naftali Tishby's lecture from the \\\"Deep Learning: Theory, Algorithms, and Applications\\\" workshop held in June 2017 in Berlin has over 70,000 views on YouTube. This work, even if it has not appeared in a peer-reviewed venue, has received plenty of attention in the deep learning community. It is therefore an appropriate subject for other scholarly work.\\n\\n\\nIndeed, many of the issues we identify with the (Shwartz-Ziv & Tishby, 2017) analysis also appear in other *published* works that we cite, e.g. (Saxe et al., 2018, published at ICLR 2018). We focus our discussion on (Shwartz-Ziv & Tishby, 2017) because it is the most well-known and indeed was the first work in this area. However, we could have just as easily chosen one of the other peer-reviewed works we cite as the focus of our discussion.\"}", "{\"title\": \"Response to Reviewer 3 (part 1)\", \"comment\": \"\\\"However, how do you choose the noise level \\\\beta?\\\"\\n\\nIdeally, \\\\beta should be treated as a hyperparameter that is selected to optimize the performance of the classifier on held-out data, much in the way that hyperparameters such as dropout rate are tuned to optimize held-out performance. In practice, we sometimes had to back off from the \\\\beta value that optimizes performance to a higher value to ensure accurate estimation of mutual information (the smaller \\\\beta is, the harder it becomes to estimate I(X;T_\\\\ell)), depending on factors such as the dimensionality of the layer being analyzed and the number of data samples available for a task. The bounds described in the Supplement were used to provide guidance on values of \\\\beta that the estimator can handle using a given number of samples.\\n\\n\\n\\\"If I understand correctly, the noise level plays a similar role to the binning size in (Shwartz-Ziv & Tishby '17). The noise level going to zero is similar to the binning size going to zero.\\\"\\n\\nThe noisy setting and binning-based estimation in deterministic DNNs are fundamentally different. Binning is a *method for* differential entropy (and thus mutual information) *estimation* that has no theoretical convergence guarantees that we are aware of when the bin size is fixed. In deterministic DNNs, I(X;T_\\\\ell)=H(X) is constant and any plot that shows otherwise shows a faulty estimate. While tweaking the bin sizes *changes the estimate* and the plot (see Fig. 1), the true mutual information remains constant (=H(X)).
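As a toy illustration of this point (our own sketch, not code from any of the papers discussed), the binned-entropy estimate H(Bin(T)) of a deterministic layer moves with the bin size even though the true I(X;T)=H(X) never changes:

```python
# Toy illustration: H(Bin(T)) depends on the bin size, while the true I(X;T)
# of the deterministic map T = tanh(5x) equals H(X) = log(n) nats throughout.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
x = rng.standard_normal(n)   # n (almost surely) distinct inputs
t = np.tanh(5.0 * x)         # deterministic hidden "layer"

for bin_size in (0.5, 0.1, 0.01, 0.001):
    _, counts = np.unique(np.floor(t / bin_size), return_counts=True)
    p = counts / n
    h_bin = -(p * np.log(p)).sum()
    print(f"bin={bin_size:<6} H(Bin(T))={h_bin:.2f}  true I(X;T)={np.log(n):.2f} nats")
```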
In contrast, the noise parameter \\beta affects the true mutual information in our noisy DNN (because the noise is part of the DNN's operation), as shown in Fig. 4(d). We emphasize that \\beta primarily affects the degree to which I(X;T_\\ell) is affected by the underlying clustering trend (observed compression is more pronounced for smaller \\beta, and disappears for very large \\beta). The main point here, however, is that trends shown in our mutual information plots track the true behavior of I(X;T_\\ell) in the noisy DNN (as suggested by our theoretical estimation risk bound), while binning-based estimation of I(X;T_\\ell) in a deterministic DNN produces plots in which the curves vary only due to estimation errors. We note that if the network from (Shwartz-Ziv & Tishby, 2017) had quantizers applied on the outputs of the neurons (which it does not), then the choice of the quantization gap (i.e., the bin size) would have been analogous to the \\beta parameter in our work.\\n\\n\\\"I wish to see a figure about how different \\\\beta affects the curve of I(X;T) (similar to Figure 1 but letting \\\\beta change).\\\"\\n\\nAt the end of Section 4 we explain how different \\\\beta values affect the observed relation between clustering and compression. Basically, for larger \\\\beta values the Gaussians at the output of the noisy neuron are indistinguishable to begin with, and consequently, clustering the internal representations has less effect on the mutual information. Thus, I(X;T_\\\\ell) is better for tracking clustering for smaller values of \\\\beta. The revised Fig. 4(d) visualizes the effect of \\\\beta on the mutual information in the minimal example, and Figs. 5(a) and 10(a) show the same tanh experiment for the noisy version of the DNN from (Shwartz-Ziv & Tishby, 2017) with different \\\\beta values (0.005 and 0.01, respectively). As claimed, smaller \\\\beta makes compression less pronounced.\\n\\n\\\"In Figure 4(d) there is a plot showing how different \\\\beta values affect the mutual information, but the x-axis is \\\"weight\\\". I wonder how the curve of mutual information changes w.r.t. \\\\beta if the x-axis is training epochs. Are your statements stable with respect to \\\\beta?\\\"\\n\\nThanks for the suggestion. We have updated Fig. 4(d) to have the x-axis be training epochs as requested. The results show the desired stability with respect to \\\\beta. We note that we originally presented the mutual information curve in Fig. 4(d) vs a growing weight parameter since, in this minimal example, the weight monotonically grows throughout training. Indeed, the original Fig. 4(d) and the revised one in the current version of the text look very much alike, up to some horizontal stretching of the curves.\"}", "{\"title\": \"Response to Reviewer 1 (part 3)\", \"comment\": \"\\\"Regarding the first experiment, could the authors clarify how per unit and per entire layer compression estimation differs?\\\"\\n\\nThe mechanics of the estimation are the same: in both cases we use the H(Bin(T_\\\\ell)) estimator. The difference is that in the entire layer case we are looking at the joint distribution P_{T_\\\\ell}, while in the per unit case we are looking at the marginal distribution P_{T_\\\\ell(k)}. In the case of the full layer, the output of each unit is discretized into two bins (as described in the caption of Fig.
7), while for the per-unit measurements we tested bins with widths in {10^-5, 10^-4, 10^-3, 10^-2, 0.1, 0.2, 0.3}, and found consistent results for bin sizes in [10^-4, 0.2].\\n\\nOne problem with the per unit computation is that we then have d_\\\\ell mutual information trajectories, one for each unit k\\\\in[1:d_\\\\ell], over the course of training that must be summarized. We summarize them by computing a linear regression that predicts I(X;T_\\\\ell(k)) from the training epoch, t, for each unit k, and then looking at the distribution of the slopes of the regressors. Because most of the slopes are negative, this shows a trend that I(X;T_\\\\ell(k)) decreases as t increases, which suggests that clustering is occurring.\\n\\nWhat we are most interested in is characterizing the clustering of samples in the representation computed by an entire layer. However, because differential entropy estimation has sample complexity exponential in dimension, we can only use I(X; T_\\\\ell) to characterize clustering for small numbers of hidden units. The single-dimension results are suggestive that clustering is occurring, even though we cannot show it on the full layer.\\n\\n\\n\\\"Also, in my opinion, more clustered representations seem to indicate that the mutual information with the output increases. Could the authors comment on how the noise levels in this particular version of a stochastic network affects the mutual information with the output and the clustering? Do more clustered representations lead to increased mutual information of the layer with the output?\\\"\\n\\nWe kindly request a clarification here as to whether the reviewer meant the `output' of the network or the target (true) label?\\n\\nWe ask this since we believe the mutual information between the hidden layer and the true label I(Y;T_\\\\ell) is more informative, while the DNN's output does not necessarily equal the true label. While the current paper focuses on studying the behavior of I(X:T_\\\\ell), we have a few comments regarding I(Y;T_\\\\ell). First, we think that larger values of I(Y;T_\\\\ell) are more related to having a good separation between the classes rather than to clustering itself. One way to see this is to note that for the last hidden T_{L-1}, I(Y;T_{L-1}) is essentially the cross-entropy loss. Studying I(Y;T_\\\\ell) is on our research agenda and, in fact, we have just begun to explore it.\\n\\n\\n\\\"I found it fairly difficult to summarize the experimental contribution after the first read. I think the presentation and summary after each experiment could be improved and made more reader friendly. For example, the authors could include a short section before the experiments stating their hypothesis and pointing to the experiment/figure number supporting their hypothesis.\\\"\\n\\nThanks for this helpful suggestion. We will revise our paper accordingly, while also trying to respect the ICLR page limit.\"}", "{\"title\": \"Response to Reviewer 1 (part 2)\", \"comment\": \"\\\"[U]nless I missed something, the proposed method for estimating MI for this Gaussian channel is just doing MC estimation...\\\"\\n\\nThe noisy neural network channel is not a Gaussian channel because it involves a composition of multiple layers of nonlinearities and Gaussian noises that the input signal has to traverse until it reaches layer \\\\ell. 
Writing T_\\ell=S_\\ell+Z_\\ell, with S_\\ell=f_\\ell(T_{\\ell-1}), this concatenation of nonlinear operations and Gaussians renders the distributions of S_\\ell (marginal or conditioned on X) extremely complicated. Not only can these distributions not be written out in an analytic form, they are even *extremely* hard to numerically evaluate at any given point. Our best mode of operation was therefore to treat P_{S_\\ell} and P_{S_\\ell|X} as unknown. However, the generative model of the DNN does permit us to efficiently sample from P_{S_\\ell} and P_{S_\\ell|X}, which brings us to the considered functional estimation problem: estimating the differential entropy h(P_{S_\\ell}\\\\ast\\\\gamma) based on i.i.d. samples from the *unknown* distribution P_{S_\\ell} and knowledge of the noise distribution \\\\gamma (this can be equivalently viewed as the estimation of the functional T_\\\\gamma(P_{S_\\\\ell})\\\\triangleq h(P_{S_\\\\ell}\\\\ast\\\\gamma) of the unknown P_{S_\\\\ell} based on i.i.d. samples from it).\\n\\nWe note that it is not possible to simply apply Monte Carlo integration to estimate the differential entropy h(Q) of an unknown distribution Q using only i.i.d. samples from Q: the MC integrator would also need to know Q itself. The crux of differential entropy estimation is to find a function of the samples alone that approximates h(Q). In our case of Q=P_{S_\\ell}\\\\ast\\\\gamma, the SP estimator uses the samples from P_{S_\\ell} and the known noise distribution to form a provably consistent estimate of the entropy that is expressed in terms of a d-dimensional integral. Because this integral cannot be evaluated in closed form, we use MC integration *merely to evaluate the integral*.\\n\\nWe clarify the full estimation process and the role of each component next:\\n\\n(i) Expand I(X;T_\\\\ell)=h(T_\\\\ell)-\\\\frac{1}{m}\\\\sum_{i=1}^m h(T_\\\\ell|X=x_i).\\n\\n(ii) Since T_\\\\ell=S_\\\\ell+Z_\\\\ell and S_\\\\ell and Z_\\\\ell are independent, the distribution of T_\\\\ell is P_{S_\\\\ell} \\\\ast \\\\gamma. We know \\\\gamma since the noise is injected by design, and we can sample from P_{S_\\\\ell} via the DNN's forward pass. Estimating I(X;T_\\\\ell) reduces to a new functional estimation problem: estimate h(A+B) given i.i.d. samples from A and knowing the distribution of B ~ N(0,\\\\beta^2 I_d).\\n\\n(iii) SP Estimator: Given i.i.d. samples from P_{S_\\\\ell}, let \\\\hat{P}_n be their empirical distribution. We estimate h(T_\\\\ell) by \\\\hat{h}_{SP}\\\\triangleq h(\\\\hat{P}_n \\\\ast \\\\gamma), which is computed only through the available resources: the samples and \\\\gamma.\\n\\n(iv) MC Integration: Since \\\\hat{P}_n is a discrete (known) distribution, \\\\hat{P}_n \\\\ast \\\\gamma is a *known* n-mode Gaussian mixture with centers at the samples, and \\\\hat{h}_{SP} equals the entropy of this mixture. This entropy (the aforementioned d-dimensional integral) has no closed-form expression, but since the Gaussian mixture is known (we know both \\\\hat{P}_n and \\\\gamma), we can efficiently compute its entropy by MC integration.\\n\\nWe hope this clarifies our two-step process (first estimation and then computation) and that the estimator is \\\\hat{h}_{SP}, and not the MC integrator. That this was unclear from the paper suggests the presentation might have been lacking; we will invest efforts in making the final version crystal clear.
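To make the two-step process concrete, here is a minimal NumPy sketch (our own illustration rather than the code used for the paper; the `sample_s` interface, the sample sizes, and the chunking are placeholder assumptions):

```python
# Step (iii): the SP estimate is the entropy of the known Gaussian mixture
# \hat{P}_n * \gamma; step (iv): MC integration merely evaluates that entropy.
import numpy as np
from scipy.special import logsumexp

def gmm_log_density(t, centers, beta):
    # log density of the n-mode mixture with centers `centers`, covariance beta^2 I_d
    n, d = centers.shape
    sq = ((t[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return logsumexp(-sq / (2 * beta ** 2), axis=1) \
        - np.log(n) - 0.5 * d * np.log(2 * np.pi * beta ** 2)

def sp_entropy(centers, beta, n_mc=20_000, chunk=500, rng=None):
    # \hat{h}_{SP} = h(\hat{P}_n * \gamma), evaluated by Monte Carlo integration
    rng = np.random.default_rng(rng)
    n, d = centers.shape
    t = centers[rng.integers(n, size=n_mc)] + beta * rng.standard_normal((n_mc, d))
    logs = [gmm_log_density(t[i:i + chunk], centers, beta) for i in range(0, n_mc, chunk)]
    return -np.concatenate(logs).mean()

def sp_mutual_information(sample_s, inputs, beta, n=2_000):
    # sample_s(x, k): k noisy forward-pass samples of S_ell for input x, shape (k, d)
    cond = [sample_s(x, n) for x in inputs]
    h_cond = np.mean([sp_entropy(s, beta) for s in cond])  # (1/m) sum_i h(T|X=x_i)
    h_marg = sp_entropy(np.concatenate(cond), beta)        # h(T) from pooled samples
    return h_marg - h_cond
```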
\"}", "{\"title\": \"Response to Reviewer 1 (part 1)\", \"comment\": \"\\\"Since the authors have this convergence bound stated in Theorem 1, it would be great to see it being used - how many samples are needed/being used in the experiments? What should the error bars be around mutual information estimates in the experiments? If the bound is too loose for a reasonable number of samples, then what\\u2019s the use of it?\\\"\\n\\nThank you for this very relevant comment. We first note that Theorem 1 provides a worst-case result: it bounds the absolute-error risk of the differential entropy estimation given the worst possible probability distribution. What this means in practice is that the bound is quite pessimistic because the distributions induced by the DNN generally do not follow the pathological structures that attain the worst-case bound.\\n\\nWe emphasize that the theoretical bound is still worthwhile. From a theoretical perspective it gives justification for applying our estimator in the noisy DNN setup and in other problems, and a guideline for determining the approximate highest dimensionality that we can handle. From a practical perspective it gives a worst-case starting point for the number of samples, n, which can be reduced if the estimator empirically performs better than worst-case.\\n\\nThis is, in fact, how the bound was used for producing our simulation results. Generating the curves in our plots required running the sample-propagation differential entropy estimator multiple times. First, estimating the mutual information term of interest I(X;T_\\\\ell) for a given set of DNN parameters involves computing m+1 differential entropy estimates, where m is the size of the empirical dataset. Then, we had to estimate I(X;T_\\\\ell) not just once but for each epoch of training. To overcome this computational burden while adhering to the theoretical result, we tested the value of n given by Theorem 1 on a few points of the curve and reduced it until the overall computation cost of producing the full curve became reasonable. To ensure estimation accuracy was not compromised we empirically tested that the estimate remained stable.\\n\\nAs a concrete example, to achieve an error bound of 5% of Fig. 5 plot's vertical scale (which amounts to a 0.4 absolute error bound), the number of samples required by Theorem 1 is n=4*10^9. This number is too large for our computational budget. Performing the above procedure for reducing n, we find good accuracy is achieved for n = 4*10^6 samples (Theorem 1 has the pessimistic error bound of 3.74 for this value). Adding more samples beyond this value does not change the results.
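The back-off procedure can be sketched as follows (our own illustration; the tolerance and the factor-of-10 schedule are placeholders rather than the exact values we used):

```python
# Start from the worst-case n suggested by Theorem 1 and shrink it while the
# SP estimate stays stable (within `tol`) on a few probe points of the curve.
def pick_sample_size(estimate, n_theorem, tol=0.05, n_min=10_000):
    """`estimate(n)` runs the SP estimator with n samples on the probe points."""
    n, reference = n_theorem, estimate(n_theorem)
    while n // 10 >= n_min and abs(estimate(n // 10) - reference) <= tol:
        n //= 10
        reference = estimate(n)
    return n
```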
\"}", "{\"title\": \"Response to Reviewer 2\", \"comment\": \"\\\"The main concern of the paper is its conclusion. While the experiments in the paper did show the mutual information goes down as the clustering effect is enhanced, it only means 'clustering' and 'compression' are correlated; but the paper claims 'clustering' is the source of 'compression', i.e., 'clustering' leads to 'compression'. This conclusion is problematic. For example, looking at Figure 5(a), as the mutual information goes down from epoch 28 to epoch 8796, not only does the clustering get enhanced, but the loss is also going down. Thus, alternatively, one can also argue the loss (i.e., 'relevance') is the cause of 'compression' instead of 'clustering'. From another aspect, the effect of 'clustering' is also related to the loss, i.e., it is the loss function that pushes the points of the same class to be closer; then, even if the direct cause of 'compression' is 'clustering', the root cause might still be the loss (i.e., 'relevance').\\\"\\n\\nWe agree with R2 that, ultimately, all of the dynamics we observe are driven by the training algorithm working to reduce the training loss. However, our results show that clustering is the immediate cause of compression, i.e., whenever information compression occurs it is due to clustering. Furthermore, we have shown far more than simple correlation between compression and clustering for the following two reasons:\\n\\n\\n1. The analysis of information transmission over an additive white Gaussian noise (AWGN) channel in Section 4 shows directly how moving the representations of training samples closer together (that is, clustering them) causes a reduction in I(X;T_\\\\ell) (that is, compression).\\n\\n\\n2. Our analysis of a minimal example in Section 4 illustrates the causal relationship between clustering and compression in low dimensions, where human geometric intuitions are reliable.\\n\\n\\nWe also stress that our results are incompatible with a claim that *reduction in loss* is correlated with compression. Multiple different trends are observable in relation to loss and compression: while there are instances where reduction in loss and compression simultaneously occur, there are other instances when loss decreases but mutual information rises. Two examples of the latter are the following:\\n\\n\\n1. Fig. 5(a), between epochs 80 and 541, shows that the training loss decreases while I(X;T_\\\\ell) increases. The scatter plots show why: the representations of the training samples in layer 5 are rearranged from compact clusters into a more uniform (spread out) tube.\\n\\n\\n2. Similarly, a comparison of the results in Fig. 5(a) and 5(b) shows that the introduction of Parseval regularization does not interfere with the reduction of training loss, but it does eliminate compression and the mutual information keeps increasing from roughly epoch 500 until the end of training. The reason why compression is eliminated is that Parseval regularization suppresses the network's ability to saturate all its units and form the tight clusters at the corners of the cube as it did in the unregularized experiment from Fig. 5(a). Indeed, the final constellation of internal representations in Fig. 5(b) (see scatter plot for epoch 7230) has no tight clusters. This stands in accordance with the claimed relations between clustering and compression.\\n\\n\\n\\\"In Figure 5(a), why does the mutual information increase from epoch 80 to epoch 541?\\\"\\n\\nThe constellation of training samples in layer 5 at epoch 541 is an elongated tube, while at epoch 80 it is a set of compact clusters. The increase in I(X;T_\\\\ell) is consistent with our explanation that compression is caused by clustering: the tube in epoch 541 is more spread out than the clusters in epoch 80.\\n\\n\\\"Also, it seems that the test loss increases as the I(X;T) decreases from epoch 541 to epoch 8796. This seems to be counter-intuitive to the claim that 'lower I(X;T) implies higher generalization ability'. Can you explain this phenomenon?\\\"\\n\\n\\nWe emphasize that we never claimed that lower I(X;T_\\\\ell) values imply better generalization: this claim was made in (Shwartz-Ziv & Tishby, 2017).
In fact, as the reviewer points out, our empirical results indicate that this is not always the case. Fig. 5(a) is an excellent example of that, showing that too much clustering/compression probably results in overfitting, which is why the test loss grows towards the end. For practical purposes, early stopping would probably have been helpful here. However, since we are not concerned with attaining the best possible classification results, but rather understanding compression, we ran the training beyond the optimal stopping point. We hope this clarifies our stance regarding the relation between compression and generalization, and we will add a discussion in the revision to this effect.\"}", "{\"title\": \"Clarification of Compression Phase in Information Bottleneck theory of DNNs\", \"review\": \"This paper provides a principled way to examine the compression phase, i.e., I(X;T), in deep neural networks. To achieve this, the authors provide a theoretically sound entropy estimator to estimate mutual information. Empirically, the paper did observe this compression phase across both synthetic and real-world data and relates this compression behavior with geometric clustering.\\n\\nPros:\\n- The paper is well-written and easy to understand.\\n- The framework for analyzing the mutual information in DNNs is theoretically sound and robust.\\n- The finding of connecting clustering with compression is novel and inspiring.\\n\\nQuestions:\\n- The main concern of the paper is its conclusion. While the experiments in the paper did show the mutual information goes down as the clustering effect is enhanced, it only means `clustering` and `compression` are correlated; but the paper claims `clustering` is the source of `compression`, i.e., `clustering` leads to `compression`. This conclusion is problematic. For example, looking at Figure 5(a), as the mutual information goes down from epoch 28 to epoch 8796, not only does the clustering get enhanced, but the loss is also going down. Thus, alternatively, one can also argue the loss (i.e., `relevance`) is the cause of `compression` instead of `clustering`. From another aspect, the effect of `clustering` is also related to the loss, i.e., it is the loss function that pushes the points of the same class to be closer; then, even if the direct cause of `compression` is `clustering`, the root cause might still be the loss (i.e., `relevance`).\\n- In Figure 5(a), why does the mutual information increase from epoch 80 to epoch 541? Also, it seems that the test loss increases as the I(X;T) decreases from epoch 541 to epoch 8796. This seems to be counter-intuitive to the claim that \\\"lower I(X;T) implies higher generalization ability\\\". Can you explain this phenomenon?\\n\\n[UPDATE] The authors addressed my concerns in a detailed way, and the updated revision is rather robust; therefore, I decided to change my score to accept.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"An interesting paper, but the observations from the experiments could be stated more clearly.\", \"review\": \"Response to author comments:\\n\\nI would like to thank the authors for answering my questions and addressing the issues in their paper. I believe the edits and newly added comments improve the paper. \\n\\nI found the response regarding the use of your convergence bound very clear.
It is a very reasonable use of the bound and now I see how you take advantage of it in your experimental work. However, I believe the description in the paper, in particular, the last two sentences of Remark 1, could still be improved and better explain how a reasonable and computationally feasible n was chosen.\\n\\nTo clarify one of my questions, you correctly assumed that I meant to write the true label, and not the output of the network.\\n\\n\\n***********\\n\\nThe paper revises the techniques used in Tishby\\u2019s and Saxe et al. work to measure mutual information between the data and a hidden layer of a neural network. The authors point out that these previous papers\\u2019 measures of mutual information are not meaningful due to lack of clear theoretical assumptions on the randomness that arises in DNNs.\\n\\nThe authors propose to study a perturbed version of a neural network to turn it into a noisy channel making the mutual information estimation meaningful. The perturbed network has isotropic Gaussian noise added to each layer's nodes. The authors then propose a method to estimate the mutual information of interest. They suggest that the mutual information describes how distinguishable the hidden representation values are after a Gaussian perturbation (which is equivalent to estimating the means of a mixture of Gaussians). Data clustering per class is identified as the source of compression.\\n\\nIn addition to proposing a way to estimate the mutual information of a stochastic network, the authors analyze the compression that occurs in stochastic neural networks. \\n\\nIt seems that the contribution is empirical, rather than theoretical, as the theoretical result cited is going to appear in a different article. After reading that the authors \\u201cdevelop sample propagation (SP) estimator\\u201d, I expected to see a novel approach/algorithm. However, unless I missed something, the proposed method for estimating MI for this Gaussian channel is just doing MC estimation (and no guarantees are established in this paper). The convergence bounds for the SP estimator are presented (Theorem 1); however, the result is cited from another article of the authors, so it is not a contribution of this submission. \\n\\nSince the authors have this convergence bound stated in Theorem 1, it would be great to see it being used - how many samples are needed/being used in the experiments? What should the error bars be around mutual information estimates in the experiments? If the bound is too loose for a reasonable number of samples, then what\\u2019s the use of it?\\n\\nThe authors perform two types of experiments on MNIST. The first experiment demonstrates that no compression is observed per layer and the mutual information only increases during training (as measured by the binning approach, which is supposed to track the mutual information of the stochastic version of the network). The second experiment demonstrates that deeper layers perform more clustering. \\n\\nRegarding the first experiment, could the authors clarify how per unit and per entire layer compression estimation differs?\\n\\nAlso, in my opinion, more clustered representations seem to indicate that the mutual information with the output increases. Could the authors comment on how the noise levels in this particular version of a stochastic network affect the mutual information with the output and the clustering?
Do more clustered representations lead to increased mutual information of the layer with the output?\\n\\nI found it fairly difficult to summarize the experimental contribution after the first read. I think the presentation and summary after each experiment could be improved and made more reader-friendly. For example, the authors could include a short section before the experiments stating their hypothesis and pointing to the experiment/figure number supporting their hypothesis.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"ICLR 2019 Conference Paper636 AnonReviewer3\", \"review\": \"This paper studied the information bottleneck principle for deep learning. In the paper by (Shwartz-Ziv & Tishby '17), it is empirically shown that the mutual information I(X;T) between input X and internal layers T decreases, which is called a compression phase. In this paper, the authors found that the compression phase does not always happen and that the shape of the curve of I(X;T) highly depends on the \\\"binning size\\\" which is used for estimating mutual information by (Shwartz-Ziv & Tishby '17). Then the authors proposed to use a noisy DNN to make sure the map X->T is stochastic, then proposed a guaranteed mutual information estimator. Then some empirical results are shown.\\n\\nI think the problem in (Shwartz-Ziv & Tishby '17) does exist and their result is highly questionable. However, I have some major questions about this paper.\\n\\n1. In this paper a noisy DNN was proposed. However, how do you choose the noise level \\\\beta? If I understand correctly, the noise level plays a similar role to the binning size in (Shwartz-Ziv & Tishby '17). The noise level going to zero is similar to the binning size going to zero. I wish to see a figure about how different \\\\beta affects the curve of I(X;T) (similar to Figure 1 but letting \\\\beta change). \\n\\n In Figure 4(d) there is a plot showing how different \\\\beta values affect the mutual information, but the x-axis is \\\"weight\\\". I wonder how the curve of mutual information changes w.r.t. \\\\beta if the x-axis is training epochs. Are your statements stable with respect to \\\\beta? \\n\\n2. I think Section 3 and Theorem 1 are interesting and insightful. But I notice that in Section 10 you mentioned that this will be a separate paper. Is it OK to put them together in this paper?\\n\\n3. The paper by (Shwartz-Ziv & Tishby '17) has not passed a peer-review process and it is still a preprint. This paper does little more than point out some deficiencies of (Shwartz-Ziv & Tishby '17) (except Section 3 and Theorem 1, which I think should be an independent paper). I think such a paper should not be published as a conference paper before (Shwartz-Ziv & Tishby '17) passes a peer-review process.\\n\\nSo overall, I think this paper should not be accepted by ICLR at this point. I think Section 3 and Theorem 1 should become an independent paper, and the DNN approach can be an application of the mutual information estimator.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}" ] }
rkxusjRctQ
Learning models for visual 3D localization with implicit mapping
[ "Dan Rosenbaum", "Frederic Besse", "Fabio Viola", "Danilo J. Rezende", "S. M. Ali Eslami" ]
We consider learning-based methods for visual localization that do not require the construction of explicit maps in the form of point clouds or voxels. The goal is to learn an implicit representation of the environment at a higher, more abstract level, for instance that of objects. We propose to use a generative approach based on Generative Query Networks (GQNs, Eslami et al. 2018), asking the following questions: 1) Can GQN capture more complex scenes than those it was originally demonstrated on? 2) Can GQN be used for localization in those scenes? To study this approach we consider procedurally generated Minecraft worlds, for which we can generate images of complex 3D scenes along with camera pose coordinates. We first show that GQNs, enhanced with a novel attention mechanism, can capture the structure of 3D scenes in Minecraft, as evidenced by their samples. We then apply the models to the localization problem, comparing the results to a discriminative baseline, and comparing the ways each approach captures the task uncertainty.
[ "generative learning", "generative models", "generative query networks", "camera re-localization" ]
https://openreview.net/pdf?id=rkxusjRctQ
https://openreview.net/forum?id=rkxusjRctQ
ICLR.cc/2019/Conference
2019
{ "note_id": [ "Sylupyz-l4", "B1xGm5kRkE", "SyluN9utCm", "BJeJKJUERm", "rJeroar407", "BkgP-or4Am", "HylrOFH4RQ", "BJlDJti1C7", "rkxoWDUT2X", "Hygz4bf93Q", "Sygu18wv3X" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544785856508, 1544579609542, 1543240239661, 1542901622532, 1542901148535, 1542900479393, 1542900076972, 1542596831321, 1541396227443, 1541181737625, 1541006815641 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper635/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper635/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper635/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper635/Authors" ], [ "ICLR.cc/2019/Conference/Paper635/Authors" ], [ "ICLR.cc/2019/Conference/Paper635/Authors" ], [ "ICLR.cc/2019/Conference/Paper635/Authors" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper635/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper635/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper635/AnonReviewer3" ] ], "structured_content_str": [ "{\"metareview\": \"The paper proposes a method that learns mapping implicitly, by using a generative query network of Eslami et al. with an attention mechanism to learn to predict egomotion. The empirical findings is that training for egomotion estimation alongside the generative task of view prediction helps over a discriminative baseline, that does not consoder view prediction. The model is tested in Minecraft environments.\\nA comparison to some baseline SLAM-like method, e.g., a method based on bundle adjustment, would be important to include despite beliefs of the authors that eventually learning-based methods would win over geometric methods. For example, potentially environments with changes can be considered, which will cause the geometric method to fail, but the proposed learning-based method to succeed.\\n\\nMoreover, there are currently learning based methods for the re-localization problem that the paper would be important to compare against (instead of just cite), such as \\\"MapNet: An Allocentric Spatial Memory for Mapping Environments\\\" of Henriques et al. and \\\"Active Neural Localization\\\" of Chaplot et al. . In particular, Mapnet has a generative interpretation by using cross-convolutions as part of its architecture, which generalize very well, and which consider the geometric formation process. The paper makes a big distinction between generative and discriminative, however the architectural details behind the egomotion estimation network are potentially more or equally important to the loss used. This means, different discriminative networks depending on their architecture may perform very differently. Thus, it would be important to present quantitative results against such methods that use cross-convolutions for egomotion estimation/re-localization.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"lack of experiments to obvious geometric baselines and previous learning-based methods for localization\"}", "{\"title\": \"Additional comment\", \"comment\": \"The authors did not address my concerns much. 
I keep my rating at 5 (leaning to 4), as I still think there are more experiments that could be included to strengthen the paper (and I think the paper would benefit from this in the long run).\\n\\n[Comparison to traditional SLAM]\\nWhile the claimed main contribution of the paper is that utilizing implicit mapping would benefit the model on the localization task, no comparison to a method with explicit mapping is shown. Direct visual SLAM approaches, like LSD-SLAM, should perform well in the Minecraft environment given sequential image input. These methods also offer the additional benefit of interpretable explicit geometric maps. Despite these comments, the authors do not offer an experiment comparing their approach to traditional approaches, and therefore we are left in the dark about the relative performance of this approach. I suggest the authors compare with the SLAM methods by using a sequence of frames collected in the Minecraft environment and performing the localization task given sampled frames. The strengths of the proposed method and the SLAM methods then need to be discussed.\\n\\n[Using real data]\\nWe are still far from achieving a generative model for real-world scenes. Therefore, it is not convincing that the proposed approach will address localization problems with real data. The rebuttal did not address this concern.\\n\\n[Interpretability]\\nThe proposed model representation is much less interpretable compared to traditional SLAM algorithms, as the map is implicitly represented. The authors do not provide insight addressing this point but simply leave it for further research, without a clear path towards superiority over traditional SLAM.\\n\\n[Quantization of discriminative model output]\\nI believe de-correlating nearby pose values will hurt the performance of the discriminative model and possibly lead to an incorrect conclusion. However, no further experiment addressing this question was provided during the rebuttal period.\"}", "{\"title\": \"Reviewer 3 additional comments\", \"comment\": \"We thank the authors for the provided additional details. After reading their responses, I upgrade my initial rating from 5 to 6.\"}", "{\"title\": \"Thank you for your review and helpful comments.\", \"comment\": \"Thank you for your review and helpful comments.\\n \\nRegarding using PoseNet as a baseline - As we describe briefly in the related work, our discriminative baseline, the reversed-GQN, can be thought of as an implementation of an adaptive PoseNet, where instead of training the weights from scratch on new scenes, we train a representation network that outputs a scene-specific vector on which the decoder is conditioned. The reason we need to make PoseNet adaptive is that the number of observations in our task is likely to be too small to train a whole network from scratch. Additionally, the fact that the architecture of our baseline and the input it is conditioned on are similar to our proposed generative model reduces confounding factors and makes the comparison of the approaches easier. We have emphasized this further in section 4.\\n\\nRegarding the work being incremental - The main objective and novelty of this paper are to propose a generative approach and demonstrate its advantages and disadvantages compared to the discriminative approach, which is the standard in recent work.\\nHowever, even when considering the model itself, we think that the novel sequential attention mechanism that we introduced is a significant and interesting extension of the recent GQN model.
This is what enables the model to move from toy data with a handful of simple objects in a room to the more complex scenes in Minecraft (see figure 4 for quantitative comparison), and this is also what allows the model to be used for localization (see table 1). The visualization of the attention (figure 6) is also an interesting outcome when compared to the classic methods of image feature extraction for localization. \\n\\nRegarding using more complex data and real-life applications - We agree that demonstrating our method on real data with more complex scenes and higher-resolution images would be interesting. However, since we are proposing a new approach and base it on a model that was previously only shown to work on toy data, we think it is important to take intermediate steps and analyze the results. As far as we know, all the localization methods based on machine learning still fall far behind handcrafted methods and therefore are still in an exploratory phase. Furthermore, within the machine learning methods, as we mention in the paper, the generative approach that we propose has a clear limitation of computation time (since it needs to optimize at test time). Using real data would make this disadvantage even worse, potentially preventing us from demonstrating the benefits of the approach (better capturing the uncertainty and being free from a prior), which we think are important to the machine learning community and could drive further progress. We have emphasized this point further in section 2.\"}", "{\"title\": \"Thank you for your review and helpful comments.\", \"comment\": \"Thank you for your review and helpful comments.\\n\\nRegarding the interpretability of implicit mapping models - We agree that the model is less interpretable in this aspect, and we emphasized this further in the introduction. However, one approach that we find useful for understanding how the model captures the scene structure is to look at samples from new viewpoints (figures 5, 7, 9) and compare them to the ground-truth image. More broadly, although neural representations have demonstrated their efficacy on a wide range of tasks, interpretation of neural representations remains an active area of research. We believe many of the findings in interpretability will transfer to our model as well.\\n \\nRegarding the quantization of the discriminative model\\u2019s output - We quantized the output because we wanted to capture the full distribution in the simplest way. This is because we are interested in analyzing the way the (potentially multi-modal) uncertainty is captured. In addition, quantizing the output with the same values as the ones we use for the grid search we perform with the generative approach makes the comparison easier.\\nIn contrast to other ways of capturing multi-modal distributions (like mixture models), the downside of quantization is the de-correlation of nearby values and that the smoothness needs to be learned from data. However, looking at the output maps compared to the generative localization maps (figures 7 and 9) we see that the discriminative output is much smoother, and in fact we mention this over-smoothing around the context points as one of the problems of the discriminative method.
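For concreteness, a minimal sketch of the quantized output (our own illustration; the grid resolution and the function names are placeholders, not our actual implementation):

```python
# The pose output is a softmax over the same discrete grid of values used for
# the generative grid search, so multi-modal uncertainty can be represented.
import numpy as np

GRID = np.linspace(-1.0, 1.0, 32)  # placeholder grid for one pose coordinate

def pose_to_bin(value):
    """Index of the grid cell containing a continuous pose coordinate."""
    return int(np.clip(np.digitize(value, GRID) - 1, 0, len(GRID) - 1))

def quantized_pose_loss(logits, pose):
    """Cross-entropy of the softmax over grid cells against the true cell;
    `logits` has one entry per grid cell of a single pose coordinate."""
    z = logits - logits.max()
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[pose_to_bin(pose)]
```

The de-correlation mentioned above is visible here: the cross-entropy treats adjacent cells as unrelated classes, so any smoothness has to be learned from the data.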
The main objective of this paper is to propose the generative approach and demonstrate its advantages and disadvantages compared to the discriminative approach, which is the standard in recent work.\\nHowever, even when considering the model itself, we think that the novel sequential attention mechanism that we introduced is a significant and interesting extension of the recent GQN model. This is what enables the model to move from toy data with a handful of simple objects in a room, to the more complex scenes in Minecraft (see figure 4 for quantitative comparison), and this is also what allows the model to be used for localization (see table 1). The visualization of the attention (figure 6) is also an interesting outcome when compared to the classic methods of image feature extraction for localization. \\n\\nRegarding using real data - We agree that demonstrating our method on real data with more complex scenes and higher resolution images would be interesting; however, since we are proposing a new approach and base it on a model that was previously only shown to work on toy data, we think that it is important to take intermediate steps and analyze the results. As far as we know, all the localization methods based on machine learning still fall far behind handcrafted methods and therefore are still in an exploratory phase. Furthermore, within the machine learning methods, as we mention in the paper, the generative approach that we propose has a clear limitation of computation time (since it needs to optimize at test time). Using real data would make this disadvantage even worse, potentially preventing us from demonstrating the benefits of the approach (better capturing the uncertainty and being free from a prior), which we think are important to the machine learning community and could drive further progress. We have emphasized this point further in section 2.\\n\\nRegarding a comparison to LSD-SLAM or a similar method - As you mention, applying LSD-SLAM will probably not work because the re-localization task is different from SLAM and specifically contains a sparse set of observations with a weaker sequential prior. However, we do believe that it is possible to develop a handcrafted method for our task that will work better than our proposal (e.g. bundle-adjustment on some key-point matches). \\nWe don't think that such a comparison will be useful here, because as we mention in the introduction and discussion, handcrafted methods in general still outperform machine learning approaches for localization (see Walch et al. 2017 for example). We do believe that machine learning approaches could eventually outperform handcrafted methods, and we think that a generative approach that better captures the uncertainty, as we demonstrate in the paper, could be a key factor in making this happen.\"}", "{\"title\": \"Thank you for your comments and positive feedback.\", \"comment\": \"Thank you for your comments and positive feedback.\\n\\nRegarding the effect of attention in the discriminative model - We agree that it is smaller than in the generative direction; however, table 1 shows there's still a significant difference resulting from the attention (compare line 1 with line 3). This is also visible on the right side of figure 4. We think this might happen because a lot of what the discriminative model captures comes from the prior on camera positions rather than using the images. 
We added that to our discussion on table 1.\\n\\nRegarding using real data - We agree that demonstrating our method on real data with more complex scenes and higher resolution images would be interesting; however, since we are proposing a new approach and base it on a model that was previously only shown to work on toy data, we think that it is important to take intermediate steps and analyze the results. As far as we know, all the localization methods based on machine learning still fall far behind handcrafted methods and therefore are still in an exploratory phase. Furthermore, within the machine learning methods, as we mention in the paper, the generative approach that we propose has a clear limitation of computation time (since it needs to optimize at test time). Using real data would make this disadvantage even worse, potentially preventing us from demonstrating the benefits of the approach (better capturing the uncertainty and being free from a prior), which we think are important to the machine learning community and could drive further progress. We have emphasized this point further in section 2.\"}", "{\"title\": \"typo fixed => training for 4M iterations.\", \"comment\": \"You are right, this was a typo, and we have corrected it now.\\nWe are training the models for 4M iterations. This can also be seen in the training curves in figure 4.\\nthanks!\"}", "{\"comment\": \"Thanks for your great work. I have one question about your paper.\\n\\nIn Appendix B, you mention \\\"We train the model for 400M iterations\\\", but I wonder if this is a typo for \\\"400K\\\" or \\\"4M\\\", because training would take too much time if it is true (in the work of GQN by Eslami et al., they train the model for 2M iterations).\\n\\nIf this is not a typo, I would like to ask you why this model needs so many iterations to train compared to the simple GQN.\\n\\nThank you.\", \"title\": \"Is the number of training iterations correct?\"}", "{\"title\": \"Very informative and great paper\", \"review\": \"This paper proposes generative approaches to localization without explicit high-definition geometric maps. A generative baseline (GQN) and an extension to that with attention are introduced in the context of localization.\\n\\nThe paper is clearly written, and the relevant previous work is discussed to a satisfying degree. \\n\\nFigures 2 and 3 help a lot in understanding the GQN and the proposed attention version. \\n\\nI am intrigued by the result presented in table 1, especially the fact that attention is helping the generative method quite a bit, but not so much for the discriminative method. This is not discussed in detail in the paper; I suggest the authors expand their discussion a bit in this direction. \\n\\nThe second suggestion is in terms of the data. I understand the motivations behind using the Minecraft world. However, real data is still quite different from this data: consider the various differences in the real world between training and test time; even at the same location, the texture of the sky and the lighting will change. On top of this, there is quite a bit of variance in levels of detail compared to the monotonous Minecraft world. I suggest using a real dataset for another set of experiments. This can be added as an appendix. \\n\\nIn general, very good motivation, and intriguing work for the community -- localization with high-definition geometric maps is tough to scale, and implicit world representations are an important piece in relaxing this dependency. 
I believe this paper will motivate more future work.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Weak evaluation\", \"review\": [\"**Summary of the paper**\", \"This paper studies the problem of visual re-localization, where we are interested in estimating the camera pose of a new image from a set of source images and their camera poses. Instead of explicitly designing the structure of a map of 3D scenes (e.g. occupancy grids or point clouds), the paper proposes implicitly learning an abstract map representation. Specifically, the paper proposes a generative method based on Generative Query Networks (GQNs) augmented with an attention mechanism. The authors apply this model to the visual re-localization problem. To train and test the proposed model, the authors introduce the Minecraft random walk dataset, which consists of images and their camera poses extracted from randomly generated trajectories in the Minecraft environment. The proposed model is compared against a discriminative counterpart, which is trained to directly predict the target camera pose and achieves better MSE.\", \"**Clarity**\", \"Above average\", \"**Significance**\", \"Below Average\", \"**Detailed comments**\", \"_Paper Strengths_\", \"The idea of leveraging generative models' knowledge of \\\"maps\\\" to perform visual localization is interesting. This gives learning frameworks the flexibility of building a latent representation of maps which may yield better performance, instead of being restricted to pre-defined representations.\", \"The paper is very well-written and easy to follow.\", \"The authors did a good job presenting the proposed methods. The descriptions and formulations are clear. Both Figure 2 and Figure 3 are helpful for understanding the GQNs and the proposed attention mechanism.\", \"The patch dictionary for the attention mechanism seems effective, especially when dealing with a set of context images capturing the same scene.\", \"The authors are honest about the limitations of the proposed framework compared to classic approaches.\", \"The visualizations of results are clear. Particularly, Figure 5 and Figure 7 give easily interpretable representations of the results.\", \"_Paper Weaknesses_\", \"Implicitly learning a map of the scene is mentioned as a strength in the paper, but this comes at the high cost of interpretability. Without an explicit map representation, it is difficult to understand the failure cases - does the model not understand the 3D scene well or does the model have a hard time accurately predicting camera poses?\", \"Minecraft is an interesting environment for proof of concept, but lacks much of the subtlety of the real world.\", \"Building a framework that is able to perform the localization task from real-world scenes is more interesting. Learning generative models of real-world scenes is known to be difficult, which makes this framework impractical. There are Google Street View and indoor datasets the authors can try to utilize.\", \"The aforementioned point is supported by the fact that the localization performance of the proposed model on real-world scenes is missing.\", \"The reviewer does not find enough novelty in the proposed model, which is an iterative improvement on GQNs.\", \"The paper only compares the proposed model against its discriminative counterpart, which is not sufficient. 
While the authors strongly argue that exploiting the proposed implicit representations of scenes is more beneficial than utilizing the pre-defined explicit representations, the only baseline is using the same implicit representations. Although the reviewer is aware that this model does not use complete video sequences, benchmarking against a visual monocular SLAM algorithm, like LSD-SLAM [1], would contextualize the claim.\", \"Why quantize the discriminative model's output? This de-correlates nearby pose values. The paper could benefit from an explanation for not using a straightforward regression over pose variables.\", \"Overall, the reviewer does not find enough novelty in any aspect except the idea of utilizing a generative model for visual localization with implicitly learned maps, which is not fully demonstrated in the experiment section (i.e., no comparison to baselines using explicit maps).\", \"A differentiating factor for this paper could be tackling one of the open problems remaining in SLAM as identified in [2], like lifelong learning or semantic mapping.\", \"_Reproducibility_\", \"Given the clear description in the main paper and the details provided in the appendix, the reviewer believes reproducing the results is possible.\", \"_Conclusion_\", \"Overall, the reviewer believes this paper is well presented and reproducible. However, the paper does not propose to solve a novel problem, nor does it present a very novel method. Although the idea of using existing generative networks for localization is interesting, the paper misses important baselines relying on explicit map representations and is not sufficiently convincing. Moreover, requiring a generative model significantly limits the possibility of utilizing the proposed model for real-world applications. While the paper does present a new dataset built in Minecraft which is suitable for demonstrating the strengths of the proposed method, the reviewer does not find this significant. Therefore, the reviewer recommends a rejection.\", \"_Reference_\", \"[1] Engel, Jakob, Thomas Sch\\u00f6ps, and Daniel Cremers. \\\"LSD-SLAM: Large-scale direct monocular SLAM.\\\" European Conference on Computer Vision. Springer, Cham, 2014.\", \"[2] Cadena, Cesar, et al. \\\"Past, present, and future of simultaneous localization and mapping: Toward the robust-perception age.\\\" IEEE Transactions on Robotics 32.6 (2016): 1309-1332.\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"An interesting but incremental application of Eslami et al. (2018)\", \"review\": \"Summary:\\nEslami et al. (2018) proposed a deep neural framework for a scene representation and renderer (the Generative Query Network: GQN), which generates an image from a scene representation and a query camera pose. In this work, the authors use the GQN to estimate the camera pose from a target image. Existing learning approaches are discriminative, meaning that they are trained to output the camera pose in an end-to-end fashion, while this paper proposes a generative method more in line with hand-crafted methods, which still largely outperform learning approaches. Using the GQN with the proposed attention mechanism, the method captures an implicit mapping of the environment at a more abstract level. This implicit representation is then used to optimize the likelihood of the target pose in a probabilistic graphical model framing. 
They compare their solution to a discriminative baseline, based on a reversed GQN.\", \"pros\": [\"As shown in Figure 7, the generative approach seems to better capture the implicit representation associated with the mapping between the scene geometry and the image.\", \"The proposed generative solution seems to be more accurate than the discriminative baseline.\", \"As shown in Table 1, the proposed attention mechanism allows the model to focus on relevant parts of the context images, giving flexibility for more complex scenes.\", \"Unlike classical discriminative methods, the proposed solution can be easily used in new scenes (different from the ones used for learning) thanks to the representation network.\"], \"cons\": [\"The contribution seems incremental with respect to Eslami et al. (2018).\", \"Lack of comparisons to the state of the art; in particular, a comparison with PoseNet is necessary.\", \"The results are shown only on simple datasets of small images (32x32 pixels).\", \"A tradeoff between precision and computation time is necessary to handle large environments because of space discretization. Thus, the method seems far from being exploitable in a real-life SLAM application (e.g. an autonomous vehicle).\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}" ] }
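The generative localization procedure debated in the record above (score every pose on a quantized grid by the likelihood the generative model assigns to the target image, then normalize into a localization map) can be illustrated with a minimal sketch. The decoder interface and the isotropic Gaussian image likelihood below are assumptions for illustration, not the paper's exact choices.

```python
import numpy as np

def localize(decoder, scene_repr, target_image, pose_grid, sigma=1.0):
    """Generative re-localization by grid search over quantized poses.

    decoder(scene_repr, pose) -> predicted image is a hypothetical
    stand-in for a GQN-style renderer conditioned on the scene
    representation; pose_grid matches the quantization used for the
    discriminative baseline's output bins.
    """
    log_liks = []
    for pose in pose_grid:
        pred = decoder(scene_repr, pose)
        # Isotropic Gaussian observation model (an assumption): up to a
        # constant, the log-likelihood is the negative squared error.
        log_liks.append(-0.5 * np.sum((pred - target_image) ** 2) / sigma ** 2)
    log_liks = np.array(log_liks)
    # Normalizing gives the localization map discussed in the rebuttals:
    # a posterior over the pose grid under a uniform prior.
    posterior = np.exp(log_liks - log_liks.max())
    posterior /= posterior.sum()
    return int(np.argmax(log_liks)), posterior
```

The exhaustive search over the grid is also what makes the test-time cost grow with the environment size, which is the computation-time limitation the authors concede in their rebuttals.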
S1e_ssC5F7
Hyper-Regularization: An Adaptive Choice for the Learning Rate in Gradient Descent
[ "Guangzeng Xie", "Hao Jin", "Dachao Lin", "Zhihua Zhang" ]
We present a novel approach for adaptively selecting the learning rate in gradient descent methods. Specifically, we impose a regularization term on the learning rate via a generalized distance, and cast the joint updating process of the parameter and the learning rate into a maxmin problem. Some existing schemes such as AdaGrad (diagonal version) and WNGrad can be rederived from our approach. Based on our approach, the updating rules for the learning rate do not rely on the smoothness constant of optimization problems and are robust to the initial learning rate. We theoretically analyze our approach in full-batch and online learning settings, showing that it achieves performance comparable to other first-order gradient-based algorithms in terms of accuracy as well as convergence rate.
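Neither this abstract nor the reviews below reproduce the paper's equations, but a plausible reconstruction of the per-step maxmin problem they describe (an outer maximization over the inverse learning rate beta, penalized by a generalized distance to its previous value; the weight lambda and the exact second argument of the divergence are assumptions) is:

```latex
\max_{\beta > 0} \; \min_{x} \;\; f(x_t) + \langle g_t,\, x - x_t \rangle
  + \frac{\beta}{2}\,\lVert x - x_t \rVert^2
  - \lambda\, D_{\phi}(\beta, \beta_t)
```

Under this reading, the inner minimization gives the usual step x = x_t - g_t / beta, and the outer maximization trades a larger beta (a smaller step) against the phi-divergence penalty for moving away from beta_t.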
[ "Adaptive learning rate", "novel framework" ]
https://openreview.net/pdf?id=S1e_ssC5F7
https://openreview.net/forum?id=S1e_ssC5F7
ICLR.cc/2019/Conference
2019
{ "note_id": [ "rkeBezxIxE", "rygyfBVelV", "S1gxz6cfkN", "ryleH4UBA7", "HJgg6z8H0m", "BJeQlWUHRX", "H1g51qT92m", "S1lb8qKMnm", "Syeq6mf-3m" ], "note_type": [ "meta_review", "comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1545105900617, 1544729863417, 1543838984270, 1542968376164, 1542967992018, 1542967530631, 1541229026009, 1540688456810, 1540592578053 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper634/Area_Chair1" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper634/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper634/Authors" ], [ "ICLR.cc/2019/Conference/Paper634/Authors" ], [ "ICLR.cc/2019/Conference/Paper634/Authors" ], [ "ICLR.cc/2019/Conference/Paper634/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper634/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper634/AnonReviewer1" ] ], "structured_content_str": [ "{\"metareview\": \"All three reviewers found that the motivation for the proposed method was lacking and recommend rejection. The AC thus recommends the authors to take these comments in consideration when revising their manuscript.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Reject\"}", "{\"comment\": \"Hello,\\n\\nDuring the implementation of the 2d neural network on MNIST using the proposed algorithm, I got a problem. The initial value of gradient is big (since I put the random values in Ws and they are not probably close to the optimum) so when I use this new learning rate, it doesn't converge. I want to ask if there is an initial value on the learning rate in your algorithm in order to avoid that?\", \"one_solution\": \"we can do a step of regular gradient descent and after that change the update rules for iterations > 1.\\n\\nThanks\", \"title\": \"initial gradient problem\"}", "{\"title\": \"Comment\", \"comment\": \"I thank the reviewers for their response, and I keep my score.\"}", "{\"title\": \"Thanks for your time and insightful comments!\", \"comment\": \"1) Our main contribution (or focus) of this paper is to propose a framework for adjusting learning rate adaptively. We offer a novel viewpoint different from previous main approaches like line search and approximate second-order methods (BBstep, Adagrad, etc.).\\n\\n2) Our idea stems from the work of Daubechies et al. (2010), where the authors adjusted the weights of the weighted least squares problem by solving an extra objective function which added a regularizer about the weights to origin objective function. \\n\\n3) Since our framework can derive AdaGrad as a private case, we are more general scene so our bound doesn\\u2019t better than AdaGrad. However, our bound are almost same as AdaGrad.\"}", "{\"title\": \"Thanks for your time and helpful comments!\", \"comment\": \"1) Our idea stems from the work of Daubechies et al. 
(2010), where the authors adjusted the weights of the weighted least squares problem by solving an extra objective function which adds a regularizer on the weights to the original objective function.\\n2) We give a theoretical analysis for both update rules (see Theorems 6 and 7), not only for the original algorithm.\\n3) Compared with the Bregman divergence, the \\\\phi-divergence naturally imposes nonnegativity on \\\\beta and \\\\eta_t, which is necessary for learning rates.\", \"specific_comments_and_questions\": \"1) Since we take a convex regularization term on the learning rate and view it as a penalty with respect to the current learning rate, we need to regard this as a maximization problem. There is little difference when adding a regularization term and viewing it as a minimization problem. Actually, our paper proposes a framework of adaptive step size learning. \\n\\n2) The \\\\phi-divergence is natural for nonnegative variables, compared with the Bregman divergence, and does not have to be a probability (summing to 1), compared to the KL-divergence.\\n\\n3) We have shown some common \\\\phi-divergences and the relevant update rules in Appendix D.1, and most of them seem simple to solve.\\n\\n4) The objective functions of problems (5) and (7) are concave in \\\\beta and convex in x (since \\\\beta > 0). Setting both partial derivatives to zero (much like equation (12)), the solutions satisfy the saddle point condition, so the solutions are identical. \\n\\n5) We have shown in Lemma 2 that the solution of (5) can easily be extended to constrained cases. Moreover, in practical use, growth clipping is often necessary, which indicates constrained cases. Therefore, we choose the maxmin formulation instead of the minmax formulation.\\n\\n6) We would like to adaptively choose the learning rate during optimization rather than setting \\\\eta_t for \\\\beta_t a priori, and not on account of the smoothness.\\n\\n7) Equation (11), obtained by the alternating update rule, first uses the recommended step size \\\\eta to minimize over x, then maximizes over \\\\beta to get the next step size, and then turns back to get x using this step size. Our theorems show the convergence bounds for both methods. Perhaps there is no need to show when they coincide.\\n\\n9) First, we focus on convergence under different choices of the initial learning rate, while many methods, like GD, would fail for an extremely large initial learning rate. However, this doesn't bother us at all since we are free in choosing the initial rate. Second, like GD and many other optimization methods, the choice of the initial learning rate may need the Lipschitz constant or smoothness constant for the sake of convergence, but in our methods, it doesn't.\\n\\n10) We propose a novel framework for adaptively updating the learning rate with a regularization term, for which we take the \\\\phi divergence as an example in the paper. In this way, the bounds are given for the \\\\phi divergence generally. Therefore, we do not make special assumptions on \\\\alpha or \\\\beta_t.\\n\\n11) The bounds are given for a general \\\\phi divergence. Viewed as a special case of our framework with a specialized \\\\phi divergence, it is no wonder that AdaGrad enjoys a lower regret bound than the general regret bound we give.\\n\\n13) Due to the space limitation, our description may have misled you. Given a \\\\phi divergence, we have in total three different ways to update x_t and \\\\beta_t alternately, corresponding to Algorithms 1 and 2 on page 5, and Algorithm 3 in Appendix C. 
Actually, we should not mention Algorithm 1 here, for it is shown in Appendix D.1 that Algorithm 1 is always computationally unfriendly. As shown in Figure 1, Algorithm 2 outperforms Algorithm 3 when the initial learning rate is extremely large. However, at their best initial learning rates, their performance is comparable. Out of consideration for training stability with a large initial learning rate, we finally chose the second update rule, corresponding to Algorithm 2.\\n\\n14) Growth clipping does not affect the theoretical result, since it is only applied to the algorithms in the experiment. Practical optimization problems generally guarantee neither gradient smoothness nor global strong convexity, so it is necessary to apply growth clipping in case training collapses due to the stochastic gradient.\\n\\n15) Figure 2 describes the results of experiments carried out in the full gradient setting. Since there is no randomness, it is of no use to carry out duplicated experiments.\"}", "{\"title\": \"Thanks for your thoughtful review and your time!\", \"comment\": \"1) The main contribution (or focus) of this paper is to propose a framework for adjusting the learning rate adaptively. We offer a novel viewpoint different from previous main approaches like line search and approximate second-order methods (BBstep, Adagrad, etc.).\\n2) Compared with the Bregman divergence, the \\\\phi-divergence naturally imposes a nonnegativity constraint on \\\\beta and \\\\eta_t, which is necessary for learning rates, while the L_p normalization still cannot guarantee a nonnegativity condition for the learning rate.\\n3) Equation (6) is equivalent to (5). We just want to rewrite (5) in a clearer form.\\n4) In the classical gradient descent algorithm formulated as x_{t+1} = x_t - g_t / \\\\beta, for small \\\\beta, more precisely for \\\\beta < 2 / L, the algorithm has no guarantee of convergence. Our framework gives an upper bound on the runtime (O(1 / \\\\varepsilon)) or regret (O(\\\\sqrt(T))) for arbitrary \\\\beta_0. Moreover, like Adagrad or gradient descent, algorithms derived from our framework can also be given a suggested best initial learning rate for optimization (based on our regret bounds).\\n5) The proof of Theorem 6 can be found in the proofs of Theorems 21, 22, and 23.\"}", "{\"title\": \"minor generalization of AdaGrad style methods\", \"review\": \"The paper presents a generalization of the Adagrad type methods using a min-max formulation and then presents two alternate algorithms to solve this formulation.\\n\\nIt is unclear to me that much extra generalization has been achieved over the original AdaGrad paper. That paper simply presents the choice of hyperparameters as an optimal solution to a proximal primal dual formulation. The formulation presented here appears to be another form of the proximal mapping formulation, and so it is unclear what the advance here is. The AdaGrad paper used a particular Bregman divergence, and different divergences yield slightly different methods, as is observed here by the authors when they use different divergence measures.\\n\\nThe Bregman divergences do make sense from a primal dual proximal formulation point of view, but why do you use a discrepancy function in your min-max formulation that comes from the \\\\phi-divergence family? Why not consider an L_p normalization of the discrepancy? \\n\\nThe difference between formulations (5) and (6) is not clearly specified. Did you mean to drop the constraints that \\\\beta \\\\in \\\\cal{B}_t ? 
Otherwise, why is (6), which looks to be a re-write of (5), unconstrained and hence separable?\\n\\nThe authors claim that the method is free of parameter choices, but the initial \\\\beta_0 seems to be a crucial parameter here since it forms both a target and a lower bound for subsequent \\\\beta_t's. How is this parameter chosen and what effect does it have on convergence? From the results (Figs in Sec 5), this choice does significantly impact the final test loss obtained. \\n \\nI could not find a proof for Thm 6 in the appendix. Did I overlook it or is there a typo?\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Novel idea but theoretical guarantees and empirical results are not convincing.\", \"review\": \"This paper presents a method for adaptively tuning the learning rate in gradient descent methods. The authors consider the formulation of each gradient descent update as a quadratic minimization problem and they propose adding a phi-divergence between the learning rate that would be used and an auxiliary vector. The authors also propose adding a maximization over all learning rates in the update.\\n\\nThe authors study an important problem and propose a novel method. The algorithms suggested by the authors are also relatively clear, and it is great that the paper presents both theoretical results as well as numerical experiments.\\n\\nOn the other hand, I didn't find the main idea of hyper-regularization to be well-justified. It is not clear why adding an additional regularization term for the learning rate makes sense, and it is even less clear why this should be presented as a maxmin problem. This can make the update step much more complicated and is probably why the authors also propose a simpler alternating optimization algorithm as an alternative. Unfortunately, the authors do not discuss how this alternating optimization problem relates to the original one, and the theoretical guarantees are only presented for the original algorithm. The authors also do not justify the choice of phi-divergence as the regularizer for the learning rate. The theoretical guarantees in the paper also do not suggest that the algorithm presented in the paper is better than existing state-of-the-art methods, even in specific situations (i.e. the regret bounds don't appear better than the AdaGrad regret bounds). Moreover, without tests for statistical significance, I also didn't find the experimental results sufficiently compelling.\", \"specific_comments_and_questions\": \"1) Page 3: Equation (4): The paper would be stronger if the authors motivated why the regularization should be posed as an outer maximization.\\n2) Page 3: \\\"we use the \\\\phi-divergence as our hyper-regularization\\\". Why is this a good choice of regularizer?\\n3) Page 3: \\\"only a few extra calculations are required for each step\\\". This is a misleading comment, because the maximization can be hard when phi is complicated, even if the problem splits across dimensions.\\n4) Page 4: \\\"The solution of problem (5) is the same as (7) in unconstrained case\\\". You should provide a reference for this statement as well as discuss the specific assumptions on the objective that allow you to arrive at this claim.\\n5) Page 4: \\\"while the solution of (7) is more difficult to get. Thus, we choose (5) as our basic problem\\\". 
This seems like a very bad motivation for choosing the maxmin formulation. For instance, the problem would be even simpler if you didn't include this extra phi-divergence at all.\\n6) Page 4: \\\"Although setting \\\\eta-t=\\\\beta_t is our main focus...\\\". Why is smoothness in the learning rate a good property? \\n7) Page 5: Equation (11). How do these iterates relate to the ones in equation (5) (e.g. when do they coincide, if ever)?\\n8) Page 5: \\\"influence the efficient of our algorithms.\\\" Grammatical error.\\n9) Page 6. \\\"our algorithms are robust to the choice of initial learning rates and do not rely on the Lipschitz constant or smoothness constant\\\". I'm not sure why this is a valuable property, since AdaGrad doesn't rely on these parameters either.\\n10) Page 6: Theorems 6 and 7. How do these results depend on alpha and \\\\beta_t? This paper would be much stronger if the bounds depend on \\\\phi more clearly and if the authors were able to show that there exist choices of phi that make this algorithm better than existing methods.\\n11) Page 6: Theorem 7: The dependence on G in the regret bound actually makes this worse than the AdaGrad regret bound.\\n12) Page 7: \\\"KL_devergence\\\". Typo.\\n13) Page 7: \\\"different update rules were compared in advance to select the specific one for any phi divergence in the following experiments.\\\" What does this mean exactly? How much of a difference does the choice of update rule make?\\n14) Page 7: \\\"growth clipping is applied to all algorithms in our framework\\\". Why is this necessary, and how does it affect the theoretical results?\\n15) Page 7-8: Figures 1, 2, and 3. It's hard to interpret the significance of these results without error bars.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Unclear formulation, and Benefit of approach is not Demonstrated\", \"review\": \"Summary:\\n%%%%%%%%%%%%%%%\\nThe paper explores ways to adapt the learning rate rule through a new minimax formulation.\\nThe authors provide regret bounds for their method in the online convex optimization setting.\", \"comments\": \"%%%%%%%%%%%%%%%\\n-I found the motivation of the approach to be very lacking.\\nConcretely, it is not clear at all why the minimax formulation even makes sense, and the authors do not explain this issue.\\n\\n-While the authors provide regret guarantees for their method, the theoretical analysis does not reflect when their approach is beneficial compared to standard adaptive methods. Concretely, their bounds compare with the well-known bounds of AdaGrad. \\nIt is nice that their approach enables extracting AdaGrad as a special case. But again, it is not clear what the benefit of their extension is.\\n\\n-Finally, the experiments illustrate almost no benefit of the new approach compared to standard adaptive methods.\\n\\n\\nSummary\\n%%%%%%%%%%%%%%%\\nThe paper suggests a different approach to adapt the learning rate.\\nUnfortunately, the reasoning behind the new approach is not very clear.\\nAlso, neither theory nor experiments illustrate the benefit of this new approach over standard methods.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
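The abstract above states that WNGrad can be rederived from the hyper-regularization framework. As a concrete reference point for the update rules debated in these reviews, here is a minimal sketch of that special case only (the WNGrad-style reciprocal update of Wu, Ward, and Bottou, 2018); the paper's general phi-divergence update is not reproduced here.

```python
import numpy as np

def wngrad_descent(grad, x0, beta0=1.0, steps=200):
    """Gradient descent with a self-tuning inverse learning rate beta.

    The step size is 1 / beta, and beta grows with the observed gradient
    norms, so no smoothness constant is needed in advance. This is the
    WNGrad special case that the paper says its framework recovers.
    """
    x, beta = np.asarray(x0, dtype=float), float(beta0)
    for _ in range(steps):
        g = grad(x)
        x = x - g / beta
        beta = beta + float(np.dot(g, g)) / beta
    return x, beta

# Toy usage: minimize f(x) = (x - 3)^2. As the "initial gradient problem"
# comment above notes, a very small beta0 can overshoot on the first
# step before beta catches up.
x_final, beta_final = wngrad_descent(lambda x: 2.0 * (x - 3.0), x0=np.zeros(1))
```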
HkNDsiC9KQ
Meta-Learning Update Rules for Unsupervised Representation Learning
[ "Luke Metz", "Niru Maheswaranathan", "Brian Cheung", "Jascha Sohl-Dickstein" ]
A major goal of unsupervised learning is to discover data representations that are useful for subsequent tasks, without access to supervised labels during training. Typically, this involves minimizing a surrogate objective, such as the negative log likelihood of a generative model, with the hope that representations useful for subsequent tasks will arise as a side effect. In this work, we propose instead to directly target later desired tasks by meta-learning an unsupervised learning rule which leads to representations useful for those tasks. Specifically, we target semi-supervised classification performance, and we meta-learn an algorithm -- an unsupervised weight update rule -- that produces representations useful for this task. Additionally, we constrain our unsupervised update rule to be a biologically-motivated, neuron-local function, which enables it to generalize to different neural network architectures, datasets, and data modalities. We show that the meta-learned update rule produces useful features and sometimes outperforms existing unsupervised learning techniques. We further show that the meta-learned unsupervised update rule generalizes to train networks with different widths, depths, and nonlinearities. It also generalizes to train on data with randomly permuted input dimensions and even generalizes from image datasets to a text task.
[ "Meta-learning", "unsupervised learning", "representation learning" ]
https://openreview.net/pdf?id=HkNDsiC9KQ
https://openreview.net/forum?id=HkNDsiC9KQ
ICLR.cc/2019/Conference
2019
{ "note_id": [ "BYmFTEBshj", "S1xX9W7lgN", "rJgRNO1Kp7", "HJeD-O1Yam", "r1eK0P1F67", "Bkgckkbah7", "SJeJvkj5hX", "r1eIOmju3m" ], "note_type": [ "comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1578643190867, 1544724874555, 1542154293932, 1542154239273, 1542154192590, 1541373665936, 1541218135062, 1541088110231 ], "note_signatures": [ [ "~Deepak_Yadav1" ], [ "ICLR.cc/2019/Conference/Paper632/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper632/Authors" ], [ "ICLR.cc/2019/Conference/Paper632/Authors" ], [ "ICLR.cc/2019/Conference/Paper632/Authors" ], [ "ICLR.cc/2019/Conference/Paper632/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper632/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper632/AnonReviewer2" ] ], "structured_content_str": [ "{\"comment\": \"Hi,\\nCongratulations on your work! \\n\\nI was going through your paper and on page 5, section 3.1 Base Model, it is written x0, x1, x2, ....... xL as post non-linearity activations and z1, z2, z3 .....zL as pre non linearity activations.\\n\\nI think it should be other way around. I am not confident on this, but I think this is a possible error on your part, or I might be interpreting things differently.\\n\\n@Authors: Please let me know your thoughts on this edit.\\n\\nThanks!\", \"title\": \"Great paper | Possible incorrect pairing of variables names\"}", "{\"metareview\": \"The reviewers all agree that the idea is interesting, the writing clear and the experiments sufficient.\\n\\nTo improve the paper, the authors should consider better discussing their meta-objective and some of the algorithmic choices.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Oral)\", \"title\": \"Well written paper with an interesting idea\"}", "{\"title\": \"Thank you\", \"comment\": \"Thank you for your thoughtful review! Comments below:\\n\\n\\\"The section 5.4 is a bit hard to understand, with very very small images.\\\"\\nWe apologize for the lack of clarity. We will improve this section and will increase the image size!\\n\\n\\\"cons only very modestly better than other methods. I would like to get a feel for why VAE is so good tbh (though the authors show that VAE has a problem with objective function mismatch).\\\"\\nIn generative modeling, understanding what design principles lead to reusable representations is a huge open field of study, but many people have promoted compositional generative models[1,2] and information theoretic measures of how well the model captures structure in the data [3,4]. VAEs possess both of these attributes.\\n\\n\\\"One comment: the update rule takes as inputs pre and post activity and a backpropagated error; it seems natural to also use the local gradient of the neuron's transfer function here, as many three or four factor learning rules do.\\\"\\nThis is a great suggestion! Thanks.\\n\\n[1]Yoshua Bengio, Aaron Courville, and Pascal Vincent. Representation learning: A review and new perspectives. 2013.\\n[2] Kingma, Diederik P., and Max Welling. \\\"Auto-encoding variational bayes.\\\" arXiv preprint arXiv:1312.6114 (2013).\\n[2]Hinton, Geoffrey E., et al. \\\"The\\\" wake-sleep\\\" algorithm for unsupervised neural networks.\\\" Science 268.5214 (1995): 1158-1161.\\n[3]Roweis, Sam T. \\\"EM algorithms for PCA and SPCA.\\\" Advances in neural information processing systems. 
1998.\"}", "{\"title\": \"Thank you\", \"comment\": \"Thank you for your thoughtful review! Comments below:\\n\\n\\\"Can you elaborate on what aspect of learning rules and why they can be transferable among different modalities and datasets?\\\"\\nThis is a hypothesis based on the observation that hand-designed learning rules transfer across modalities and datasets. We structure our learning rule in such a way as to aid this generalization. The specifics are largely inspired by biological neural networks--for instance the use of a neuron-local learning rule, and by the challenges associated with making meta-training stable--for instance, the use of normalization in almost every part of the system was found to be necessary to prevent meta-training from diverging. A better understanding of what aspects of learned learning rules transfer across datasets is a fascinating question and definitely something we are pursuing in future work.\\n\\n\\\"For this type of meta-learning to be successful, can you discuss the requirements on the type of meta-objectives?\\\"\\nIn general, the meta-objective has to be easily tractable and have a well-defined derivative with respect to the final layer (e.g. from backpropagation during meta-training). It should also reflect, as well as possible, performance on the eventual task. In our case, we wanted the base network to learn a representation in an unsupervised way which easily exposed class labels or other high-level attributes, so we chose our meta-objective to reward few-shot learning performance using the unsupervised representation. In early experiments, we explored a number of variations on our eventual meta-objective (e.g. clustering and softmax regression). We found similar performance for these variants, and chose the meta-objective we describe in the paper (least squares) because we believed it to be the simplest.\\n\\n\\\"Besides saving computational cost, does using smaller input dimensions favor your method over reconstruction type of semi-supervised learning, e.g. VAE?\\\"\\nWe only meta-train on the datasets with the smaller input size, but we test on both sizes (Figure 4). The VAE performance is comparable for the two input sizes, while the learned optimizer decreases in performance on MNIST and remains constant on Fashion MNIST.\\n\\n\\\"In the section \\\"generalizing over network architectures\\\", what is the corresponding supervised/VAE learning accuracy?\\\"\\nWe have not run these experiments, but we would expect the performance of the VAE to go up with increased model size.\\n\\n\\\"In the experimentation section, can you describe in more detail how input permutations are conducted? Are they re-sampled for each training session for tasks? If the input permutations are not conducted, will the comparison between this method, supervised and VAE be different?\\\"\\nThey are re-sampled for each new instantiation of an inner problem and kept constant while training that task. While we have not removed them, if we did we would expect the learned update rule to overfit to the meta-training distribution, causing improved performance on non-permuted image tasks, but extremely poor performance on permuted image tasks. Doing this would make comparisons to VAEs and supervised learning misleading, however, as these two methods have no notion of spatial locality (whereas the learned optimizer now would). As a result, the learned optimizer's relative performance would probably be a lot stronger. 
It would be very interesting in future work to use convnets for the base model--both for the learned update rule and the baselines. However, doing so would be a fairly involved process, requiring changes to the architecture of the learned update rule.\"}", "{\"title\": \"Thank you\", \"comment\": \"Thank you for your thoughtful review! Comments below:\\n\\n\\\"Motivations are not very clear in some parts. E.g., the reason for learning backward weights (V), and the choice of meta-objective.\\\"\\nOriginally, we did not learn backward weights, but in an effort to make the learning rule more biologically inspired we removed the transposed weights in favor of learned backward weights [1]. In practice, performance is surprisingly quite similar with both versions.\", \"as_per_meta_objective\": \"Exploring alternative meta-objectives would be very interesting! We chose the least squares meta-objective as it allows us to compute the optimal final layer weights in closed form. This is important in that it allows us to easily differentiate the meta-objective with respect to the representation at the final layer (necessary for meta-training). We have explored alternative few-shot classification objectives (e.g. logistic regression, using implicit differentiation to get the appropriate derivative) but found performance to be similar and thus stuck with the simpler meta-objective.\\n\\n\\\"Experimental evaluation is limited to few-shot classification, which is very close to the meta-learning objective used in this paper. \\\"\\nFor simplicity, we used the same meta-objective at evaluation time. The use of different meta-objectives (at both meta-train and meta-test) is also very interesting to us and is something we would pursue in future work.\\n\\n\\\"The result of text classification is interesting, but not so informative given no further analysis. E.g., why domain mismatch does not occur in this case?\\\"\\nDomain mismatch does occur--just later in meta-training. Because we are learning a learning rule, as opposed to features, we expect some generalization; after all, hand-designed learning rules generalize across datasets. We get some transfer performance early in meta-training, but the meta-objective on text tasks diverges later in training. We will add a few sentences to this effect. Better understanding out-of-domain generalization is definitely of interest to us and we are pursuing it in future work.\", \"paper_title\": \"This is a good point and we plan to change the paper title to: \\\"Meta-Learning Update Rules for Unsupervised Representation Learning\\\".\\n\\n\\n[1] Crick, F. The recent excitement about neural networks. Nature 337, 129\\u2013132 (1989).\"}", "{\"title\": \"an interesting approach to meta-learning, clear accept\", \"review\": \"This paper introduces a novel meta-learning approach to unsupervised representation learning where an update rule for a base model (i.e., an MLP) is meta-learned using a supervised meta-objective (i.e., a few-shot linear regression from the learned representation to classification GTs). Unlike previous approaches, it meta-learns an update rule by directly optimizing the utility of the unsupervised representation using the meta-objective. In the phase of unsupervised representation learning, the learned update rule is used for optimizing a base model without using any other base model objective. 
Experimental evaluations on few-shot classification demonstrate its generalization performance over different base architectures, datasets, and even domains.\", \"Novel and interesting formulation of meta-learning by learning an unsupervised update rule for representation learning.\", \"Technically sound, and well organized overall with details documented in appendices.\", \"Clearly written overall with helpful schematic illustrations and, in particular, a good survey of related work.\", \"Good generalization performance over different (larger and deeper) base models, activation functions, datasets, and even a different modality (text classification).\", \"Motivations are not very clear in some parts. E.g., the reason for learning backward weights (V), and the choice of meta-objective.\", \"Experimental evaluation is limited to few-shot classification, which is very close to the meta-learning objective used in this paper.\", \"The result of text classification is interesting, but not so informative given no further analysis. E.g., why domain mismatch does not occur in this case?\", \"I enjoyed reading this paper, and am happy to recommend it as a clear accept paper. The idea of meta-learning update networks looks like a promising direction worth exploring, indeed.\", \"I hope the authors will clarify the things I mentioned above. Experimental results are enough considering the space limit, but not great. Since the current evaluation task is quite similar to the meta-objective, evaluations on more diverse tasks would strengthen this paper.\", \"Finally, this paper aims at unsupervised representation learning, but it's not clear from the current title, which is somewhat misleading. I think that's quite an important feature of this paper, so I highly recommend the authors to consider a more informative title, e.g., `Learning Rules for Unsupervised Representation Learning' or else.\"], \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Novel idea of learning rules for unsupervised learning, need more theory/evidence on what/why meta objectives are sufficient for learning the unsupervised learning rules\", \"review\": \"This work brings a novel meta-learning approach that learns unsupervised learning rules for learning representations across different modalities, datasets, input permutations, and neural network architectures. The meta-objectives consist of few-shot learning scores from several supervised tasks. The idea of using meta-objectives to learn unsupervised representation learning is a very interesting one.\\n\\nThe authors mentioned that the creation of an unsupervised update rule is treated as a transfer learning problem, and this work is focused on learning a learning algorithm as opposed to structures of feature extractors. Can you elaborate on what aspect of learning rules and why they can be transferable among different modalities and datasets? For this type of meta-learning to be successful, can you discuss the requirements on the type of meta-objectives? Besides saving computational cost, does using smaller input dimensions favor your method over reconstruction type of semi-supervised learning, e.g. VAE?\\n\\nIn the section \\\"generalizing over datasets and domains\\\", the accuracies of the supervised methods and the VAE method are very close. 
This indicates those datasets may not be ideal for evaluating semi-supervised training.\\n\\nIn the section \\\"generalizing over network architectures\\\", what is the corresponding supervised/VAE learning accuracy?\\n\\nIn the experimentation section, can you describe in more detail how input permutations are conducted? Are they re-sampled for each training session for tasks? If the input permutations are not conducted, will the comparison between this method, supervised and VAE be different?\\n\\nAfter reviewing the author response, I adjusted the rating up to focus more on novelty and less on polished results.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"substantial step towards good unsupervised and local learning\", \"review\": \"The paper describes unsupervised learning as a meta-learning problem: the observation is that unsupervised learning rules are effectively supervised by the quality of the representations that they yield relative to subsequent semi-supervised (or RL) learning. The learning-to-learn algorithm allows for learning network architecture parameters, and also 'network-in-networks' that determine the unsupervised learning signal based on pre and post activations.\\n\\nQuality \\nThe proposed algorithm is well defined, and it is compared against relevant competing algorithms on relevant problems. \\nThe results show that the algorithm is competitive with other approaches like VAE (very slightly outperforms).\\n\\nClarity\\nThe paper is well written and clearly structured. The section 5.4 is a bit hard to understand, with very very small images. \\n\\nOriginality\\nThere is an extensive literature on meta-learning, which is expanded upon in Appendix A. The main innovation in this work is the parametric update rule for outer loop updates, which does have some similarity to the old work by Bengio in 1990 and 1992. \\n\\nSignificance\\n- pros clear and seemingly state-of-the-art results, intuitive approach, \\n-cons only very modestly better than other methods. I would like to get a feel for why VAE is so good tbh (though the authors show that VAE has a problem with objective function mismatch).\", \"one_comment\": \"the update rule takes as inputs pre and post activity and a backpropagated error; it seems natural to also use the local gradient of the neuron's transfer function here, as many three or four factor learning rules do.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}" ] }
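The closed-form least squares meta-objective defended in the rebuttals above (a few-shot linear readout from the unsupervised representation, solvable exactly and therefore differentiable through) can be sketched in a few lines. The ridge term lam is an assumption added for numerical stability, not a detail taken from the paper.

```python
import numpy as np

def least_squares_meta_objective(feats_tr, y_tr, feats_te, y_te, lam=1e-3):
    """Few-shot linear readout from a learned representation.

    The optimal readout weights are computed in closed form,
    W = (X^T X + lam I)^{-1} X^T Y, so the meta-objective (squared error
    on the held-out split) is differentiable with respect to the
    representation when written in an autodiff framework.
    """
    d = feats_tr.shape[1]
    gram = feats_tr.T @ feats_tr + lam * np.eye(d)
    w = np.linalg.solve(gram, feats_tr.T @ y_tr)
    return float(np.mean((feats_te @ w - y_te) ** 2))

# Toy usage with random "representations" and one-hot labels.
rng = np.random.default_rng(0)
f_tr, f_te = rng.normal(size=(10, 32)), rng.normal(size=(5, 32))
y_tr = np.eye(2)[rng.integers(0, 2, size=10)]
y_te = np.eye(2)[rng.integers(0, 2, size=5)]
print(least_squares_meta_objective(f_tr, y_tr, f_te, y_te))
```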
S1fDssA5Y7
Distributionally Robust Optimization Leads to Better Generalization: on SGD and Beyond
[ "Jikai Hou", "Kaixuan Huang", "Zhihua Zhang" ]
In this paper, we adopt distributionally robust optimization (DRO) (Ben-Tal et al., 2013) in the hope of achieving better generalization in deep learning tasks. We establish the generalization guarantees and analyze the localized Rademacher complexity for DRO, and conduct experiments to show that DRO obtains better performance. We reveal the profound connection between SGD and DRO, i.e., selecting a batch can be viewed as choosing a distribution over the training set. From this perspective, we prove that SGD is prone to escape from bad stationary points and that small-batch SGD outperforms large-batch SGD. We give an upper bound for the robust loss when SGD converges and remains stable. We propose a novel Weighted SGD (WSGD) algorithm framework, which assigns high-variance weights to the data of the current batch. We devise a practical implementation of WSGD that can directly optimize the robust loss. We test our algorithm on CIFAR-10 and CIFAR-100, and WSGD achieves significant improvements over the conventional SGD.
[ "distributionally robust optimization", "deep learning", "SGD", "learning theory" ]
https://openreview.net/pdf?id=S1fDssA5Y7
https://openreview.net/forum?id=S1fDssA5Y7
ICLR.cc/2019/Conference
2019
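One concrete instance of the robust loss this abstract refers to may help in reading the reviews below: if the weight set contains every weighting of the training points with per-point mass at most 1/k (the kind of set the reviewers discuss), the supremum has a closed form, namely the average of the k largest per-sample losses. This specific weight set is an illustrative assumption, not the paper's general P or its exact WSGD weighting G_1(q, r).

```python
import numpy as np

def robust_loss_capped_weights(per_sample_losses, k):
    """sup_p sum_i p_i * loss_i over {p : 0 <= p_i <= 1/k, sum_i p_i = 1}.

    The supremum places weight 1/k on each of the k largest losses, so
    the robust loss is simply their average; with k = m it reduces to
    the ordinary empirical risk, and smaller k up-weights hard examples,
    the behavior the WSGD framework is built around.
    """
    losses = np.sort(np.asarray(per_sample_losses, dtype=float))
    return float(losses[-k:].mean())

batch_losses = [0.1, 0.9, 0.3, 1.2]
print(robust_loss_capped_weights(batch_losses, k=4))  # empirical risk: 0.625
print(robust_loss_capped_weights(batch_losses, k=2))  # mean of {0.9, 1.2}
```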
{ "note_id": [ "H1evgS8QeV", "HJgjIKqF0Q", "SkxeTTP8Am", "S1gcqpwICm", "HylnGTP8AQ", "SkeZ7hDL07", "B1xf4Gbf0Q", "r1gREiBIa7", "SkxDmWLphX", "BJg0BCNW9Q", "HJlGy6ogqQ", "Skg4rIslqm", "Skgpnjql9X", "r1eMXmYl57", "SkgPhovg57", "B1lZOdyx5Q", "S1lr7gmJcX", "rJlmXAkAY7" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review", "comment", "official_comment", "comment", "official_comment", "comment", "official_comment", "comment", "official_comment", "comment" ], "note_created": [ 1544934639169, 1543248210598, 1543040439581, 1543040401649, 1543040275859, 1543040025165, 1542750761753, 1541983029889, 1541394718729, 1538506310293, 1538469081876, 1538467388243, 1538464692864, 1538458394246, 1538452398750, 1538418792757, 1538367516897, 1538289179490 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper630/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper630/Authors" ], [ "ICLR.cc/2019/Conference/Paper630/Authors" ], [ "ICLR.cc/2019/Conference/Paper630/Authors" ], [ "ICLR.cc/2019/Conference/Paper630/Authors" ], [ "ICLR.cc/2019/Conference/Paper630/Authors" ], [ "ICLR.cc/2019/Conference/Paper630/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper630/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper630/AnonReviewer1" ], [ "~Weihua_Hu1" ], [ "ICLR.cc/2019/Conference/Paper630/Authors" ], [ "~Weihua_Hu1" ], [ "ICLR.cc/2019/Conference/Paper630/Authors" ], [ "~Weihua_Hu1" ], [ "ICLR.cc/2019/Conference/Paper630/Authors" ], [ "~Weihua_Hu1" ], [ "ICLR.cc/2019/Conference/Paper630/Authors" ], [ "~Weihua_Hu1" ] ], "structured_content_str": [ "{\"metareview\": \"This paper received high quality reviews, which highlighted numerous issues with the paper. A common criticism was that the results in the paper seemed disconnected. Numerous technical concerns were raised. Reading the responses, it seems that some of these issues are nonissues, but it seems also that the writing was not sufficiently up to the standard required of this type of technical work. I suggest the authors produce a rewrite and resubmit to the next ML conference, taking the criticisms they've received here very seriously.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Needs rewrite\"}", "{\"title\": \"Response to Reviewer\", \"comment\": \"Thank you very much for your review and comments! We apologize for the writing and we will polish it. Follow-up work is proceeding and hopefully we will provide more connection of the theoretical results to the empirical result.\"}", "{\"title\": \"Response to Reviewer (Part 3/3)\", \"comment\": \"-Algorithm 1 (WSGD) needs to specify (q,r) and which G (G_1 or G_2) as inputs too.\\n-Most importantly, WSGD does not seem to be minimizing the robust risk at all. First, I'm not really sure what the G_1 variant does. If we were to follow the intuition of Theorem 1, we should be looking at the gradients, not the loss values. As for G_2, by sampling we are in fact replacing the sup over p with an average over P. This can have a significantly different behavior, and we could possibly interpret it as a slightly reduced effective batch size, especially in the case of G_2. In fact, in the experiments, when r is set to 0, this is exactly what is happening! 
At any rate, it is not clear at all how any of the earlier sections connect with sections 6 or 7.\\n-In the experimental section it is not clarified which of the latter two is used (I assume G_2, the randomized one, given the discussion at the end of Section 6.) \\n\\nWe admit that there is some abuse of notation. We have pointed out that in the algorithm section, G_1 is used. We proposed a framework of algorithms called WSGD. When we talk about WSGD(q,r), it refers in particular to WSGD equipped with G_1(q,r). All the theoretical analysis and experiments below are based on G_1! WSGD(q,r) calculates the stochastic gradient of a robust loss (see Appendix C), and obviously it minimizes the robust loss. So the effect of WSGD(q,r) is far beyond just reducing the batch size. We gave some discussion on G_2 at the end of Section 6 because it may work well in some tasks, which we did not pursue in this paper. We implore you to read Sections 6 and 7 again, because there appears to be some misunderstanding in the previous reading. \\n\\n\\n\\n-When the authors write \\\"accuracy improvement\\\", they should more clearly say \\\"relative decrease in misclassification error\\\". That's the only thing that makes sense with the numbers, and if it is, the authors in fact mistakenly say that the 5-15\\\\% improvement is for CIFAR 100 and the 5\\\\% is for CIFAR 10; it's the other way around!\\n\\nWe agree. We have corrected the mistake in the new version.\\n\\n\\n\\n-A better benchmark would have been to compare against various batch sizes, and somehow show that the results do *not* follow from batch size effects.\\n\\nAgain, we wish to emphasize that WSGD(q,r) is WSGD equipped with G_1, so the role of WSGD(q,r) is not just reducing the batch size. Furthermore, empirically speaking, the effect of batch size almost disappears when the batch size is below 128 (i.e., models trained with batch size 32, 64, or 128 always have the same performance). So we compared SGD and WSGD(q,r) with batch size 128 for variable control.\"}", "{\"title\": \"Response to Reviewer (Part 2/3)\", \"comment\": \"-This makes the question of what is the \\\"true\\\" robust risk unclear. It is tempting to simply say it is its expectation with respect to a generic sample. This is the view taken in Theorem 3, which offers a kind of generalization bound. But if one looks carefully at this, the location of the expectation and supremum should be swapped. Here is an alternative view: if we want to think of the infinite sample limit, then we need to have a sequence of robustness classes P_m that vary with m (say those that put weight only on a q-fraction of the samples, just like in the suggested WSGD). The \\\"true\\\" robust risk would be the limit of the sup of the empirical risk; this keeps the sup and expectation in the right order. Under the right conditions, this limit would indeed exist. And it is difficult to know, for a given m, how *far* the generic-sample expectation of Theorem 3 is from it. Without this knowledge, it is difficult to interpret Theorem 3 as a generalization bound.\\n-Theorem 5 gives a local Rademacher complexity. But again there is a conceptual step missing from this to strong generalization bounds, partly because we are not exactly minimizing the empirical risk within the considered class.\\n\\nYour confusion about Theorem 5 just provides the answer to your previous question. Theorem 3 doesn't aim to provide a bound on the generalization error R(\\\\theta) or the distributional robust loss R(\\\\theta, K) at all. 
It aims to provide guarantees that minimizing the empirical robust loss leads to a small expected robust loss w.r.t. a generic sample. This is fundamental to the analysis of the localized Rademacher complexity in Section 5.2, since the class \\\Theta_c we have considered requires that the expected robust loss be lower than c. The alternative view you provided is quite right, but it is not the topic we studied in this paper. We mainly focus on how the size of P influences the generalization when m is fixed. The behavior when P_m varies with m is meaningful, but it requires us to specify how P_m varies with m. However, we did not want to specify the shape of P in this paper.\n\nVarious generalization bounds on R(\\\theta) in terms of the empirical robust loss have been developed in previous works.\n\n\n\n-Also, the discussion that follows bounding the rad_\\\infty with |P|_\\\infty is deficient, because it misses again the fact there are two salient terms that need to balance out.\n\nNote that \\\epsilon in Theorem 5 can be chosen arbitrarily. If we choose \\\epsilon=O(1/m^k), it will just add a term of O(ln m) to the last term. We did not find the two salient terms you have mentioned. Can you specify them?\"}", "{\"title\": \"Response to Reviewer (Part 1/3)\", \"comment\": \"We thank the reviewer for their comments. However, there are some misunderstandings that need to be corrected.\n\n-The notion of distributional robust loss is sound, i.e. R(\\\theta, K). Its empirical variant is also good, \\\hat{R}(\\\theta, K). But the notion of robust loss defined in the paper, \\\hat R_{S,P}(\\\theta) with the weights on the samples, breaks from this notion. The reason is that in the former case the weights depend on the value of the sample (z) whereas in the latter they depend on the index (i). It is not evident how to map one to the other.\n\nFor a fixed sample S, we denote P(S, K)={(P_\\\lambda(z_1)/P_{\\\lambda_0}(z_1),...,P_\\\lambda(z_m)/P_{\\\lambda_0}(z_m)): \\\lambda \\\in K}, which maps K into a weight set. Since we only have access to the training data, and the underlying distribution is unknown, it's impractical to use R(\\\theta,K) or \\\hat{R}(\\\theta,K) to develop algorithms. So we seek some substitutes, and \\\hat R_{S,P}(\\\theta) is a good one. It seems to assign weights according to the indices, but that is not the case. We have assumed P is symmetric, and when the supremum is taken over all possible weights, it doesn't matter how you arrange the instances (z_i). \n\n\nIt's true that we assume the weights are independent of the value of the sample, so that it's convenient to analyze the generalization properties afterwards. \n\n\n\n-I'm not sure whether Theorem 2 exactly proves what the paragraph before it is trying to explain. The fact that SGD converges into the ball seems contrived, since the quantities that we are trying to bound have nothing to do with the optimization method. If the optimum is within the ball (+/- something) then the same result should hold with the step size replaced with the (+/- something). So how does this explain escaping stationary points?\n\nWhat we want to argue in Theorem 2 is that when SGD converges with a large step size, the robust loss can be bounded. This has nothing to do with escaping from saddle points.\n\n\n\n-For P that puts weight k over 1/k points, |P|_\\\infty = 1/k. RAD_2(P) is bounded by 1/\\\sqrt{k}, so it's negligible next to 1. 
But the covering number N can grow exponentially with k (when it's not too large, and for small \\\epsilon, just by counting arguments). So this seems to say that a good tradeoff in the bounds will lead to k having to be a growing fraction of m. This intuition, if true, is not presented. Not only that, but it also goes against the suggested approach of choosing some constant fraction of m.\n\nIt's helpful to notice that the set conv(P_k) decreases monotonically when k increases. We don't agree that the covering number N(\\\Theta(S,P)) grows exponentially with k; instead, the converse might be true. But in our view, analyzing the convergence rate of the upper bound is meaningless. We are in favour of DRO because it can achieve better results when data are limited and the model capacity is excessive.\"}", "{\"title\": \"Response to Reviewer\", \"comment\": \"Thank you for the review. Here are our responses to your specific comments:\n\n-One of the first issues to arise is that the definition of \"generalization error\" is not the one typically used in learning theory. Here the generalization error is used for what is more generally called the risk. Generalization error often refers to the difference R(theta) - ^R(theta) between the risk and the empirical risk (i.e., the risk evaluated against the empirical distribution). (Generally this quantity is positive, although sometimes the absolute value is bounded instead.) \n\nMuch of the machine learning literature refers to the term R(\\\theta) as the generalization error, e.g., Foundations of Machine Learning (Mohri et al.).\n\nIt's clear that when the class of models is fixed, small risk is equivalent to small excess risk.\n\n\n\n-The unbiased estimate suggested on page 2 is not strictly speaking an estimator because it depends on \\\lambda_0, which is not measurable with respect to the data. The definition of K and how it relates to the estimate \\\hat \\\lambda is vague. Then the robust loss is introduced where the unknown quantity is replaced by a pre-specified collection of weights. If these are pre-specified (and not data-dependent), then it is really not clear how these could be a surrogate for the distribution-dependent weights appearing in the empirical distributionally robust loss.\n\nWe admit that, strictly speaking, it's not an estimator, but this does not affect the idea. Actually the weight set P is data dependent. When the sample is fixed, we consider the robust loss with a pre-specified P. We replace the unknown quantity by a pre-specified collection of weights simply because K is unknown and we seek some surrogate (the set P) in the hope of covering those weights in \\\hat{R}(\\\theta,K). When the dataset changes, we need to choose another weight set. The weight set P can be viewed as a tuning parameter.\n\n\n\n-Perhaps this is all explained clearly in the literature introducing DRO, but this introduction leaves a lot to be desired.\n\nWe have carefully investigated other works introducing DRO, and many define the weight set P to be all the weights p that are \"close\" to the uniform distribution (p_0) under an f-divergence or \\\phi-divergence. Our approach is a natural generalization of theirs.\n\n\n\n-Theorem 2 seems to be far too coarse to explain anything. The step size is very small and so 1/eta^2 is massive. 
This will never be controlled by 1/mu, and so this term alone means that there is effectively no control on the robust loss in terms of the local minimum value of the empirical risk.\n\nThe term 1/eta^2 can be controlled by B^2. What we want to argue in Theorem 2 is that when SGD converges with a large step size, the robust loss can be bounded. So the step size is not very small actually, and the term (B/eta)^2 measures how well SGD converges. \n\n\n\n- There seems to be no argument that robustness leads to any improvement over nonrobust... at least I don't see why it must be true looking at the bounds. At best, an upper bound would be shown to be tighter than another upper bound, which is meaningless.\n\nThe bound in Theorem 5 shows how the size of P influences the local Rademacher complexity. From what we have seen so far, we are the first to establish such a bound, and no other upper bound exists at all.\n\n\n---------------\n-It seems strange to assume that the data distribution P is a member of the parametric model M. This goes against most of learning theory, which makes no assumption as to the data distribution, other than the examples being i.i.d.\n\nThis part just shows our motivation, and for simplicity and clarity we introduce a parametric family M. An equivalent argument without referring to M is also possible.\n\n\n\n-\"Conceivably, when m and c are fixed, increasing the size of P reduces the set \u0398c\". Conceivably? So it's not necessarily true? I don't understand the role of conceivably true statements in a paper.\n\nThe statement is true in the following sense: if P1 is a subset of P2, then \\\Theta_c(P2) is a subset of \\\Theta_c(P1). The result immediately follows from the definitions, so we omit the proof.\"}", "{\"title\": \"The paper needs major revisions, theorems are disjoint with few explanations.\", \"review\": \"The paper aims to connect \"distributionally robust optimization\" (DRO) with stochastic gradient descent. The paper purports to explain how SGD escapes from bad local optima and purports to use (local) Rademacher averages (actually, a generalization defined for the robust loss) to explain the generalization performance of SGD.\n\nIn fact, the paper proves a number of disjointed theorems and does very little to explain the implications of these theorems, if there are any. The theorem that purports to explain why SGD escapes bad local minima does not do this at all. Instead, it gives a very loose bound on the \"robust loss\" under some assumptions that actually rule out ReLU networks.\n\nThe Rademacher results for the robust loss looked promising, but there is zero analysis suggesting why these explain anything. Instead, there is vague conjecture. The same is true for the local Rademacher statements. It is not enough to prove a theorem. One must argue that it bears some relationship to empirical performance and this is COMPLETELY missing.\", \"other_criticisms\": \"1. One of the first issues to arise is that the definition of \"generalization error\" is not the one typically used in learning theory. Here generalization error is used for what is more generally called the risk. Generalization error often refers to the difference R(theta) - ^R(theta) between the risk and the empirical risk (i.e., the risk evaluated against the empirical distribution). (Generally this quantity is positive, although sometimes its absolute value is bounded instead.) 
\\n\\nAnother issue with the framing is that one is typically not interested in small risk in absolute terms, but instead small risk relative to the best risk available in some class (generally the same one that is being used as a source of classifiers). Thus one seeks small excess risk. I'm sure the authors are aware of these distinctions, but the slightly different nomenclature/terminology may sow some confusion.\\n\\n2. The unbiased estimate suggested on page 2 is not strictly speaking an estimator because it depends on \\\\lambda_0, which is not measurable with respect to the data. The definition of K and how it relates to the estimate \\\\hat \\\\lambda is vague. Then the robust loss is introduced where the unknown quantity is replaced by a pre-specified collection of weights. If these are pre-specified (and not data-dependent), then it is really not clear how these could be a surrogate for the distribution-dependent weights appearing in the empirical distributionally robust loss.\\n\\nPerhaps this is all explained clearly in the literature introducing DRO, but this introduction leaves a lot to be desired.\\n\\n3. \\\"This interpretation shows a profound connection between SGD and DRO.\\\" This connection does not seem profound to a reader at this stage of the paper.\\n\\n4. Theorem 2 seems to be far too coarse to explain anything. The step size is very small and so 1/eta^2 is massive. This will never be controlled by 1/mu, and so this term alone means that there is affectively no control on the robust loss in terms of the local minimum value of the empirical risk.\\n\\n5. There seems to be no argument that robustness leads to any improvement over nonrobust... at least I don't see why it must be true looking at the bounds. At best, an upper bound would be shown to be tighter than another upper bound, which is meaningless.\", \"corrections_and_typographical_errors\": \"1. There are grammatical errors throughout the document. It needs to be given to a copy editor who is an expert in technical documents in English.\\n\\n2. \\\"The overwhelming capacity ... of data...\\\" does not make sense. The excessive complexity of the sentence has led to grammatical errors.\\n\\n3. The first reference to DRO deserves citation.\\n\\n4. It seems strange to assume that the data distribution P is a member of the parametric model M. This goes against most of learning theory, which makes no assumption as to the data distribution, other than the examples being i.i.d.\\n\\n5. You cite Keskar (2016) and Dinh (2017) around sharp minima. You seem to have missed Dziugaite and Roy (2017, UAI) and Neyshabur et al (NIPS 2017), both of which formalize flatness and give actual generalization bounds that side step the issue raised by Dinh.\\n\\n6. \\\"not too hard compared\\\" ... hard?\\n\\n7. Remove \\\"Then\\\" from \\\"Then the empirical robust Rademacher...\\\". Also removed \\\"defined as\\\" after \\\"is\\\".\\n\\n8. \\\"Denote ... as an\\\" should be \\\"Let ... denote the...\\\" or \\\"Denote by ... the upper ...\\\"\\n\\n9. \\\" the generalization of robust loss is not too difficult\\\" ... difficult? \\n\\n10. \\\"some sort of \\u201csolid,\\u201d \\\" solid?\\n\\n11. \\\"Conceivably, when m and c are fixed, increasing the size of P reduces the set \\u0398c\\\". Conceivably? So it's not necessarily true? 
I don't understand the role of conceivably true statements in a paper.\n\n[This review was requested late in the process due to another reviewer dropping out of the process.]\n\n[UPDATE] Authors' response to my questions did not change my opinion about the overall quality of the paper. Both theory and writing need a major revision.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Good motivation but shaky theory, disconnected algorithm, and weak experiments.\", \"review\": [\"This paper motivates performing a \u201crobustified\u201d version of SGD. It attempts to formalize this notion, proves some variants of generalization bounds, and proposes an algorithm that claims to implement such a modified SGD.\", \"The clarity of the paper can be improved. There are several notational and language ambiguities throughout. Most importantly, a couple of paragraphs that are meant to convey key intuitions about the results are very badly written (the one following Theorem 1, the one preceding Theorem 2, the last one in Section 5.1, and the one preceding Theorem 5; more on these later).\", \"Apart from these clarity issues, the significance of the results is weak. This is because although the technical statements seem correct, the leap from them to measurable outcomes (such as actual generalization bounds) is missing. Part of this is due to a lack of a good notion of \u201ctrue\u201d robust risk. Moreover the algorithmic contribution does not connect well with the suggested theory and the experimental results are modest at best. Here is a more detailed breakdown of my objections.\", \"The notion of distributional robust loss is sound, i.e. R(\\\theta, K). Its empirical variant is also good, \\\hat{R}(\\\theta, K). But the notion of robust loss defined in the paper, \\\hat R_{S,P}(\\\theta) with the weights on the samples, breaks from this notion. The reason is that in the former case the weights depend on the value of the sample (z) whereas in the latter they depend on the index (i). It is not evident how to map one to the other.\", \"This makes the question of what is the \u201ctrue\u201d robust risk unclear. It is tempting to simply say it is its expectation with respect to a generic sample. This is the view taken in Theorem 3, which offers a kind of generalization bound. But if one looks carefully at this, the location of the expectation and supremum should be swapped. Here is an alternative view: if we want to think of the infinite sample limit, then we need to have a sequence of robustness classes P_m that vary with m (say those that put weight only on a q-fraction of the samples, just like in the suggested WSGD). The \u201ctrue\u201d robust risk would be the limit of the sup of the empirical risk; this keeps the sup and expectation in the right order. Under the right conditions, this limit would indeed exist. And it is difficult to know, for a given m, how *far* the generic-sample expectation of Theorem 3 is from it. Without this knowledge, it is difficult to interpret Theorem 3 as a generalization bound.\", \"Theorem 1 itself is a standard result. The discussion after Theorem 1 is the kind of argument that also explains Fisher information, and can be presented more clearly. I'm not sure whether Theorem 2 exactly proves what the paragraph before it is trying to explain. 
The fact that SGD converges into the ball seems contrived, since the quantities that we are trying to bound have nothing to do with the optimization method. If the optimum is within the ball (+/- something) then the same result should hold with the step size replaced with the (+/- something). So how does this explain escaping stationary points?\", \"If we accept Theorem 3 as a generalization bound, alongside the Rademacher bounds of Theorem 4, I don\u2019t think the paper treats the balance between the various terms adequately. In particular we see that the |P|_\\\infty term in Theorem 3 has to balance out the (robust) Rademacher bound, and we need it to be of the order of (1+RAD_2(P)\\\sqrt{\\\log N}/m). For P that puts weight k over 1/k points, |P|_\\\infty = 1/k. RAD_2(P) is bounded by 1/\\\sqrt{k}, so it\u2019s negligible next to 1. But the covering number N can grow exponentially with k (when it\u2019s not too large, and for small \\\epsilon, just by counting arguments). So this seems to say that a good tradeoff in the bounds will lead to k having to be a growing fraction of m. This intuition, if true, is not presented. Not only that, but it also goes against the suggested approach of choosing some constant fraction of m.\", \"Theorem 5 gives a local Rademacher complexity. But again there is a conceptual step missing from this to strong generalization bounds, partly because we are not exactly minimizing the empirical risk within the considered class. Also, the discussion that follows bounding the rad_\\\infty with |P|_\\\infty is deficient, because it misses again the fact there are two salient terms that need to balance out.\", \"Algorithm 1 (WSGD) needs to specify (q,r) and which G (G_1 or G_2) as inputs too.\", \"Most importantly, WSGD does not seem to be minimizing the robust risk at all. First, I\u2019m not really sure what the G_1 variant does. If we were to follow the intuition of Theorem 1, we should be looking at the gradients, not the loss values. As for G_2, by sampling we are in fact replacing the sup over p with an average over P. This can have a significantly different behavior, and we could possibly interpret it as a slightly reduced effective batch size, especially in the case of G_2. In fact, in the experiments, when r is set to 0, this is exactly what is happening! At any rate, it is not clear at all how any of the earlier sections connect with sections 6 or 7.\", \"In the experimental section it is not clarified which of the latter two is used (I assume G_2, the randomized one, given the discussion in the end of Section 6.) When the authors write \u201caccuracy improvement\u201d, they should more clearly say \u201crelative decrease in misclassification error\u201d. That\u2019s the only thing that makes sense with the numbers, and if so, the authors in fact mistakenly say that the 5-15% improvement is for CIFAR 100 and the 5% is for CIFAR 10; it\u2019s the other way around! And the exception (least improvement) seems to be ResNet-34 on CIFAR-100 (not VGG-16, as they claim, unless the table is wrong.) All in all, these are all pretty weak results, albeit consistent. 
A better benchmark would have been to compare against various batch sizes, and somehow show that the results do *not* follow from batch size effects.\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Interesting connection between SGD and DRO, but writing and experiments need more clarity\", \"review\": \"This paper considers the connections between SGD and distributionally robust optimization. A connection between robust optimization and generalization has long been observed. Recently, this has been explored through the lens of distributionally robust optimization, e.g., in the papers of Namkoong and Duchi, but also many others, e.g., Farnia and Tse, etc. Primarily, this paper appears to build off the work of Namkoong. \n\nThe key connection this paper tries to make is between SGD and DRO, since SGD, in sampling a minibatch, can be considered a small perturbation to the distribution. Therefore the authors use this intuition to propose a weighted version of SGD (WSGD) whereby high-variance weights are assigned to the mini-batch, thus making the training accomplish a higher level of distributional robustness. \n\nThis idea is tested on a few data sets including CIFAR-10 and -100. The results compare WSGD with SGD, and they show that the WSGD-trained models have a lower robust loss, and also have a higher (testing) accuracy. \n\nThis is an interesting paper. There has been much discussion of the role of batch size, and considering it from a different perspective seems to be of interest. But the connection of the empirical results to the theoretical results seems tenuous. It\u2019s not clear how the predictions of the theory match up. This would be useful to understand better. More generally, a simpler presentation of the key results would be useful, so as to allow the reader to better appreciate what the main claims are and whether they are as substantial as claimed. Overall the writing needs significant polishing, though this is only at a local level, i.e., it doesn\u2019t obscure the flow of the paper.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"comment\": \"Thanks for your discussion!\nThe aspect of minima selector is something our paper did not consider; we basically considered the convex scenario and provided analyses for the global solution of DRO. \n\nI think it would be extremely helpful to have some discussion of our paper and make the distinction clear. Thanks!\", \"title\": \"Thanks!\"}", "{\"title\": \"Reply\", \"comment\": \"We sincerely thank you for your discussion. We haven't provided the training curves in our paper because of the paper length. Empirically speaking, WSGD doesn't markedly speed up the convergence, but it achieves a lower robust loss eventually (we use the cross-entropy loss).\n\nThe point we want to emphasize in this paper is that an optimization algorithm in deep learning is not only an optimizer but also a minima selector. We propose WSGD because it has better performance in minima selection.\n\nIn the end, we wish to express our thanks for your patient discussion again!\"}", "{\"comment\": \"I see your points. Thanks for your discussion!\n\nI think it would be useful to have training curves in your paper, comparing SGD and WSGD. 
Intuitively, WSGD may speed up the convergence, but SGD should also achieve 0 training loss eventually. If this is the case, it is mysterious to me why WSGD generalizes better than SGD. Maybe there is an inductive bias in WSGD?\n\nAlso, CIFAR is just a benchmark dataset; that's why it is clean.\", \"title\": \"Thanks!\"}", "{\"title\": \"Reply\", \"comment\": \"In fact, we just take advantage of the capacity of DNNs. Due to the strong capacity of DNNs, we can pay more attention to large surrogate losses without increasing the surrogate loss of other data too much.\n\nWe have clarified the necessity of doing DRO in our introduction section. I agree that there exist some outliers in the dataset. However, if you want to talk about outliers, you have to introduce extra domain knowledge. Suppose you have trained a model using the dataset; you can't say that the data with large loss (calculated by the model you trained) are outliers. They can be high-quality data which you didn't fit well, and the real outliers can have small loss. It is entirely possible that fitting the large-loss data regularizes the model and alleviates overfitting.\n\nThe last thing I want to say is the number of outliers. Strictly speaking, it depends on your measure. In our experiment, we found that about 0.01%-0.1% of the data in CIFAR are outliers. However, what we actually do by DRO is to pay more attention to the data with the top 10% or 50% (roughly speaking) loss, and the effect of outliers is negligible. Actually, what we do has achieved good performance.\"}", "{\"comment\": \"> By the way, the algorithm in our paper is not based on f-divergence, and the steeper loss may not be applicable to our algorithm.\nOur steepness result holds as long as the larger weights are assigned to larger losses. However, I am not sure what will happen if the mini-batch losses are reweighted, which is the focus of your paper.\n\n> The main idea of our paper is that if the sample is of high quality (no outliers), we should pay more attention to individual instances.\nI see your point, but I have to say that that is a very strict assumption considering the real-world applications.\n\n> But I think that large losses reflect a drawback of the model and thus we should pay more attention to those data.\nAs DNNs often overfit the training data (achieve 0 training error) eventually, why do we need to pay attention to large-loss data during training?\nEven when DNNs do not have enough capacity to overfit the training data, paying more attention to large surrogate losses will inevitably increase the surrogate loss of other data, which may increase the overall misclassification rate (measured by the 0-1 loss). What do you think about this point?\", \"title\": \"Reply\"}", "{\"title\": \"Reply\", \"comment\": \"I get your point. Actually we have different views on the data with large losses. The steeper loss function you constructed in your paper can be viewed as a loss function enlarging the gradient (and thus putting large weights) on data with large loss. You think that data with large loss may be outliers and can do harm to our model. But I think that large losses reflect a drawback of the model and thus we should pay more attention to those data. In fact, both of the views can be true in practice, but we can't distinguish them without extra domain knowledge. The main idea of our paper is that if the sample is of high quality (few outliers), we should pay more attention to individual instances. 
By the way, the algorithm in our paper is not based on f-divergence, and the steeper loss may not be applicable to our algorithm.\"}", "{\"comment\": \"For sure, I agree that ERM with a surrogate loss empirically gives a good classifier when tested with the 0-1 loss, which is also supported by the theory of classification-calibrated losses. What our paper shows is that DRO applied to classification gives (almost) the same classifier as the ERM (in terms of the 0-1 loss). Empirically, DRO can even be worse than the ERM because it puts large weights on large losses and thus can be extremely sensitive to outliers.\n\nI trust your empirical results as well as your theoretical results, but what I want to emphasize here is that there is a huge gap between \"small surrogate loss\" and \"small 0-1 loss (what we really care about)\", and we need to be especially careful about this gap when dealing with DRO applied to classification!\", \"title\": \"Reply\"}", "{\"title\": \"Thank you for the feedback\", \"comment\": \"Thank you for the feedback. We have carefully read your work before, but we think there is no conflict between your work and ours. In our understanding, your work compared ERM and DRO with the 0-1 loss and with a surrogate loss. However, in our paper we don\u2019t deal with the ERM loss at all.\n\nIt is true that in classification, the 0-1 loss is used for testing, which is different from a surrogate loss used for training. However, the problem you pointed out is also a problem for ERM. If we wrote an article \u201cDoes Empirical Risk Minimization Give Effective Classifier?\u201d, how would you respond to us? In classification, we normally think that optimizing an objective function with a surrogate loss such as cross-entropy will give a solution that performs well when tested with the 0-1 loss. Last but not least, our algorithm really performs better than normal SGD on classification tasks in our experiment.\"}", "{\"comment\": \"Our ICML 2018 paper \"Does Distributionally Robust Supervised Learning Give Robust Classifier?\" analyzed DRO applied to classification and showed several negative results about it. The key to showing our results is that in classification, the 0-1 loss is used for testing, which is different from a surrogate loss used for training.\", \"link\": \"http://proceedings.mlr.press/v80/hu18a/hu18a.pdf\n\nIt would be great to have a discussion of our paper as your empirical evaluation is on classification tasks.\", \"title\": \"DRO applied to classification\"}" ] }
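For concreteness, the WSGD(q, r) mechanism debated in the thread above (up-weighting the largest per-sample losses within each mini-batch before the gradient step) can be sketched as follows. This is a minimal illustrative reading, not the paper's exact G_1(q, r) definition: the function name, the top-q/weight-r rule, and the normalization are all assumptions made here for illustration.

```python
import torch

def wsgd_style_step(model, per_sample_loss_fn, x, y, optimizer, q=0.5, r=0.1):
    # One reweighted-SGD step in the spirit of the WSGD(q, r) discussion:
    # the q-fraction of mini-batch examples with the largest losses gets
    # weight 1, the rest gets weight r (r = 0 drops them entirely, which is
    # the "reduced effective batch size" case the reviewer mentions).
    losses = per_sample_loss_fn(model(x), y)           # shape: (batch,)
    k = max(1, int(q * losses.numel()))
    weights = torch.full_like(losses, r)
    _, top_idx = torch.topk(losses, k)
    weights[top_idx] = 1.0
    weights = weights / weights.sum()                  # normalize to a weight vector p
    robust_loss = (weights.detach() * losses).sum()    # weights treated as constants
    optimizer.zero_grad()
    robust_loss.backward()
    optimizer.step()
    return robust_loss.item()
```

Here per_sample_loss_fn is assumed to return one loss per example, e.g. torch.nn.CrossEntropyLoss(reduction='none'); note that because the weights depend on loss values rather than gradients, this sketch matches the G_1 reading questioned in the review above.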
ByeDojRcYQ
COLLABORATIVE MULTIAGENT REINFORCEMENT LEARNING IN HOMOGENEOUS SWARMS
[ "Arbaaz Khan", "Clark Zhang", "Vijay Kumar", "Alejandro Ribeiro" ]
A deep reinforcement learning solution is developed for a collaborative multiagent system. Individual agents choose actions in response to the state of the environment, their own state, and possibly partial information about the state of other agents. Actions are chosen to maximize a collaborative long-term discounted reward that encompasses the individual rewards collected by each agent. The paper focuses on developing a scalable approach that applies to large swarms of homogeneous agents. This is accomplished by forcing the policies of all agents to be the same, resulting in a constrained formulation in which the experiences of each agent inform the learning process of the whole team, thereby enhancing the sample efficiency of the learning process. A projected coordinate policy gradient descent algorithm is derived to solve the constrained reinforcement learning problem. Experimental evaluations in collaborative navigation, a multi-predator-multi-prey game, and a multiagent survival game show marked improvements relative to methods that do not exploit the policy equivalence that naturally arises in homogeneous swarms.
[ "Reinforcement Learning", "Multi Agent", "policy gradient" ]
https://openreview.net/pdf?id=ByeDojRcYQ
https://openreview.net/forum?id=ByeDojRcYQ
ICLR.cc/2019/Conference
2019
{ "note_id": [ "HJlurooWl4", "SkeAXaEjhX", "r1x9C-Bq2X", "HkeINOFOn7" ], "note_type": [ "meta_review", "official_review", "official_review", "official_review" ], "note_created": [ 1544825663791, 1541258533960, 1541194194459, 1541081134456 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper629/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper629/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper629/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper629/AnonReviewer1" ] ], "structured_content_str": [ "{\"metareview\": [\"Pros:\", \"interesting novel formulation of policy learning in homogeneous swarms\", \"multi-stage learning process that trades off diversity and consistency (fig 1)\"], \"cons\": [\"implausible mechanisms like averaging weights of multiple networks\", \"minor novelty\", \"missing ablations of which aspect is crucial\", \"dubious baseline results\", \"no rebuttal\", \"One reviewer out of three would have accepted the paper, the other two have major concerns. Unfortunately the authors did not revise the paper or engage with the reviewers to clear up these points, so as it stand the paper should be rejected.\"], \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Meta-review\"}", "{\"title\": \"An interesting idea, with some good theoretical justification and interesting evaluations. There could be more precise details (particularly on experiments) and the approach used to combine gradients could be better related to prior work.\", \"review\": \"## Summary\\nThe authors present an approach to training collaborative swarms of agents based around giving all agents identical (or near identical) policies. The training regime involves individual agents rolling out trajectories based on slight perturbations of an agent of focus keeping the policy of other agents fixed. This is repeated for each agent, then these trajectories are used to batch update the joint policy with an average gradient.\\n\\nOn the whole, I think the paper is well written and the idea novel. There are places where the explanations could be clearer and details more explicit (see below for examples). There are some interesting evaluations but I am not sure these are as rigourous as they could be, in particular (but not limited to) the survival game. I am, however, recommending this for acceptance as on balance the positives outweigh the negatives.\\n\\n## More detailed comments\\nThe authors could make it a bit clearer what existing work on averaging policy gradients exist, and whether their approach is a natural extention of these existing approaches to their swarm domain, or whether there is additional novelty there. It is unclear to me which is the case. They talk about meta-learning in the related work but it is unclear precisely how they relate this to their own work.\\n\\nThe authors could describe their experiments a little more explicitly. For instance, they say that agents are penalised for getting too close in the navigation task, but do not say how this penalty is constructed. Is it a step function based on distance or something else? Also, they should state what parameters they use for each of the environmental factors, e..g minimum distance etc.\\n\\nThe survival game is poorly described, as are choices for the evaluation of it. I realise these games are designed elsewhere, but if the exact same parameters are used as in the original papers then this should be stated. 
Finally, I am a little unclear why the survival game cannot be compared with other algorithms, even if those algorithms fail to learn anything. I realise that the algorithms with decentralised actors won't scale here, but something like the mean-field approaches described by the authors in the related work, or even less sophisticated algorithms using some (but not all) features of their own approach, would show something interesting. And the choices of 225 and 630 agents need better justification.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"This paper addresses MARL in the case of having many homogeneous agents with a shared policy. Some concerns about the validity of projection.\", \"review\": \"The introduction of the paper is well-written and the authors quite clearly explain the purpose; however, I believe that the notation should be revisited to further simplify it. The algorithm is pretty similar to the A2C algorithm (very minor differences) and overall, I don't see the contribution of the paper as significant enough. Also, there are a few other concerns that I summarize next:\n\n1) In example 1, just knowing the relative distance to all other agents is not equivalent to knowing the full state of the environment. This is because the angles with the other agents are important; i.e. you need to know (r, \\\theta) polar coordinates.\n\n2) I personally don't like the word \"constrained\" as used in this paper. Going back to the constrained RL literature, the purpose of constrained RL is, for example, not entering hazardous states. On first reading the paper, I thought that the constraints were referring to such cases, e.g. making sure that the agents never hit each other. But the concept of constraint used in this work is totally different and simply means copying the network weights.\n\n3) In section 3, using neural networks and averaging the weights does not make any sense. What does it mean to average the weights of several NNs? An NN is a nonlinear function approximator, and you cannot average weights. Based on your algorithm, I see that you aggregate the gradients, which is a correct approach. In fact, the projection step defined on page 5 is never used in your implementation I guess, because otherwise this algorithm will not work.\n\n4) The distributed model pretty much resembles the A2C algorithm where each agent can be considered as a thread. At every time, you only do a gradient step in one of the threads and for the rest, you use the central policy. This way, you stabilize the non-stationarity caused by concurrently learning policies. I do not see any major difference.\n\n5) What is the reason that you do not use the Critic?\n\n6) Having $\\\theta_n=\\\theta$ implies that $\\\pi_n = \\\pi$, but the other way around does not hold. Constraints of (8) are not equivalent to (5).\n\n7) Are you using different policies for different agents when using MADDPG or TRPO_Kitchensink? I think for a fair comparison, the agents should also share the policies in these algorithms too. It is very hard to believe that TRPO_Kitchensink and MADDPG learn almost nothing, or even learn in the reverse direction (Fig 3).\n\n8) I think that baseline comparisons for the case of having a small number of agents are necessary.\", \"minor\": [\"In section 3.1, the notations are over-populated. 
I would suggest simplifying the notation.\", \"In (4), (5), (6), argmax_\\\pi\", \"(16) is simply the sum of the gradients of two consecutive policy gradient steps, which can be derived by (sum of grad = grad of sum). You might add this as an intuition behind this formula.\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Interesting paper, some concerns with formalism and objective. Missing baselines.\", \"review\": \"This paper proposes a distributed policy gradient method for learning policies with large, collaborative, homogeneous swarms of agents.\n\nFormalism / objective: \nThe setting is introduced as a \"collaborative Markov team\", so the objective is to maximise the total team reward, as expressed in equation (3). This definition of the objective seems inconsistent with the one provided at line (14): Here the objective is stated as maximising the agent's return, L_n, after [k] steps of the agent updating their parameters with respect to L_n, assuming all other agents are static. I think the clearest presentation of the paper is to think about the algorithm in terms of meta-learning, so I will call this part the 'inner loop' from now on. \nNote (14) is a very different objective: It is maximising the return of an agent optimising 'selfishly' for [k] steps, rather than the \"collaborative objective\" mentioned above. This seems to break with the entire premise of collaborative optimisation, as it was stated above.\", \"my_concern_is_that_this_also_is_reflected_in_the_experimental_results\": \"In the food gathering game, since killing other agents incurs \"a small negative reward\", it is never in the interest of the team to kill other team-mates. However, when the return of individual agents is maximised both in the inner loop and the outer loop, it is unsurprising that this kind of behaviour can emerge. Please let me know if I am missing something here.\", \"other_comments\": \"-The L_n(theta, theta_n) is defined and used inconsistently. E.g., compare line (9), L_n(theta_n, theta), with the line below, L_n(theta, theta_n). This is rather confusing. \n-In equation (10) please specify which function dependencies are assumed to be kept. My understanding is that \\\theta_n is treated as a function of theta including all the dependencies on the policies of other agents in the environment? \n-Related to the above, log( pi_\\\theta_n ( \\\tau_n)) in line 16 is a function of all agents' policies through the joint dependency on \\\theta. Doesn't that make this term extremely expensive to evaluate? \n-Why were the TRPO_kitchensink and A3C_kitchensink set up to operate on the minimum reward rather than the team reward as it is defined in the original objective? It is entirely possible that the minimum reward is much harder to optimise, since feedback will be sparse. \n-The survival game uses a discrete action space. I am entirely missing MARL baseline methods that are tailored to this setting, e.g. VDN, QMIX, COMA etc., to name a few. Even IQL has not been tried. Note that MADDPG assumes a continuous action space, with the Gumbel softmax being a common workaround for discrete action spaces, which has not been shown to be competitive compared to the algorithms mentioned above. 
\\n-Algorithmically the method looks a lot like \\\"Learning with Opponent Learning Awareness\\\", with the caveat that the return is optimised after one step of 'self-learning' by each agent rather than after a step of 'Opponent-learning'. Can you please elaborate on the similarity / difference? \\n-Equation (6) and C1 are presented as contributions. This is the standard objective that's commonly optimised in MARL when using parameter sharing across agents.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
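The "sum of grad = grad of sum" identity noted in the review of equation (16) above is the whole mechanism behind a shared-policy update for homogeneous agents: with one policy network shared by every agent, aggregating per-agent policy-gradient terms equals differentiating the summed team objective. Below is a minimal sketch of that parameter-sharing reading; the trajectory format and function name are hypothetical, not the paper's actual implementation.

```python
import torch

def shared_policy_update(optimizer, agent_trajectories):
    # REINFORCE-style update for a swarm sharing one policy network.
    # `agent_trajectories` is an assumed format: a list of (log_probs,
    # returns) pairs, one per agent, each a 1-D tensor of equal length.
    # Summing the per-agent terms gives the gradient of the summed
    # team objective ("sum of grad = grad of sum").
    loss = torch.zeros(())
    for log_probs, returns in agent_trajectories:
        loss = loss - (log_probs * returns.detach()).sum()  # per-agent PG loss
    loss = loss / len(agent_trajectories)                   # average over agents
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Aggregating gradients this way avoids the implausible step the reviewers object to (averaging the weights of several trained networks), since only one set of weights ever exists.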
ryxDjjCqtQ
Deconfounding Reinforcement Learning in Observational Settings
[ "Chaochao Lu", "José Miguel Hernández Lobato" ]
In this paper, we propose a general formulation to cope with a family of reinforcement learning tasks in observational settings, that is, learning good policies solely from the historical data produced by real environments with confounders (i.e., factors affecting both actions and rewards). Based on the proposed approach, we extend one representative reinforcement learning algorithm, the Actor-Critic method, to its deconfounding variant, and the extension is straightforward to apply to other algorithms. In addition, due to the lack of datasets in this direction, a benchmark is developed for deconfounding reinforcement learning algorithms by revising OpenAI Gym and MNIST. We demonstrate that the proposed algorithms are superior to traditional reinforcement learning algorithms in confounded environments. To the best of our knowledge, this is the first time that confounders have been taken into consideration for addressing full reinforcement learning problems.
[ "confounder", "causal inference", "reinforcement learning" ]
https://openreview.net/pdf?id=ryxDjjCqtQ
https://openreview.net/forum?id=ryxDjjCqtQ
ICLR.cc/2019/Conference
2019
{ "note_id": [ "rkx2ElI-lN", "Bkx0NwKtCX", "S1xZZDFFCX", "S1gEh7FYR7", "H1lrLmtKA7", "SJl8CZFK0Q", "Skeu0k3O0Q", "HJxY5yhdC7", "ryxNPyhd0X", "Bygl71nuC7", "SygU8Vhl0X", "SyeZ0H9h2Q", "S1gO5nK3nm", "HkguruuhnQ", "H1lpVE_3nQ", "r1x1gx_hhm", "ByxwiAPhhQ", "Byg9XiEs3X", "BJelvuEjnm", "Skluaf-shX", "BJxhQ0ejhX", "Hyx9C2Vqhm", "SJg7qLoY2X", "BJg3yE_t27", "r1xS8HyYn7", "SJe23dR_3X", "HyxkLV9DhQ", "rJlf2Fdv3m", "SJewqGpE37", "SyelGt3E3m", "SJgBBuGQ2m", "B1gSHMy7nX", "rygqNuS7jX", "SygNdtNXsQ", "rJeq_vV7sm", "H1lyWAJXoX", "HyxA4Dozom", "Byx_yQIbim", "Bkg9HFNWsX", "rJxsAvR1jX", "BylZvATko7" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "comment", "official_comment", "comment", "official_comment", "comment", "official_comment", "official_comment", "official_review", "official_review", "comment", "comment", "official_review", "official_comment", "comment", "official_comment", "comment", "official_comment", "official_comment", "official_comment", "comment", "official_comment", "comment", "comment", "official_comment", "official_comment", "comment", "comment", "official_comment", "comment", "official_comment", "comment" ], "note_created": [ 1544802356103, 1543243574149, 1543243512942, 1543242668126, 1543242572738, 1543242189932, 1543188431996, 1543188368938, 1543188315524, 1543188247943, 1542665293969, 1541346760783, 1541344400486, 1541339199552, 1541338164882, 1541337062926, 1541336735513, 1541258018107, 1541257303957, 1541243584233, 1541242403840, 1541192913551, 1541154442811, 1541141475715, 1541104972569, 1541101748127, 1541018695406, 1541011882169, 1540833934634, 1540831496045, 1540724797144, 1540710973278, 1539688497820, 1539684716458, 1539684209616, 1539665399369, 1539647286260, 1539560160039, 1539553602269, 1539463122580, 1539460697386 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper628/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper628/Authors" ], [ "ICLR.cc/2019/Conference/Paper628/Authors" ], [ "ICLR.cc/2019/Conference/Paper628/Authors" ], [ "ICLR.cc/2019/Conference/Paper628/Authors" ], [ "ICLR.cc/2019/Conference/Paper628/Authors" ], [ "ICLR.cc/2019/Conference/Paper628/Authors" ], [ "ICLR.cc/2019/Conference/Paper628/Authors" ], [ "ICLR.cc/2019/Conference/Paper628/Authors" ], [ "ICLR.cc/2019/Conference/Paper628/Authors" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper628/Authors" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper628/Authors" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper628/Authors" ], [ "ICLR.cc/2019/Conference/Paper628/Authors" ], [ "ICLR.cc/2019/Conference/Paper628/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper628/AnonReviewer2" ], [ "(anonymous)" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper628/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper628/Authors" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper628/Authors" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper628/Authors" ], [ "ICLR.cc/2019/Conference/Paper628/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper628/Authors" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper628/Authors" ], [ "(anonymous)" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper628/Authors" ], [ "ICLR.cc/2019/Conference/Paper628/Authors" ], [ "(anonymous)" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper628/Authors" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper628/Authors" ], [ "(anonymous)" ] ], 
"structured_content_str": [ "{\"metareview\": \"The paper studies RL based on data with confounders, where the confounders can affect both rewards and actions. The setting is relevant in many problems and can have much potential. This work is an interesting and useful attempt. However, reviewers raised many questions regarding the problem setup and its comparison to related areas like causal inference. While the author response provided further helpful details, the questions remained among the reviewers. Therefore, the paper is not recommended for acceptance in its current stage; more work is needed to better motivate the setting and clarify its relation to other areas.\\n\\nFurthermore, the paper should probably discuss its relation to (1) partially observable MDP; and (2) off-policy RL.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Interesting work, with unclear motivation and relation to previous work\"}", "{\"title\": \"More Detailed Rebuttal about Specific Comments\", \"comment\": \"Re (1): Refer to Section 2.3 and Section 2.4, where we describe more methods of adjusting for confounders.\\n \\nRe (2): The kidney stone example is used throughout the paper, referring to Section 1, Section 2.1, Section 2.2, Section 3.1, Footnote 2, Appendix F.2, and Appendix H.3.\\n \\nRe (3): Z, sampled using Equation (6), has to be used in reinforcement learning algorithms, because we need the state transition when generating trajectories/rollout. Refer to Section 4.1, Section 4.4, and Appendix E.\\n \\nRe (4): Refer to Appendix F.2 for an intuition of the difference, and to Section 4.4 in which, as shown in Figure 3(c), in each episode our deconfounding algorithm using p(r_{t+1}|z_t, do(a_t=a)) almost chooses the optimal action at each time step, whilst the vanilla algorithm using p(r_{t+1}|z_t, a_t) makes a wrong decision for more than half time.\\n \\nRe (5): Refer to Section 4.2 and Appendix H.3 for the details about how to define the confounding datasets in which the reward exactly depends on the action and the confounder. Also, a straightforward analogy of kidney stones to this confounding dataset is provided in Appendix H.3 as well.\\n \\nRe (6): Refer to Section 4.3.\\nActually, to demonstrate the validity of our deconfounding model, denoted by M_decon, we compare with the original model (i.e., the model similar to that shown in Figure 1(b) but without the confounder u), denoted by M_orin. We train M_decon by optimizing Equation (19) but train Morin by a little different loss function excluding the confounder u whose full derivation can be found in Appendix C. Both models are separately trained in a batch manner on the training set (i.e., 140K sequences of length five of images) of the confounding dataset. Afterwards, following the steps depicted in Section 4.1, we use each trained model to perform the reconstruction task on the training set, and both reconstruction and counterfactual reasoning tasks on the testing set (i.e., 28K sequences of length five of images). Figure 2 presents a comparison of M_decon and M_orin, in terms of reconstruction and counterfactual reasoning on the confounding Pendulum dataset. The second row is based on M_decon (Figure 1(b)), whilst the top row comes from Morin. It is evident that the results generated by the deconfounding model is superior to those produced by the model not taking into account the confounder. 
To be more specific, as shown in the zoom of samples on the bottom row, M_orin generates more blurry images than M_decon, because, without modelling the confounder u, M_orin is forced to average over its multiple latent states, resulting in more blurry samples. \n \nRe (7): Refer to Section 4.4.\nWe evaluate the proposed deconfounding actor-critic (AC) method by comparing with its vanilla version on the confounding Pendulum dataset. In the vanilla AC method, given a learned M_orin, we optimize the policy by calculating the gradient presented in Equation (21) on the basis of the trajectories/rollouts generated through M_orin. Equation (21) involves two functions: V(z_t; \u03c6_V) and \u03c0(a_t|z_t; \u03b8), whose parameters can be found in Appendix J. It is worth noting that, in this vanilla case, each reward r_{t+1} is produced from the conditional distribution p(r_{t+1}|z_t, a_t). In contrast, the proposed deconfounding AC method is built on M_decon. Although the same gradient method (Equation (21)) is utilized to optimize the policy, we base the deconfounding AC approach on the different trajectories/rollouts generated by M_decon, in which each reward r_{t+1} relies on the interventional distribution p(r_{t+1}|z_t, do(a_t)) computed using Equation (20).\n\nIn the training phase, for both vanilla AC and deconfounding AC, we run a respective experiment over 1500 episodes with 200 time steps each. In order to reduce non-stationarity and to decorrelate updates, the generated data is stored in an experience replay memory and then randomly sampled in a batch manner (Mnih et al., 2013; Riedmiller, 2005; Schulman et al., 2015; Van Hasselt et al., 2016). In each episode, we sum all the rewards and further average the sums over a window of 100 episodes to obtain a smoother curve. As shown in Figure 3(a), our deconfounding AC algorithm clearly performs significantly better than the vanilla AC algorithm in the confounded environment.\n\nIn the testing phase, we first randomly select 100 samples from the testing set, each starting a new episode, and then use the learned policies to perform reasoning over 200 time steps as we did during training. From the resulting 100 episodes, we plot the total reward for each, shown in Figure 3(b), and compute the percentage of the optimal action T1 in each episode, presented in Figure 3(c). It is worth noting that Figure 3(c) tells us that in each episode our deconfounding AC almost always chooses the optimal action at each time step, whilst the vanilla AC makes a wrong decision more than half the time.\"}", "{\"title\": \"More Detailed Rebuttal about High-level Comments\", \"comment\": \"Regarding High-level Comments:\n \nRe (1): Refer to Abstract, Section 1, and especially to the last paragraph of Appendix A, in which we explained the reason why we consider adjusting only for the confounder u.\nNote that, in our model we have to differentiate the two types of confounders: the time-independent confounder u and the time-dependent confounders {z_t}(t=1,...,T), each playing a respective role in the model. The former, as a global confounder, will affect the whole course of treatment, and therefore should be adjusted for. In the example of kidney stones, the existence of the confounder (i.e., the size of stones) will lead to a wrong treatment if it is not adjusted for. 
In contrast, the time-varying confounders {z_t} act as states in RL, which, in principle, should not be adjusted for, because the goal in RL is to learn a good policy in which any action is indeed supposed to be conditional on a specific state. On the other hand, in terms of rewards, what an agent expects at each time step is exactly the immediate reward when taking a specific action at a specific state, without the need of adjusting for states. It is worth noting that the case with time-varying confounders {z_t} can be thought of as meeting a pseudo or weak causal sufficiency assumption under which the causal effects of actions on rewards will not be influenced by states at each time step (Zhang et al., 2017; 2015). This key difference motivates us to only adjust for the time-independent confounder u in this paper. In addition, as shown in Figure 1(b), in our case where a policy depending only on the confounder is applied to generating the data (Section 4.2), the arrow from z_t to a_t is not necessary, so that z_t can be viewed as not a confounder of a_t and r_{t+1}. This also provides another reason why we do not need to adjust for z_t.\n \nRe (2)-(3): Refer to Section 2.4, Section 3.3, and Appendix A for the solution to identification. The main idea is that we used proxy variables to help identify the causal effects of our model (Section 2.4 and Section 3.3). Besides, the causal parameters of our deconfounding model can be identified given the existence of multiple observed proxy variables (Appendix A).\n \nRe (4-5): Refer to the last paragraph of Appendix A for the difference between Z and u, and to Section 4.3 and Section 4.4 for both experiments with and without u. It is worth noting that, as shown in Figure 3(c), in each episode our deconfounding algorithm considering u almost always chooses the optimal action at each time step, whilst the vanilla algorithm not considering u makes a wrong decision more than half the time.\"}", "{\"title\": \"Batch-Norm\", \"comment\": \"Batch Norm is necessary; see the last page of the updated paper.\"}", "{\"title\": \"More Detailed Rebuttal\", \"comment\": \"Thanks for your comments. We have updated the paper and made it much clearer. We recommend you re-read the full updated paper for the refreshed content.\n\nRe Motivation and Identification: Refer to Section 1.\n In this paper, we propose a general formulation to cope with a family of reinforcement learning tasks in observational settings, that is, learning good policies solely from the historical data produced by real environments with confounders (i.e., the factors affecting both actions and rewards). Actually, in recent years, reinforcement learning (RL) has made great progress, spawning a large number of successful applications especially in terms of games (Silver et al., 2016; Mnih et al., 2013; OpenAI, 2018). Within this background, much attention has been devoted to the development of RL algorithms with the goal of improving treatment policies in healthcare (Gottesman et al., 2018). In fact, various RL algorithms have been proposed to infer better decision-making strategies for mechanical ventilation (Prasad et al., 2017), sepsis management (Raghu et al., 2017a;b), and treatment of schizophrenia (Shortreed et al., 2011). In healthcare, a common practice is to focus on the observational setting, because one does not wish to experiment with patients\u2019 lives without evidence that the proposed treatment strategy is better than the current practice (Gottesman et al., 2018). 
As pointed out in (Raghu et al., 2017a), even if in the observational setting, RL also has advantages over other machine learning algorithms especially in two situations: when the ground truth of a \\u201cgood\\u201d treatment strategy is unclear in medical literature (Marik, 2015), and when training examples do not represent optimal behavior. On the other hand, although causal inference (Pearl, 2009) has been ex- tensively explored and used in healthcare and medicine (Liu et al.; Soleimani et al., 2017; Schulam & Saria, 2017; Alaa et al., 2017; Alaa & van der Schaar, 2018; Atan et al., 2016), the efficient ap- proach to dealing with time-varying data is still unclear (Peters et al., 2017; Hernn & Robins, 2018). On the basis of the discussion above, in this paper we attempt to combine advantages on both sides to cope with an important family of RL problems in the observational setting, that is, learning good policies solely from the historical data produced by real environments with confounding bias.\", \"re_the_method\": \"Refer to Section 2.4, Section 3.3, and Appendix A. The main idea is that we used proxy variables to help identify causal effects of our model (Section 2.4 and Section 3.3). Besides, the causal parameters of our deconfounding model can be identified in the existence of multiple observed proxy variables (Appendix A).\", \"re_quibbles\": \"Refer to Section 3.1 and Section 3.4. We have to emphasize that the factorization assumption in terms of Gaussian is a widely used technique in machine learning communities. I agree that the real data is not made up of factorized Gaussians, but the assumption of factorized Gaussian is the first step and also the easiest way to deeply understand how the proposed approach works. It is not necessary to get ourselves lost in the complicated distributions, which is not beneficial to capturing some insights about the nature of the model. More importantly, even with such simplified assumption, experiments presented in Section 4.3 and Section 4.4 show that our model work much better. Especially as shown in Figure 3(c), in each episode our deconfounding algorithm almost chooses the optimal action at each time step, whilst the vanilla algorithm makes a wrong decision for more than half time.\", \"re_experiments\": \"Refer to the whole Section 4 and the corresponding appendices. Note that, in this new draft we include all the details about the experiments, especially about how to design a reasonable reward and Appendix H.3 also provides a straightforward analogy to help readers understand the design. As mentioned above, Figure 3(c) demonstrated that the confounder u plays an extremely important role in reinforcement learning algorithms, because in that experiment, in each episode our deconfounding RL algorithm almost chooses the optimal action at each time step, whilst the vanilla RL algorithm makes a wrong decision for more than half time.\"}", "{\"title\": \"More Detailed Rebuttal\", \"comment\": \"Thanks for your comments. We have updated the paper and please refer to Section 2.4, Section 3.3, and Appendix A of the new version for the solution to identification. The main idea is that we used proxy variables to help identify causal effects of our model (Section 2.4 and Section 3.3). Besides, the causal parameters of our deconfounding model can be identified in the existence of multiple observed proxy variables (Appendix A).\"}", "{\"title\": \"To all reviewers\", \"comment\": \"Thanks to all the reviewers. 
In the updated paper, we have addressed all the issues the reviewers are concerned about. If you have more questions, please feel free to contact us.\"}", "{\"title\": \"Rebuttal\", \"comment\": \"Thanks for your comments. We have updated the paper and made the paper much clearer. We recommend you re-read the full updated paper to get some refreshed stuff.\", \"re_motivation\": \"Refer to Section 1.\", \"re_the_method\": \"Refer to Section 2.4, Section 3.3, and Appendix A.\", \"re_quibbles\": \"Refer to Section 3.1 and Section 3.4. We have to emphasize that the factorization assumption in terms of Gaussian is a widely used technique in machine learning communities. I agree that the real data is not made up of factorized Gaussians, but the assumption of factorized Gaussian is the first step and also the easiest way to deeply understand how the proposed approach works. It is not necessary to get ourselves lost in the complicated distributions, which is not beneficial to capturing some insights about the nature of the model. More importantly, even with such simplified assumption, experiments presented in Section 4.3 and Section 4.4 show that our model work much better. Especially as shown in Figure 3(c), in each episode our deconfounding algorithm almost chooses the optimal action at each time step, whilst the vanilla algorithm makes a wrong decision for more than half time.\", \"re_experiments\": \"Refer to the whole Section 4 and the corresponding appendices. Note that, in this new draft we include all the details about the experiments, especially about how to design a reasonable reward and Appendix H.3 also provides a straightforward analogy to help readers understand the design. As mentioned above, Figure 3(c) demonstrated that the confounder u plays an extremely important role in reinforcement learning algorithms, because in that experiment, in each episode our deconfounding RL algorithm almost chooses the optimal action at each time step, whilst the vanilla RL algorithm makes a wrong decision for more than half time.\"}", "{\"title\": \"Rebuttal\", \"comment\": \"Thanks for your comment. We have updated the paper and solved all the issues you are concerned about. In the following, we will point out which part in our new draft answers your question, respectively. Nevertheless, we still recommend you re-read the full updated paper to get some refreshed stuff.\", \"regarding_high_level_comments\": \"Re (1): Refer to Abstract, Section 1, and especially to the last paragraph of Appendix A in which we explained the reason why we consider only adjusting for the confounder u.\\n\\nRe (2)-(3): Refer to Section 2.4, Section 3.3, and Appendix A for the solution to identification.\\n\\nRe (4-5): Refer to the last paragraph of Appendix A for the difference between Z and u, and to Section 4.3 and Section 4.4 for both experiments with and without u. 
It is worth noting that, as shown in Figure 3(c), in each episode our deconfounding algorithm considering u almost always chooses the optimal action at each time step, whilst the vanilla algorithm not considering u makes a wrong decision more than half the time.\"}", \"regarding_specific_comments\": \"Re (1): Refer to Section 2.3 and Section 2.4.\n\nRe (2): The kidney stone example is used throughout the paper, referring to Section 1, Section 2.1, Section 2.2, Section 3.1, Footnote 2, Appendix F.2, and Appendix H.3.\n\nRe (3): Z, sampled using Equation (6), has to be used in reinforcement learning algorithms, because we need the state transition when generating trajectories/rollouts. Refer to Section 4.1, Section 4.4, and Appendix E.\n\nRe (4): Refer to Appendix F.2 for an intuition of the difference, and to Section 4.4, in which, as shown in Figure 3(c), in each episode our deconfounding algorithm using p(r_{t+1}|z_t, do(a_t=a)) almost always chooses the optimal action at each time step, whilst the vanilla algorithm using p(r_{t+1}|z_t,a_t) makes a wrong decision more than half the time.\nRe (5): Refer to Section 4.2 and Appendix H.3, in which a straightforward analogy is provided as well.\nRe (6): Refer to Section 4.3.\nRe (7): Refer to Section 4.4.\"}", "{\"title\": \"Rebuttal\", \"comment\": \"Thanks for your comments. We have updated the paper; please refer to Section 2.4, Section 3.3, and Appendix A of the new version for the solution to identification.\"}", "{\"comment\": \"Pros:\nWe were thinking of a reward graph with and without \"u\" to see how it would make a difference during the variational inference step, but the actor-critic results show that whatever difference it makes while training the model, it certainly has an impact while learning in actor-critic. (+1 for paper update)\", \"issues\": \"While reproducing the experiments, we are getting NaN loss during training. We have clipped gradients to [-1, 1], Xavier-initialized the layers, Z-normalized actions/rewards, L2-regularized the dense layers, and used the same architecture. Are there any additional things to take care of?\", \"title\": \"NaN issue during training\"}", "{\"title\": \"Response\", \"comment\": \"In our case, as shown in Fig 5, the posterior of u has an obvious difference from its unit Gaussian prior, even though their KL loss converged. Therefore, sampling from q is better.\"}", "{\"comment\": \"The KL loss ensures the two distributions are close. Just draw the \"u\" values from p(u) instead of estimating x, a, r.\", \"title\": \"KL Divergence b/w q(u|x,a,r) and p(u) takes care of it\"}", "{\"title\": \"Response\", \"comment\": \"Re \"Why use q anyway?\"\nPlease keep in mind that u is a latent variable. q(u|x, a, r) is the posterior containing the information from the data, whilst p(u) is nothing but a prior. \n\nRe \"Why need to estimate x,a,r from z.\"\nBecause q(u|x, a, r) depends on (x, a, r), which are unknown during the testing phase.\"}", "{\"comment\": \"You have built your model \"p\". Use it to sample u values. Why use q anyway? Why the need to estimate x,a,r from z? That's a convoluted approach.\", \"title\": \"Why not just sample from p instead of q?\"}", "{\"title\": \"Response\", \"comment\": \"Yes, u is sampled from the model. In Equation (18), (x, a, r) in the posterior q(u|x, a, r) are estimated from the model rather than from the observations. 
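To make this estimation step concrete, a minimal sketch of how the Monte Carlo approximation in Equation (18) could look in code is given below; the model methods (`decode_x`, `decode_r`, `posterior_u`, `reward`) are hypothetical stand-ins for the corresponding networks, not the actual implementation:

```python
import torch

def interventional_reward(model, z, a, n_samples=400):
    """Monte Carlo estimate of p(r | z, do(a)), cf. Equation (18).

    At test time the true (x, a, r) are unobserved, so proxies are first
    reconstructed from the model itself; u is then drawn from the
    approximate posterior and averaged out to adjust for the confounder.
    """
    x_hat = model.decode_x(z).mean          # hypothetical decoder for p(x | z)
    r_hat = model.decode_r(z, a).mean       # hypothetical decoder for p(r | z, a)

    samples = []
    for _ in range(n_samples):
        u = model.posterior_u(x_hat, a, r_hat).rsample()  # u ~ q(u | x, a, r)
        samples.append(model.reward(z, a, u).mean)        # E[r | z, a, u]
    return torch.stack(samples).mean(dim=0)               # average over u
```

The `n_samples` argument corresponds to the N discussed elsewhere in this thread (N = 400 in the experiments). 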
We will use different notations in the updated version.\"}", "{\"title\": \"Response\", \"comment\": \"You can think of it that way, because all Gaussians in the paper are assumed to have diagonal covariance matrices.\"}", "{\"title\": \"interesting problem\", \"review\": \"I have read the discussion from the authors. My evaluation stays the same.\n--------\nThis paper studies an interesting question: how to learn causal effects from observational data generated by reinforcement learning. The authors work with a very challenging setting where an unobserved confounder exists at each time step that affects the actions, the rewards, and the confounder at the next time step.\n\nThe authors fit latent variable models to the observational data and perform experiments.\n\nThe major concern is on the causal inference side, where it is not easy to claim anything causal in such a complicated system with unobserved confounders. Causal inference with unobserved confounders cannot simply be solved by fitting a latent variable model. There exist negative examples even in the simplest setting, where two distinct causal structures can lead to the same observational distribution; see, for example: https://www.alexdamour.com/blog/public/2018/05/18/non-identification-in-latent-confounder-models/\n\nIt could be helpful if the authors laid out the identification assumptions for causal effects before claiming anything causal and justifying the experimental results.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Strong and important idea - presentation and execution can be improved\", \"review\": \"The paper addresses an important and often overlooked issue in off-policy reinforcement learning - the possibility of confounding between the agent's actions and the rewards. This is a subject which has been exhaustively explored in the causal inference literature, and the authors are very correct in suggesting that it should be incorporated into the world of reinforcement learning. Specifically, they propose a generative model with a global latent confounder that is inferred using a variational autoencoder architecture.\n\nThe paper is generally well-written, though some points could be made clearer in my opinion, as detailed below. The experiments are constructed by introducing confounding into existing datasets; performance seems to be good, but I am not entirely sure whether the given architecture is necessary, see comments below.\n\nHigh-level comments:\n\n(1) Classic RL deals with confounders all the time. The state is a confounder between the action and the reward. The issue of confounding becomes less trivial when one is performing off-policy RL where the original policy is *unknown*. This is exactly the case that the authors mention when they cite the recent work by Gottesman et al. (2018), who deal with using RL to learn from the actions of physicians in a hospital. While I am sure the authors are aware of these distinctions, I think the paper would be better if this is spelled out very explicitly. This includes explaining why this issue doesn't come up in classic RL.\n\n(2) Assuming the case above - off-policy RL with unknown confounders - one would usually assume \"no unmeasured confounding\", i.e. that the observed actions are an unknown but learnable function of the observed states. 
That is basically the scenario of most off-policy RL.\\n\\n(3) However, the authors strive to go one step beyond the case (2), to a situation where there is an *unmeasured* confounder affecting both observed actions and rewards. If nothing is known about this unmeasured confounder, then it is generally impossible to learn effective policies, as the causal effects of actions are not identifiable from the observed data. In this paper, the authors make an implicit assumption that while the confounder is unmeasured, it can still be inferred from the data. This is an intermediate step between \\\"no unmeasured confounding\\\" and \\\"complete unmeasured confounding\\\". This is related to work on using proxy variables e.g. Kuroki & Pearl (2014) and even more closely related to the work cited by Louizos et al. (2017).\\nAgain, I think the paper would be much improved if all this is addressed explicitly. \\n\\n(4) An important consequence of point (3) above is that in fact adding the single global latent-confounder U is not, in itself, very important from a causal perspective. The sequence of variables Z_1... Z_T are already latent confounders that are assumed to be inferrable from data. It is true that the addition of the global U might change the statistical and optimization properties of the model. This leads to a very important conclusion: the authors should test their model with and without U. I think this specific ablation experiment is crucial. In many cases I am sure that the assumption of a global latent confounder is a good one and is especially useful in the VAE case where it will make optimization more stable. However, in principle, all of U's roles could be taken within the sequence of Z's, and I am curious to see in practice how big of an effect it has.\\n\\n(5) I wish to add that even if the U variable turns out to not add much empirically, this work is still valid since the sequence of Z's can themselves be considered inferred latent confounders.\", \"specific_comments\": \"(1) 2.3: there are more than 2 ways of computing the do-operator. RCTs and backdoor are the best known approaches, but not the only ones, e.g. there is frontdoor adjustment. \\n\\n(2) I think the paper would be easier to follow if there was one concrete example used throughout. This will make it easier to understand and possibly verify/criticize the assumptions of the generative model.\\n\\n(3) Related to \\\"higher-level point (4)\\\" above, in eqs. 17 & 18 note that Z_t is unknown, same as U. Both are inferred. This also leads to the question which Z_t is actually used in practice? Is it the mean, or is it also sampled from the approximate posterior q?\\n\\n(4) Below eq. 19, it would be very useful for the readers if you could explain exactly when would there be a difference between the two versions p(r_{t+1}|z_t,a_t) and p(r_{t+1}|z_t, do(a_t=a))\\n\\n(5) In the description of all the experiments I was missing a crucial point: how does the introduced confounder affect the reward? Is it only through the different actions? The way it is currently explained, it seems like the added variable introduces lack of *overlap*, but not strictly confounding.\\n\\n(6) The description of the experiment in 4.3 could be more detailed. What exactly was the training and test? What RL method was used? What did the baseline optimize for? I would like to see an ablation experiment where U is not included in the model. \\n\\n(7) In 4.5, what is the \\\"vanilla\\\" method? 
And as mentioned above, I would like to see an ablation experiment where U is not included in the model.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"comment\": \"Shouldn't u be sampled from the model, because the Deconfounding Q-learning pipeline starts after the model is built, which doesn't have access to the observations (x,a,r)?\", \"title\": \"Sampling u in Eq 18.\"}", "{\"comment\": \"While calculating p(x|z), are you assuming that all the features (784) are independent? As X is 784-dimensional, calculating p(x|z) will give numerical instability. How are you mitigating this?\", \"title\": \"Feature independence assumption in p(x|z)?\"}", "{\"title\": \"Setting doesn't make sense for RL and experiments don't evaluate causal questions\", \"review\": \"This paper presents a method for reinforcement learning (RL) in settings where the relationship between action and reward is confounded by a latent variable (unobserved confounder). While I firmly believe that RL would benefit from taking causality more seriously, this paper has many fatal flaws that make it not ready for publication.\n\nFirst, and most importantly, the paper is unclear about the problem it is trying to solve. It talks about confounded RL as being settings in which a confounder affects both the action and reward. In typical RL settings this wouldn't make sense: in RL you get to choose the policy, so it doesn't make sense to assume that the choice of action is confounded while you're doing RL. To get around this, the authors assume that they're working with observational data and doing RL on a generative model learnt from the observational data. But by doing this, they have assumed away the key advantage that RL has over causal inference: the ability to experiment in the world. The authors justify this assumption by considering high-stakes settings where experimentation is either too risky or too costly, but they don't explain why you would want to do RL at all when you could just do causal inference directly. If you can't experiment, RL offers no advantages over standard causal inference methods and brings serious disadvantages (sample-efficiency, computational cost, etc.). \n\n# Method\nThe authors learn a variational approximation to a particular graphical model that they assume for their RL setting. They then treat the variational approximation as the true distribution, which allows them to perform causal inference via the backdoor correction. They claim this is identified but this is false - it is only identified with respect to the variational distribution, not the true distribution, and we have no a priori reason to believe that the variational distribution well-approximates the true distribution. In principle, the authors could have tested how well this works experimentally, but their experimental setup has problems which prevent this from being evaluated.\n\nQuibbles:\n- Page 3: the authors claim the model is \u201cwithout loss of generality\u201d but this is false - there are many settings that would not conform to this model: e.g. the multi-agent settings that economics studies; health settings with placebo effects where reward depends on observations directly; etc.\n- Page 4 above the equations: either the equations describe the variational approximation to the generative model or the equations shouldn't all be factorized normal distributions. 
Real data isn\u2019t made up of factorized normals.\n\n# Experiments\nThe authors evaluate their method on three simulated datasets: Confounding MNIST, Confounding Cartpole and Confounding Pendulum. All three have the same methodological problems, so I\u2019ll only focus on the MNIST dataset. They synthesize their MNIST dataset by corrupting a subset of MNIST digits with noise and treating actions as rotations. Rewards are given by the absolute difference in angle between the rotated digit and the original unrotated digit. \u201cConfounding\u201d is added by having a binary latent variable affect the amount that the digit is rotated - but importantly, the reward isn\u2019t affected directly by the latent variable. Because of this, there isn\u2019t actually a confounding problem - the \u201cconfounder\u201d simply changes the rotation of the digit and can be treated as additional experimentation from the perspective of causal inference. The authors evaluate their method by examining reconstructions of the MNIST digit, but this simply checks how well the variational inference is working, not whether the causal inference is working (there would be no way to evaluate the latter on this dataset because there is no confounding). Effectively, all they find is that a better-designed variational distribution will do a better job of reconstructing the input (without modelling the latent u, the VAE is forced to average over its two states, resulting in more blurry samples).\n\nThe RL evaluations aren\u2019t described in enough detail to conclusively explain the difference observed, but it seems to be driven by the fact that the standard RL methods are working with worse variational approximation distributions.\n\n# Summary\nThis work studies a setting in which the correct baselines would be causal inference algorithms (but they aren\u2019t considered), and the experimental evaluation has serious flaws that prevent it from supporting the claims made in the paper.\", \"rating\": \"2: Strong rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Response\", \"comment\": \"Re 1. As described in Section 3.3.2, q(u|x, a, r) is modelled in the same way as q(z|x, a, r), each of which is parameterised by a bi-directional LSTM. However, unlike q(z|x, a, r), in which z is time-dependent (i.e., z_t corresponds to x_t, a_t, r_t at each time step), in q(u|x, a, r) u is independent of time steps, meaning that u combines all the information of the whole sequence.\n\nRe 2. Yes.\"}", "{\"comment\": \"1> Since u is time-independent, which particular mean and variance are used to calculate q(u|x,a,r)?\n\n2> Just to confirm: should q(a|x) and q(r|x,a) be added to the loss function?\", \"title\": \"Architecture for measuring q(u|x,a,r) is unclear\"}", "{\"title\": \"Response\", \"comment\": \"Exactly, we trained a separate architecture without u.\"}", "{\"comment\": \"To get the reconstructed images in the top row of Fig 3, are you training a separate architecture with a changed loss without u? Or is there a clever hack to avoid retraining?\", \"title\": \"MNIST Reconstruction without Confounder\"}", "{\"title\": \"Response\", \"comment\": \"Thank you for your comment.\n\nRe 1. We exactly followed the same setting as Healing MNIST in [1]. 
As [1] said, \u201cthe squares within the sequences are intended to be analogous to seasonal flu or other ailments that a patient could exhibit that are independent of the actions and which last several timesteps\u201d. We want to show that our model can learn long-range patterns, which plays an important role in medical applications. The squares are added after the rotation. \n\nRe 2. \u201cSome policy\u201d can be any policy, e.g., random rotations or rotations toward vertical. But in our case, considering the confounder, we used the policy where the action is affected by the confounder u. \n\nRe 3. Simply speaking, the confounder not only affects the action through the magnitude, 22.5 \u2264 |a| \u2264 45 or 0 \u2264 |a| < 22.5, but also affects the reward through the direction (i.e., clockwise or counterclockwise), a or -a, where a and -a will result in different rewards.\n\nWe will clarify these in the new draft. Thank you for your suggestion. \n\n[1] Rahul G Krishnan, Uri Shalit, and David Sontag. Deep Kalman filters. arXiv preprint arXiv:1511.05121, 2015.\"}", "{\"title\": \"Clarification on confounding MNIST\", \"comment\": \"1. What is the purpose of the \u201csequence of three consecutive squares (2 x 2 in pixels)\u201d? Are they added before or after the rotation?\n2. What is the \u201csome policy\u201d that is performed on the images? Random rotations? Or rotations toward vertical?\n3. Does the confounder, u, only affect the magnitude of rotation? i.e. the rotation given u = 1 is between 22.5 and 45, while given u = 0 it is between 0 and 22.5? As far as I can see, u doesn't affect the reward directly? Is that correct?\"}", "{\"title\": \"Just as described in the paper\", \"comment\": \"Dear reader,\n\nRe 1: Actually, the dimension of u can be any number. In the paper we let dim(u) = 2 because it is easy to visualise in the 2D plot, as shown in Figure 5.\n\nRe 2: 512 = 4 x 4 x 32, so it can be reshaped to a 4 x 4 square with 32 channels.\n\nWe hope this helps.\"}", "{\"comment\": \"Clarifications:\n1. How is the value of u chosen? Pg 14\n2. How is f1 resized after FC1(512)? The depth is not mentioned. Pg 15\", \"title\": \"FC1 of f1 and Dimension of u\"}", "{\"title\": \"Response\", \"comment\": \"Thank you for your comment.\n\nRe 1: Exactly.\n\nRe 2: By default, softplus is applied after each intermediate layer, and the explicitly stated activation functions are applied to the output of the final layer. Taking f5 and f6 as an example: f5 and f6 share the parameters of the first five FC100 layers, each followed by softplus, but have different parameters in their own final layer, i.e., FC1 has two outputs (the output of f5 and the output of f6), which are respectively followed by sigmoid and softplus.\n\nRe 3: In our architecture, each function modelled with a neural network has only one output: mu or sigma^2. Each pair of functions (e.g., f1 and f2, f5 and f6, etc.) shares the parameters of all layers except the final layer; that is, they have their own final layers, which are parallel outputs, each followed by a respective activation function. In the case of f5 and f6, as described in Re 2, FC1 has two outputs, each followed by sigmoid and softplus, representing mu (the output of f5) and sigma^2 (the output of f6), respectively. We will clarify this with a figure in the appendix of the new draft. 
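A minimal sketch of this shared-trunk, two-headed design, as we read the description above (layer names and class structure are illustrative, not the authors' code):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GaussianHead(nn.Module):
    """Shared trunk with two parallel output heads (cf. f5/f6)."""
    def __init__(self, in_dim, hidden=100, depth=5):
        super().__init__()
        dims = [in_dim] + [hidden] * depth
        self.trunk = nn.ModuleList(
            [nn.Linear(d_in, d_out) for d_in, d_out in zip(dims[:-1], dims[1:])])
        self.mu_head = nn.Linear(hidden, 1)    # final FC1 for f5
        self.var_head = nn.Linear(hidden, 1)   # final FC1 for f6

    def forward(self, h):
        for layer in self.trunk:
            h = F.softplus(layer(h))           # softplus after each intermediate layer
        mu = torch.sigmoid(self.mu_head(h))    # f5: mean, squashed by sigmoid
        var = F.softplus(self.var_head(h))     # f6: variance, kept positive
        return mu, var
```

Here the trunk corresponds to the five shared FC100 layers, and the two FC1 heads produce mu and sigma^2 in parallel. 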
Thank you for your suggestion.\"}", "{\"comment\": \"Let's take an example network, f5, f6:\n{FC 100, FC 100, FC 100} \u2192 FC 100 \u2192 FC 100 \u2192 FC 1 \u2192 {sigmoid, softplus}\n\n1. Is the output of FC 100, FC 100, FC 100 concatenated and sent to FC100, followed by FC100 and then FC1?\n2. Are the activation functions applied after each layer or at the output of the final FC (FC1)?\n3. The activation functions are denoted by a {.}. By definition, does that mean they are parallel operators? In that context, what are mu and sigma for f5 and f6, respectively? It would be better if you could explain how the activation functions are applied and, if possible, show a visual representation to give a more intuitive understanding of the network topology.\", \"title\": \"Clarification in Architecture\"}", "{\"comment\": \"Given that the training data incorporates hindsight.\", \"title\": \"Good one!\"}", "{\"title\": \"Trade-off\", \"comment\": \"As we know, there is always a trade-off between time and accuracy in MC methods. Considering this balance, we set N = 400 for the experimental results.\"}", "{\"title\": \"It actually is.\", \"comment\": \"Thank you for your interest in our paper. We are happy to discuss causal concepts with you.\n\nRe Thought (1): What you understand about counterfactual reasoning is exactly right, and it is also what we did in the paper. Given that we know the training set (e.g., we know what happened when we took action a_1 at x_1 in the training set), we want to know \u201cwhat would have happened had we taken a different action a_2 at an unseen x_2 in the testing set?\u201d That is exactly what counterfactual reasoning does. Note that, as was done in [1], we primarily emphasise the inference on unseen data, which plays a pivotal role in the following RL part.\n\nRe Thought (2): In our paper, we do not have the interventional distribution p(a | x). \na) If what you meant is q(a | x), it is an auxiliary distribution in the variational inference part, for which we did not take intervention into account, because that is not what we studied in the paper;\nb) If you meant p(a | z), that is actually the policy function in RL. Usually we also do not factor the intervention into the policy, because the definition of the policy is a conditional distribution.\nActually, we only considered the intervention in the reward function p(r | z, do(a)), where z is treated as a fixed constant at each time step.\n\nWe hope this helps.\n\n[1] Rahul G Krishnan, Uri Shalit, and David Sontag. Deep Kalman filters. arXiv preprint arXiv:1511.05121, 2015.\"}", "{\"comment\": \"What value of N is chosen in Eq 18?\n\nWhile plotting Fig 5, N = 128; but is that the value chosen for the experimental results as well?\", \"title\": \"N value in Eq 18\"}", "{\"comment\": \"Thoughts:\n(1) The paper claims to have done counterfactual reasoning, which according to Pearl's literature (as cited in the paper) is not counterfactual. A counterfactual has to incorporate hindsight -> what action a(t) will be taken if you are in state x(t) \u201cgiven the fact\u201d that a different state x'(t) and action a'(t) were taken. \u201cGiven the fact\u201d makes all the difference.\", \"claim\": \"Paper generates samples based on conditionals\", \"edit\": \"(2) Why not an additional experiment on state intervention, since q(action=a | state=x) != q(action=a | do(state=x)), where the inequality arises due to the back-door path x <- z -> a opening up. This changes the equations accordingly.\", \"title\": \"Counterfactual claim is not really \u201cCounterfactual\u201d nor \u201cInterventional\u201d\"}", "{\"title\": \"Thank you for your useful feedback.\", \"comment\": \"Thank you for your useful feedback; we are glad you like the idea.\n\nAs described in the last paragraph on Page 1, generally speaking, the training pipeline consists of two steps:\", \"step_1\": \"Given the time-independent confounding assumption (Section 3.1), we learn the deconfounding model, as presented in Fig 2, from the observational data;\", \"step_2\": \"We optimise the policy or Q-function based on the deconfounding model we learned in Step 1.\n\nMore specifically, in Step 1, given the observational data (x, a, r), we optimise the variational lower bound (Eq.11) with two extra terms (Eq.15 and Eq.16). Once the deconfounding model is learned, we know the state transition function p(z_t | z_{t-1}, a_{t-1}) and can also calculate the deconfounding reward function p(r_t | z_t, do(a_t)) according to Eq.17. In Step 2, we treat the learned deconfounding model as an RL environment, like CartPole in OpenAI Gym, and directly exploit it to generate trajectories/rollouts through the state transition function and the deconfounding reward function. On the basis of the generated trajectories/rollouts, we can train the Q-network using Eq.19 or the policy network using Eq.20.\n\nWe will clarify this in the new draft. Thank you.\"}", "{\"comment\": \"Amazing! I really like the idea.\n\nIt would have been nice if you had specified the training pipeline, i.e., what is trained after what. \n\nCan you elaborate on that?\", \"title\": \"Training pipeline\"}", "{\"title\": \"It is actually the widely used factorization assumption in variational inference.\", \"comment\": \"Thank you for your interest in our paper and for checking our formula.\n\nIn the model part, you are exactly right: p(z, u | x, a, r) != p(u | x, a, r) p(z | x, a, r). However, in the variational inference part, for simplicity, we use the well-known factorization assumption to obtain q(z, u | x, a, r) = q(u | x, a, r) q(z | x, a, r), as we claimed on Page 13. \n\nActually, we can take q(u | z, x, a, r) = q(u | x, a, r) because (x, a, r) already contains all the information about z.\"}", "{\"comment\": \"In Eq (10), Page 5, the equation written is\nq(z, u | x, a, r) = q(u | x, a, r) q(z | x, a, r)\n\nBut q(u | z, x, a, r) != q(u | x, a, r), as the v-structure z1 --> r2 <-- u opens up. \n\nThis will affect Eq (12), Eq (18), etc. Please have a look.\", \"title\": \"Formula Check\"}" ] }
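Summarising the two-step training pipeline described in the responses above as schematic pseudo-code; equation numbers refer to the paper, and every function name here is an illustrative assumption rather than the authors' actual API:

```python
def fit_deconfounding_model(model, dataset, epochs):
    # Step 1: maximise the variational lower bound (Eq. 11) plus the
    # two auxiliary terms (Eq. 15 and Eq. 16) on observational (x, a, r).
    for _ in range(epochs):
        for x, a, r in dataset:
            loss = -(model.elbo(x, a, r) + model.aux_terms(x, a, r))
            model.step(loss)   # hypothetical optimiser wrapper
    return model

def train_policy(model, policy, episodes, horizon):
    # Step 2: treat the learned model as the environment (like a Gym task).
    for _ in range(episodes):
        z = model.initial_state()
        for _ in range(horizon):
            a = policy.act(z)
            r = model.interventional_reward(z, a)  # p(r_t | z_t, do(a_t)), Eq. 17
            z_next = model.transition(z, a)        # p(z_t | z_{t-1}, a_{t-1})
            policy.update(z, a, r, z_next)         # Q-learning (Eq. 19) or policy gradient (Eq. 20)
            z = z_next
    return policy
```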
BkG8sjR5Km
Emergent Coordination Through Competition
[ "Siqi Liu", "Guy Lever", "Josh Merel", "Saran Tunyasuvunakool", "Nicolas Heess", "Thore Graepel" ]
We study the emergence of cooperative behaviors in reinforcement learning agents by introducing a challenging competitive multi-agent soccer environment with continuous simulated physics. We demonstrate that decentralized, population-based training with co-play can lead to a progression in agents' behaviors: from random, to simple ball chasing, and finally showing evidence of cooperation. Our study highlights several of the challenges encountered in large-scale multi-agent training in continuous control. In particular, we demonstrate that the automatic optimization of simple shaping rewards, not themselves conducive to cooperative behavior, can lead to long-horizon team behavior. We further apply an evaluation scheme, grounded in game-theoretic principles, that can assess agent performance in the absence of pre-defined evaluation tasks or human baselines.
[ "Multi-agent learning", "Reinforcement Learning" ]
https://openreview.net/pdf?id=BkG8sjR5Km
https://openreview.net/forum?id=BkG8sjR5Km
ICLR.cc/2019/Conference
2019
{ "note_id": [ "BygXAwjzgE", "HkgfkJPnaX", "Sylf0sIn6X", "BJlGC9UhpX", "BJl-oTGeaX", "Skx1dt70hX", "rygaShhcn7" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544890314760, 1542381273814, 1542380490470, 1542380233716, 1541578137146, 1541450086608, 1541225540792 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper627/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper627/Authors" ], [ "ICLR.cc/2019/Conference/Paper627/Authors" ], [ "ICLR.cc/2019/Conference/Paper627/Authors" ], [ "ICLR.cc/2019/Conference/Paper627/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper627/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper627/AnonReviewer3" ] ], "structured_content_str": [ "{\"metareview\": \"The paper studies population-based training for MARL with co-play, in MuJoCo (continuous control) soccer. It shows that (long term) cooperative behaviors can emerge from simple rewards, shaped but not towards cooperation.\\n\\nThe paper is overall well written and includes a thorough study/ablation. The weaknesses are the lack of strong comparisons (or at least easy to grasp baselines) on a new task, and the lack of some of the experimental details (about reward shaping, about hyperparameters).\\n\\nThe reviewers reached an agreement. This paper is welcomed to be published at ICLR.\", \"confidence\": \"3: The area chair is somewhat confident\", \"recommendation\": \"Accept (Poster)\", \"title\": \"An interesting new task to study learning cooperation between agents\"}", "{\"title\": \"author response to review 3\", \"comment\": \"We thank the reviewer for constructive feedback. The contribution of our work extends beyond the introduction of a novel environment. We use the domain to study the emergence of coordination by analyzing the behaviors of decentralized agents. We carried out ablation studies to surface important ingredients for effective learning in multi-agent cooperative-competitive games. Our work highlights a fundamental difficulty in evaluation on multi-agent domains, with or without benchmarks, which we alleviate through a principled Nash averaging evaluation scheme.\", \"we_address_each_point_individually\": \"1) Q) \\u201cWhat makes this environment an importantly different testbed or development environment?\\u201d\\nA) The environment will provide the ML community with a cooperative-competitive multi-agent environment in a simulated physical world which is accessible and flexible. It is accessible because it uses a widely adopted physics simulator and research platform. It is also accessible in the sense that we have demonstrated a solution using end-to-end RL. It is flexible because although the current paper describes a relatively simple agent embodiment (chosen to draw attention to multi-agent coordination), the environment can be extended in terms of body complexity as well as the number of players and could become part of a wider multi-task suite with consistent physics. We believe it is an important contribution to create such an environment, release it, and publish the first set of results on it. Further, the environment rules are simple but complexity emerges from sophisticated behavior and interactions between independent physically embodied agents. As such we have seen a level of emergent cooperation in a simulated physical world, which has not been witnessed before by end-to-end RL.\\n\\nQ) \\u201cThe new environment should [...] 
offer new challenges existing algorithms have not addressed at all.''\\nA) Learned cooperation of embodied independent RL agents in physical worlds is an unsolved problem, and a significant challenge for all existing approaches. To our knowledge there is no published environment that allows us to study this problem with realistic simulated physics where agents must acquire and leverage physical motor skills in order to coordinate with others in an open-ended manner.\\n\\n2) Q) \\u201cWhy is it important to have a decentralized training procedure when the authors have control over all the agents?\\u201d\\nA) We agree that the environment could be used to investigate centralized approaches which could yield faster learning in this particular problem (but may not in general scale to more agents). However, we chose to study the emergence of coordination in decentralized, non-communicating agents, which is a significant unsolved problem important for real-world multi-agent problems (e.g. interaction between self-driving cars from different manufacturers, or human-agent interactions) where centralized solutions may not be feasible, and is more consistent with human learning.\\n\\n3) Q) \\u201cIt's hard to evaluate new algorithms when the domain studied is also new.\\u201d & \\u201cWe have no sense for state-of-the-art performance on this domain across a range of algorithms\\u201d\\nA) We agree that evaluation is difficult in the absence of clear baselines on a novel domain. We have combined state-of-the-art distributed RL and continuous control, with additional improvements, and suggest that this is a sensible reference solution for future investigations. We performed a detailed ablation study precisely to answer the question: what are the important ingredients for successful multi-agent learning on this novel, challenging domain?\\n\\n4) Q) \\u201cThe authors indicate that evaluating the quality of an algorithm for a competitive context is hard in the absence of established benchmarks\\u201d\\nA) We disagree with reviewer\\u2019s assessment that highlighting difficulties in evaluation undermines the contribution of this work. There have been multiple studies (sec 4.3) where conclusions have been drawn according to simple multi-agent evaluation schemes. Our work shows where existing evaluation procedures fall short. We adopted an evaluation scheme via Nash averaging and demonstrated the discrepancy between our methods and a tournament (Figure 10). We do not claim that our evaluation method resolves the issue completely, but we believe it provides a more principled evaluation scheme. Even for domains where we possess human baselines or programmed bots evaluation is still difficult for the same underlying reason. It is important to introduce domains in which these problems arise, such as this one.\\n\\nQ) \\u201cwhat are the unique things that can be studied with this new environment?\\u201d\\nA) See 1)\\n\\nQ) \\u201cHold an open competition to get benchmarks created by other teams of researchers\\u201d\\nA) we agree that our environment would be suitable for a competition, since the environment is an easily accessible MuJoCo environment. This could be an exciting future project, beyond the current paper scope.\"}", "{\"title\": \"author response to review 1\", \"comment\": \"We thank the reviewer for their constructive feedback. We address each point individually:\\n\\nRe. 
correlation of rewards within and across teams:\\n\\nIn our setup we distinguish between the raw sparse reward events / raw continuous performance metrics (all denoted by r), and the individual agent\\u2019s preferences for these (denoted by alpha). While the binary reward events \\u2018goal\\u2019 and \\u2018concede\\u2019 are correlated within team, but anti-correlated across teams, this is not true for all continuous metrics (it is for ball-vel-to-goal but not for vel-to-ball). Independently, each agent can have different preferences for each of the signals and associated discount factors. These quantities are evolved via PBT and thus vary across agents and over time. As a consequence, even when the signal itself is perfectly (anti-)correlated between agents this is almost never true for the resulting reward received by the agents and they may thus acquire different behaviors.\\n\\nRe. relative importance of hyperparameter adjustments performed by evolution: \\n\\nThe reviewer raised an important question regarding population-based training. Given that the PBT procedure drives evolution towards agents whose hyper-parameters and model parameters are the most competitive within the current population of agents (in terms of winning the game), a parameter that is irrelevant for the learning progress should not exhibit a consistent trend across experiment replicas (as each hyper-parameter is initialized randomly and then evolved through an evolution procedure that selects, inherits and mutates where mutation applies a random multiplicative perturbation). We concretely observed in our work (Figure 4) that both actor and critic learning rates as well as discount factor and entropy cost exhibit clear trends over the course of training. Regarding learning rates specifically, we believe that our PBT procedure re-discovers the commonly employed learning rate annealing schedule for accelerated learning. We have added a new Section E in the appendix comparing the evolution of hyperparameters across three experiments with different seeds: entropy cost and critic learning rates evolve consistently across experiments indicating that performance is more sensitive to these parameters. The critic learning rate in particular decreases over time. Actor learning rate is relatively less consistent across the three experiments, indicating that performance is less sensitive to fine tuning the actor learning rate.\"}", "{\"title\": \"author response to review 2\", \"comment\": \"We thank the reviewer for their constructive feedback.\"}", "{\"title\": \"Well-written submission with good analysis\", \"review\": \"The paper proposes a new environment - 2vs2 soccer - to study emergence of multi-agent coordinated team behaviors. Learning relies on population-based training of agent's shaped reward mixtures and approach of nash averaging is used for evaluation.\", \"clarity\": \"the paper is well-written and clear. 
The ablations provided are helpful in understanding how much the different introduced components matter, and the quantitative and qualitative analysis of the resulting behavior is quite nice.\", \"originality\": \"The individual pieces of this work (PBT, SVG, Nash averaging) have been introduced previously, but this paper puts them together in a well-chosen manner.\", \"significance\": \"This paper offers a number of interesting observations (effects of PBT, evaluation, effects of recurrent policies to overcome non-stationarity issues) that I believe would be of value to the part of the ICLR community doing research in multi-agent systems.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Review\", \"review\": \"Summary: The authors use competition as a way to train agents in a complex continuous team-based control task: a 2-player soccer game. Agents are paired randomly into a team of 2 and play another team of 2. The key aspect of the proposed algorithm is the use of population-based training.\n\nStrong Points\n- The authors propose a convincing methodology for speeding up learning in coordinated MARL.\n- The Nash Averaging approach suggested for evaluating in the presence of cycles is interesting and a useful tool for evaluation when there are no easy baselines.\n- The authors do convincing ablation studies to show that the PBT is the most important part of the learning algorithm and does well even when paired with a simple feed-forward model.\n\nQuestions\n- The authors use reward shaping of the form: \u201cWe design shaping reward functions {r_j : S \u00d7 A \u2192 R}_{j=1,...,n_r}, weighted so that r(\u00b7) := \u2211_{j=1}^{n_r} \u03b1_j r_j(\u00b7) is the agent\u2019s internal reward and, as in Jaderberg et al.\u201d I\u2019m not sure I follow how this works: without the additional dense shaping, the reward in the soccer game is 0/1 depending on whether one\u2019s team wins or loses, so won\u2019t one\u2019s rewards always be perfectly correlated with those of one\u2019s teammates and perfectly anticorrelated with those of the other team? Does this only work with the dense shaping (e.g. vel-to-ball)?\n- I would like to see which of the PBT-controlled hyperparameters actually matter for the increase in training speed. Do the learning rates matter (since they\u2019re also being changed by the Adam optimizer as training proceeds) or is it about the discount factor/entropy regularizer?\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"The paper presents a new simplified RoboCup environment that may be of some interest\", \"review\": \"This paper introduces a new multiagent research environment---a simplified version of 2x2 RoboSoccer using the MuJoCo physics engine with spherical players that can rotate laterally, move forwards / backwards, and jump.\n\nThe paper deploys a fine-tuned version of population-based sampling on top of a stochastic value gradient reinforcement learning algorithm to train the agents. 
Some of the fine-tunings used include deploying different discount factors on multiple different reward channels for reward shaping.\\n\\nThe claimed novel contributions of the paper are (1) a new multiagent testbed, (2) a decentralized training procedure, (3) fine-tuning reward shaping, and (4) highlighting the challenges in evaluation in novel multiagent competitive environments.\\n\\nOverall, my judgment is that the paper is fine, but the authors have not helped me to understand the significance of their contributions.\", \"taking_each_in_turn\": \"(1) What is the significance of the new environment? What unique characteristics make it difficult? What makes this environment an importantly different testbed or development environment? The connection to RoboSoccer is motivating but tenuous. The new environment should have particular characteristics that expose problems with past algorithms or offer new challenges existing algorithms have not addressed at all.\\n\\n(2) Why is it important to have a decentralized training procedure when the authors have control over all the agents? If it will allow faster training, has the authors' algorithm been demonstrated to accomplish that goal? \\n\\n(3) It's hard to evaluate new algorithms when the domain studied is also new. We have no sense for state-of-the-art performance on this domain across a range of algorithms. The authors conduct a careful ablation study on their new algorithm but do not compare their approach to other classes of algorithms.\\n\\n(4) The authors indicate that evaluating the quality of an algorithm for a competitive context is hard in absence of established benchmarks---whereas in single-agent or cooperative environments progress can be measured against the goal of the environment, progress in competitive environments requires comparison to approaches that are thought to be good. Here the authors are themselves pointing out a fundamental problem with introducing new competitive multiagent testbeds, and the authors don't resolve this tension. Since the main contribution of the work is the environment, it's hard to see how this point the authors themselves make doesn't undermine that central contribution.\\n\\nBesides other comments mentioned above, a couple other ways to improve the paper would be:\\n- Clarify why this environment is important to be introducing---what are the unique things that can be studied with this new environment?\\n- Hold an open competition to get benchmarks created by other teams of researchers\", \"some_minor_comments\": \"- $n_r$ is not defined explicitly in the text as far as I have found\\n- The authors state: \\\"The specific shaping rewards use for soccer are detailed in Section 4.2\\\" but I couldn't find them there. \\n\\n---\\n\\nPost-rebuttal \\n\\nMy main concern was assessing the value of the overall contribution of the paper. The other reviewers seem to appreciate both the new environment being offered and the combination of techniques deployed in the authors' solution. If there is an audience that will appreciate this work at ICLR as seems to be indicated by those reviews, then I would increase my score to marginally above the acceptance threshold.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}" ] }
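As a concrete reading of the reward-shaping and PBT mechanics discussed in the reviews and responses above: each agent's internal reward is the weighted sum r(.) = sum_j alpha_j * r_j(.), with the weights evolved by population-based training. A rough sketch, with hypothetical agent attributes (`win_rate`, `alphas`, `copy_weights_from`); the paper's actual PBT machinery is more involved:

```python
import random

def internal_reward(channel_rewards, alphas):
    # r(.) = sum_j alpha_j * r_j(.) -- each agent carries its own evolved alphas.
    return sum(a * r for a, r in zip(alphas, channel_rewards))

def pbt_step(population, perturb=1.2):
    """Evolve shaping weights: weaker agents copy and mutate a stronger agent."""
    ranked = sorted(population, key=lambda agent: agent.win_rate, reverse=True)
    top, bottom = ranked[:len(ranked) // 2], ranked[len(ranked) // 2:]
    for agent in bottom:
        parent = random.choice(top)
        agent.copy_weights_from(parent)                      # inherit model parameters
        agent.alphas = [a * random.choice((1 / perturb, perturb))
                        for a in parent.alphas]              # multiplicative mutation
```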
Bkg8jjC9KQ
Optimistic mirror descent in saddle-point problems: Going the extra (gradient) mile
[ "Panayotis Mertikopoulos", "Bruno Lecouat", "Houssam Zenati", "Chuan-Sheng Foo", "Vijay Chandrasekhar", "Georgios Piliouras" ]
Owing to their connection with generative adversarial networks (GANs), saddle-point problems have recently attracted considerable interest in machine learning and beyond. By necessity, most theoretical guarantees revolve around convex-concave (or even linear) problems; however, making theoretical inroads towards efficient GAN training depends crucially on moving beyond this classic framework. To make piecemeal progress along these lines, we analyze the behavior of mirror descent (MD) in a class of non-monotone problems whose solutions coincide with those of a naturally associated variational inequality – a property which we call coherence. We first show that ordinary, “vanilla” MD converges under a strict version of this condition, but not otherwise; in particular, it may fail to converge even in bilinear models with a unique solution. We then show that this deficiency is mitigated by optimism: by taking an “extra-gradient” step, optimistic mirror descent (OMD) converges in all coherent problems. Our analysis generalizes and extends the results of Daskalakis et al. [2018] for optimistic gradient descent (OGD) in bilinear problems, and makes concrete headway for provable convergence beyond convex-concave games. We also provide stochastic analogues of these results, and we validate our analysis by numerical experiments in a wide array of GAN models (including Gaussian mixture models, and the CelebA and CIFAR-10 datasets).
[ "Mirror descent", "extra-gradient", "generative adversarial networks", "saddle-point problems" ]
https://openreview.net/pdf?id=Bkg8jjC9KQ
https://openreview.net/forum?id=Bkg8jjC9KQ
ICLR.cc/2019/Conference
2019
{ "note_id": [ "BkgeT0Eu4E", "BJlCswEOEN", "r1gcb0RBgV", "r1x7Pw_4yE", "HJxro3I4kV", "rkxGKGDY6m", "r1lVGZvtTm", "SkxF31vKpm", "HkxnI7tn3Q", "H1xUfVr9hX", "SyeTm7oIhm" ], "note_type": [ "official_comment", "comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1549450935861, 1549449125763, 1545100801749, 1543960410626, 1543953565318, 1542185593922, 1542185228025, 1542184881190, 1541342036214, 1541194765793, 1540956965438 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper626/Authors" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper626/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper626/Authors" ], [ "ICLR.cc/2019/Conference/Paper626/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper626/Authors" ], [ "ICLR.cc/2019/Conference/Paper626/Authors" ], [ "ICLR.cc/2019/Conference/Paper626/Authors" ], [ "ICLR.cc/2019/Conference/Paper626/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper626/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper626/AnonReviewer2" ] ], "structured_content_str": [ "{\"title\": \"Convergence to some solution, yes, not necessarily the x* in the proof, no\", \"comment\": \"Hey,\\n\\nThanks for your comments and positive feedback!\\n\\nYes, in the proof of Theorem 4.1, x* is a \\\"special\\\" solution point which satisfies the variational inequality formulation (VI) of the saddle-point problem globally. This point is used to establish convergence to the solution set of the problem, but it is not necessarily the end state of the algorithm - i.e., it is not the \\\"solution point x*\\\" alluded to in the statement of the theorem.\\n\\nThanks for catching this ambiguity, typo correction on its way!\"}", "{\"comment\": \"Hi, very nice paper! In the proof of Theorem 4.1 (in appendix D, end of page 19), where you only have coherence (not necessarily strict), it seems like you only establish convergence of OMD to some point but not necessarily to x^*. Am I missing something?\", \"title\": \"Proof of Theorem 4.1\"}", "{\"metareview\": \"This paper investigates the usage of the extragradient step for solving saddle-point problems with non-monotone stochastic variational inequalities, motivated by GANs. The authors propose an assumption weaker/diffrerent than the pseudo-monotonicity of the variational inequality for their convergence analysis (that they call \\\"coherence\\\"). Interestingly, they are able to show the (asympotic) last iterate convergence for the extragradient algorithm in this case (in contrast to standard results which normally requires averaging of the iterates for the stochastic *and* mototone variational inequality such as the cited work by Gidel et al.). The authors also describe an interesting difference between the gradient method without the extragradient step (mirror descent) vs. with (that they called optimistic mirror descent).\\n\\nR2 thought the coherence condition was too related to the notion of pseudo-monoticity for which one could easily extend previous known convergence results for stochastic variational inequality. The AC thinks that this point was well answered by the authors rebuttal and in their revision: the conditions are sufficiently different, and while there is still much to do to analyze non variational inequalities or having realistic assumptions, this paper makes some non-trivial and interesting steps in this direction. 
The AC thus sides with expert reviewer R1 and recommends acceptance.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Some progress for analysis of non-monotone variational inequalities\"}", "{\"title\": \"Thanks for the extra round of feedback!\", \"comment\": \"Many thanks for the extra round of feedback and the encouraging remarks! We reply to the points you raised below:\\n\\n1. Regarding the example of a coherent problem with a general convex solution set.\\n\\nAgain, for simplicity, focus on the optimization case, i.e., the minimization of a function f:X->R (X convex). In this case, letting X* = argmin f, and writing g(x) for the (sub)gradient of f, the (strict) coherence requirement takes the form:\\n \\n- <g(x),x-x*>\\u22650 for all x in X and all x* in X*.\\n - Equality holds above if and only if x lies in X*.\\n\\nNow, fix some convex subset C of X, and let f(x) = dist(x,C)^2 (where dist denotes the standard Euclidean setwise distance). By construction, f is convex (though not strictly so) and X*=C. Convexity guarantees the first requirement of coherence. For the second, note that g(x) is a multiple of x - proj_C(x) so, for any x* in X*, the product <g(x),x-x*> vanishes only if x lies itself in C (since C=X*).\\n\\nOf course, the above function is convex, but if we perturb f away from C = X* in an appropriate way, non-convex examples can also be constructed (though there are diminishing returns regarding the simplicity of the resulting example).\\n\\n[NB: just to avoid any misunderstanding, the above concerns the definition of coherence as presented in the *original* version of the paper; the current version includes examples with non-convex solution sets like x^2 y^2 as we outlined in our first reply.]\\n\\n\\n2. Thanks for the pointer to Chen and Rockafellar, it looks very promising for future study! The reviewer's suggestion seems very plausible but the devil is often in the details, so we would need more time in order to provide a more definitive reply.\\n\\n\\nWe cannot revise the paper at this time, but we'd of course be happy to do so along the lines above if accepted.\"}", "{\"title\": \"Thank you for you detailed answer\", \"comment\": \"Thank you for you detailed answer.\\n\\n\\\"[We can provide a concrete example if the referee finds this useful]\\\" would love to.\\n\\nRegarding 3. I would like to say that the strict coherence assumption is an extension of the strict monotonicity assumption with which you can also prove last iterate converge. Nemirovski, Nesterov, Juditski focus on general monotonicity (the equivalent of you general coherence with which you do not prove any last iterate convergence result)\\nAn interesting point I would like to make is that Last iterate convergence have been proven in the literature under the *strong* monotonicity assumption see for instance [Chen et al. 1997] (the Forward-backward algorithm is a generalization of the MD algorithm). Maybe you could have convergence rate under a *strong* coherence assumption (but also raising the question to what extend *strong* coherence assumption is realistic)\\n\\n\\nChen, George HG, and R. Tyrrell Rockafellar. \\\"Convergence rates in forward--backward splitting.\\\" SIAM Journal on Optimization 7.2 (1997): 421-444.\"}", "{\"title\": \"Thanks for the feedback! (see below why pseudo-monotonicity is quite different)\", \"comment\": \"We thank the reviewer for their constructive remarks! 
We reply point-by-point below:\\n\\n1.\\tTo be sure, coherence does not cover all GAN problems: GANs can be so complex that we feel that any endeavor to account for all problems would be chimeric (at least, given our current level of understanding of the GAN landscape). Being fully aware of this, our goal in this paper was simply to provide concrete theoretical evidence that the inclusion of an extra-gradient step can help resolve many of the problems that arise in practice (and, in particular, cycling and oscillatory mode collapses). In this regard, our paper tackles a significantly wider framework than the 2018 ICLR paper of Daskalakis et al., which only addressed bilinear models.\\n\\nFurthermore, we would like to point out that Corollaries 3.2 and 3.3 are only *sufficient* conditions for coherence. To make an analogy with convex analysis, in practice, when trying to determine whether a given function is convex, one of the standard techniques is to show that its Hessian matrix is diagonally dominant - and, hence, positive-semidefinite. Obviously, this is just a sufficient condition, but it is still useful in practice. We view Corollaries 3.2 and 3.3 in a similar light: they show that our results cover a wide array of cases of practical (and theoretical) interest, without attempting to be exhaustive.\\n\\n\\n2.\\tRegarding the relation with pseudo-monotonicity: despite any formal similarities, we would like to point out that coherence and pseudo-monotonicity can be quite different. As an example, take the objective function (2.2) in our paper: for x_1 = 1/2, we get f(1/2,x_2) = (x_2^2 - 2)^2 (4 + 5x_2^2) / 16, which has *two* well-separated maximizers, i.e. it is not even quasi-concave - implying in turn that (2.2) is not pseudo-monotone (it is, in fact, multi-modal in x_2).\\n\\nMoreover, as we pointed out in our reply to Reviewer 1, the version of coherence that we presented was the simplest possible one (and we did so for reasons of clarity and ease of presentation). Our definition can be weakened substantially by considering the following definition of \\\"weak coherence\\\":\", \"definition\": \"We say that f is weakly coherent if:\\n(i) There exists a solution p of (SP) that satisfies (VI).\\n(ii) Every solution x* of (SP) satisfies (VI) locally, i.e., g(x) (x - x*) \u2265 0 for all x sufficiently close to x*.\\n\\nAs we pointed out in our reply to Reviewer 1, under this *weaker* definition of coherence, the solution set of (SP) need no longer be convex, thus making the difference with pseudo-monotone problems even more pronounced. As a very simple example, consider the case where Player 1 controls x,y in [-1,1], and the objective function is f(x,y) = x^2 y^2, i.e., Player 2 has no impact in the game (just for simplicity). In this case, the solution set of the problem is the cross-shaped set X* = {(x,y) : x=0 or y=0}, which is non-convex - in stark contrast to the convex structure of the solution set of pseudo-monotone problems.\\n\\nWe will update our manuscript accordingly as soon as possible to make this change!\\n\\nWe will also include a detailed discussion of the paper by Noor et al. - we were not aware of it, and we thank the reviewer for bringing it to our attention.\\n\\n\\n3.\\tRegarding the integration of Adam in our proof technique: we agree with the reviewer that this is a worthwhile extension, but not one that can be properly undertaken without completely changing the structure of the paper and its focus.
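The f(x, y) = x^2 y^2 example of weak coherence in the reply above can likewise be checked numerically. The sketch below is illustrative (the neighbourhood radius eps is an arbitrary choice, not a value from the paper): it verifies the local variational inequality around points of the cross-shaped solution set and shows that the midpoint of two solutions need not be a solution, i.e., the set is non-convex.

```python
import numpy as np

def grad_f(z):
    x, y = z
    return np.array([2 * x * y**2, 2 * x**2 * y])  # gradient of f(x, y) = x^2 y^2

rng = np.random.default_rng(1)
eps = 0.05  # "sufficiently close" neighbourhood radius (a choice for illustration)

# Solutions of min f over [-1, 1]^2 form the non-convex cross {x = 0} U {y = 0}.
solutions = [np.array([a, 0.0]) for a in np.linspace(-1, 1, 21)]
solutions += [np.array([0.0, b]) for b in np.linspace(-1, 1, 21)]

for z_star in solutions:
    for _ in range(500):
        z = np.clip(z_star + rng.uniform(-eps, eps, 2), -1.0, 1.0)
        assert grad_f(z) @ (z - z_star) >= -1e-12  # local VI of weak coherence

# Non-convexity of the solution set: the midpoint of two solutions need not solve.
mid = 0.5 * (np.array([1.0, 0.0]) + np.array([0.0, 1.0]))
print("f(midpoint) =", (mid[0] * mid[1]) ** 2)  # 0.0625 > 0, so not a minimizer
```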
Adam has a very specific update structure and requires the introduction of significant machinery to handle theoretically, so we do not see how it can be done without greatly shifting the scope and balance of our treatment and analysis.\"}", "{\"title\": \"Thanks for the feedback!\", \"comment\": [\"We thank the reviewer for their in-depth remarks and positive evaluation! We reply point-by-point below:\", \"1.\\tRegarding the structure of the solution set of a coherent problem: we agree that this structural question can be investigated further but, given space constraints, we are concerned that this might potentially dilute the focus of the paper. Nevertheless, we would like to take advantage of the openreview format to answer in detail the referee's questions regarding the solution set of a coherent problem:\", \"As the referee already points out, uniqueness can be easily taken care of by considering the constant function: the solution set of this problem is the entire feasible region, though the problem is null coherent [and vacuously strictly coherent if we interpret Definition 2.1 to hold for the empty set in the case of strict coherence]. More interesting examples with a zeroed-out direction also exist: for instance, the problem f(x_1,x_2) = x_1^2 is strictly coherent, but its solution set is an affine space.\", \"Whether the solution set is an affine space intersected with the set of constraints: in the current formulation, it can be shown that the solution set of a coherent problem is a convex set, though not necessarily one obtained as the intersection of an affine set with the feasible region. [We can provide a concrete example if the referee finds this useful]\", \"However, as we state in the paper, the definition of coherence can be weakened substantially, and our results still go through. Specifically, consider the following definition of \\\"weak coherence\\\":\"], \"definition\": \"We say that f is weakly coherent if:\\n(i) There exists a solution p of (SP) that satisfies (VI).\\n(ii) Every solution x* of (SP) satisfies (VI) locally, i.e., g(x) (x - x*) \u2265 0 for all x sufficiently close to x*.\\n\\nUnder this *weaker* definition of coherence, the solution set of (SP) need no longer be convex! To see this, consider a very simple optimization example where Player 1 controls x,y in [-1,1], and the objective function is f(x,y) = x^2 y^2 (i.e., Player 2 has no impact in the game, just for simplicity). In this case, the solution set of the problem is the cross-shaped set X* = {(x,y) : x=0 or y=0}, which is non-convex!\\n\\nWe chose to focus on the case where the solutions of (SP) and (VI) coincide for simplicity and clarity of presentation; however, we will update our manuscript accordingly as soon as possible to make this change!\\n\\n\\n2.\\tIndeed, the results are only asymptotic - but, as the reviewer states, we know of virtually no other results at this level of generality, and the analysis has to start somewhere. We agree that getting rates is an important problem, but we believe that all this cannot be addressed within a single paper.\\n\\n\\n3.\\tRegarding the similarity of proof techniques with MD/OMD: we would like to point out that conventional MD/OMD proof techniques are typically quite different as they focus on the convergence of the so-called \\\"ergodic average\\\" of the sequence of iterates (see e.g., the cited literature by Nemirovski, Nesterov, Juditsky et al., and many others).
Averaging techniques rely crucially on the problem being convex-concave and cannot be used in a non-monotone setting; as a result, we took a completely different approach relying on a quasi-Fej\u00e9r analysis inspired by recent work on Bregman proximal methods in operator theory.\\n\\n\\n4.\\tWe concur that our results can be extended to non-zero-sum games - this is a great observation! Again, we did not make this link explicit in our paper for simplicity, but we will definitely update our manuscript accordingly.\\n\\n\\n5.\\tRegarding the name \\\"optimistic mirror descent\\\". In the original NIPS 2013 paper of Rakhlin and Sridharan, the authors present two variants of OMD: one is essentially the mirror-prox algorithm of Nemirovski (2004), and the other is a \\\"momentum\\\"-like variant which was further studied by Daskalakis et al. in their recent 2018 ICLR paper. Regrettably, there is a fair bit of confusion in the literature regarding what \\\"optimistic\\\" descent is: personally, we have a strong preference for the original \\\"mirror-prox\\\" terminology of Nemirovski (after all, in saddle-point problems, the method is *not* a descent method). However, we used the OMD terminology of Rakhlin and Sridharan because it seems to be more easily recognizable in the GAN community.\\n\\n\\n6. Minor comments: We will take care of those, thanks!\"}", "{\"title\": \"Thanks for the feedback!\", \"comment\": \"We thank the reviewer for their positive and encouraging feedback! We also feel that the inclusion of an extra-gradient step can greatly enhance the stability of GAN training methods, and can provide further key insights.\"}", "{\"title\": \"A good paper\", \"review\": \"This paper tries to find a saddle point of a Lagrangian using mirror descent. Mirror descent based methods use the Bregman divergence to encode the convexity and smoothness of the objective function beyond the Euclidean structure. The main contribution of this paper is adding an extra-gradient step to the standard MD, i.e., step 5 in Algorithm 2, as well as stochastic versions. Numerical experiments support their results.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"A first step to handle non-convexity in saddle point optimization\", \"review\": [\"This work provides a convergence proof for the last iterates of two stochastic methods (almost surely) that the authors call mirror descent and optimistic mirror descent, under an assumption weaker than monotonicity called coherence.\", \"Roughly, the definition of coherence is the equivalence between being a saddle point and being a solution of the Minty variational inequality.\", \"Overall, I think that this paper tries to tackle an interesting problem, which is to prove convergence of saddle point algorithms under weaker assumptions than monotonicity of the operator.\", \"However, I have some concerns:\", \"I think that the properties of coherent saddle points could be investigated more. For instance, is the set of coherent saddle points connected? It would be very relevant for GANs. You claim that \\\"neither strict, nor null coherence imply a unique solution to (SP),\\\" but I do not see any proof of that statement (both provided examples have a unique SP).
I agree that you can set $g$ to $0$ in some directions to get an affine space of saddle points, but are there examples where the set of solutions is not an affine space (intersected with the constraints)?\", \"First of all, the results are only asymptotic. (I agree that this can be mitigated by saying that there are (almost) no results on non-monotone VIs and that it is a first step toward handling non-convexity of the objective functions.)\", \"One big pro of this work might have been new proof techniques to handle non-monotonicity in variational inequalities, but the coherence assumption looks to be the weakest condition under which to use the standard proof technique for the convergence of (MD) and (OMD). Nevertheless, this work is still interesting since it handles stochasticity in a subtle way (I did not have time to check Theorem 2.18 [Hall & Heyde 1980]; it would be good to repeat it in the appendix for self-completeness).\", \"This work could be easily extended to non-zero-sum games, which is crucial in practice since most of the state-of-the-art GANs (such as WGAN with gradient penalty or the non-saturating GAN) are non-zero-sum games.\", \"Are you sure about the use of the denomination Optimistic mirror descent? What you are presenting is the extragradient method. These two methods are slightly different: if you look at (5) in (Daskalakis et al., 2018) you'll notice that the updates are slightly different from your (OMD); in particular, (OMD) requires two gradient computations per iteration whereas (5) in (Daskalakis et al., 2018) requires only one (it just requires memorizing the previous gradient).\"], \"minor_comment\": [\"For saddle points (and more generally variational inequalities), mirror descent is no longer a descent algorithm. The name used in the literature is the mirror-prox method (see Juditsky's paper).\", \"In (C.1), U_n is not defined anywhere, but I guess it is $\\\\hat g_n - g(X_n)$.\", \"Some cited papers are published but are cited as arXiv papers.\", \"Lemma D.1 could be extended to the case (\\\\sigma \\\\neq 0), but the additional noise term might be hard to handle to get a result similar to Thm 4.1 for $\\\\sigma \\\\neq 0$.\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"The coherence condition is highly related to the pseudo-monotone property in operator theory.\", \"review\": \"Pros:\\nThis paper provides an optimistic mirror descent algorithm for solving minmax optimization problems. Its global convergence is guaranteed under the coherence property. The experimental results are promising.\", \"cons\": \"1.\\tThe coherence property is still a strong assumption. The sufficient conditions provided in Corollaries 3.2 and 3.3 to guarantee the coherence property are too specific to cover existing GAN models. \\n\\n2.\\tThe current theoretical contribution seems incremental. From the perspective of operator theory, the coherence property is highly related to the pseudo-monotone property. The extragradient method for solving the pseudo-monotone VIP already exists in the literature [1]. The proposed OMD can simply be regarded as a stochastic extension of [1] that simultaneously generalizes the Euclidean distance in [1] to the Bregman distance. \\n\\n3.\\tThe integration of Adam and OMD in the experiments is very interesting.
To match the experiments, we highly recommend that the authors show the convergence of OMD + Adam with or without the coherence condition, rather than requiring a diminishing learning rate.\\n\\n[1] Noor, Muhammad Aslam, et al. \\\"Extragradient methods for solving nonconvex variational inequalities.\\\" Journal of Computational and Applied Mathematics 235.9 (2011): 3104-3108.\"}" ] }
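The extragradient-vs-optimistic distinction that AnonReviewer1 raises above is easy to see on a toy bilinear problem. The sketch below is illustrative only (Euclidean setup rather than a general Bregman/mirror-prox one, and an arbitrary step size): it contrasts plain gradient descent-ascent, the two-call extragradient update, and the one-call past-gradient "optimistic" variant on f(x, y) = x*y, whose unique saddle point is the origin.

```python
import numpy as np

# Toy bilinear saddle problem min_x max_y f(x, y) = x * y.
# The joint vector field is v(z) = (df/dx, -df/dy) = (y, -x).
def v(z):
    x, y = z
    return np.array([y, -x])

eta, T = 0.1, 2000
z_gd = np.array([1.0, 1.0])   # plain (Euclidean) gradient descent-ascent
z_eg = np.array([1.0, 1.0])   # extragradient / mirror-prox: two oracle calls
z_og = np.array([1.0, 1.0])   # "optimistic" / past-gradient variant: one call
v_prev = v(z_og)

for _ in range(T):
    z_gd = z_gd - eta * v(z_gd)               # last iterate spirals outward
    z_half = z_eg - eta * v(z_eg)             # extrapolation ("look-ahead") step
    z_eg = z_eg - eta * v(z_half)             # update with the gradient at z_half
    v_now = v(z_og)
    z_og = z_og - eta * (2 * v_now - v_prev)  # one fresh gradient, plus memory
    v_prev = v_now

print("plain GDA:     ", np.linalg.norm(z_gd))  # grows without bound
print("extragradient: ", np.linalg.norm(z_eg))  # shrinks toward the saddle point
print("optimistic:    ", np.linalg.norm(z_og))  # also shrinks toward the saddle point
```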
S1gUsoR9YX
Multilingual Neural Machine Translation with Knowledge Distillation
[ "Xu Tan", "Yi Ren", "Di He", "Tao Qin", "Zhou Zhao", "Tie-Yan Liu" ]
Multilingual machine translation, which translates multiple languages with a single model, has attracted much attention due to its efficiency of offline training and online serving. However, traditional multilingual translation usually yields inferior accuracy compared with the counterpart using individual models for each language pair, due to language diversity and model capacity limitations. In this paper, we propose a distillation-based approach to boost the accuracy of multilingual machine translation. Specifically, individual models are first trained and regarded as teachers, and then the multilingual model is trained to fit the training data and match the outputs of individual models simultaneously through knowledge distillation. Experiments on IWSLT, WMT and Ted talk translation datasets demonstrate the effectiveness of our method. Particularly, we show that one model is enough to handle multiple languages (up to 44 languages in our experiment), with comparable or even better accuracy than individual models.
[ "NMT", "Multilingual NMT", "Knowledge Distillation" ]
https://openreview.net/pdf?id=S1gUsoR9YX
https://openreview.net/forum?id=S1gUsoR9YX
ICLR.cc/2019/Conference
2019
{ "note_id": [ "SJgao10a-4", "Syg3ME65WV", "HyxyZDU_-E", "SklGs5kuZV", "H1lRUHSAlV", "B1l6eTYLxN", "HklY6xKgx4", "Hkg1r9LyeE", "rkxYUy8c0m", "B1eviDZ90Q", "r1eMyJtFAX", "SJl9EyzYCX", "r1xF8KtDAQ", "HklZyU6H0X", "BJl2FgvHAQ", "Byl2y81BA7", "Syl_hCHNCX", "rylLCTSV0Q", "B1lkPgr40m", "Skgw0644AQ", "rJxh264VAX", "H1xAh5EE0Q", "H1egc944A7", "HkxYyeX5n7", "rkeY6_gchm", "r1eGErKF2m" ], "note_type": [ "official_comment", "comment", "official_comment", "comment", "official_comment", "comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1546669989237, 1546470419887, 1546311415199, 1546283674397, 1545651542236, 1545145589420, 1544749248618, 1544673847465, 1543294801273, 1543276446795, 1543241433572, 1543212850326, 1543113041177, 1542997465259, 1542971523799, 1542940132406, 1542901423940, 1542901197721, 1542897751204, 1542897102895, 1542897076256, 1542896310436, 1542896263719, 1541185504786, 1541175488934, 1541145898133 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper625/Authors" ], [ "~krtin_kumar1" ], [ "ICLR.cc/2019/Conference/Paper625/Authors" ], [ "~krtin_kumar1" ], [ "ICLR.cc/2019/Conference/Paper625/Authors" ], [ "~krtin_kumar1" ], [ "ICLR.cc/2019/Conference/Paper625/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper625/Authors" ], [ "ICLR.cc/2019/Conference/Paper625/Authors" ], [ "ICLR.cc/2019/Conference/Paper625/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper625/Authors" ], [ "ICLR.cc/2019/Conference/Paper625/Authors" ], [ "ICLR.cc/2019/Conference/Paper625/Authors" ], [ "ICLR.cc/2019/Conference/Paper625/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper625/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper625/Authors" ], [ "ICLR.cc/2019/Conference/Paper625/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper625/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper625/Authors" ], [ "ICLR.cc/2019/Conference/Paper625/Authors" ], [ "ICLR.cc/2019/Conference/Paper625/Authors" ], [ "ICLR.cc/2019/Conference/Paper625/Authors" ], [ "ICLR.cc/2019/Conference/Paper625/Authors" ], [ "ICLR.cc/2019/Conference/Paper625/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper625/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper625/AnonReviewer2" ] ], "structured_content_str": [ "{\"title\": \"reply\", \"comment\": \"Method 2 is exactly what we proposed. Our work is not a simple application of KD for multilingual NMT (e.g., method 1). Instead, we need to carefully design the distillation method, including when to distill and which language to distill.\"}", "{\"comment\": \"I get your point but let me explain things in a more clear way, there are 3 methods,\", \"method_1\": \"NLL + KD Loss (without early stopping pure KD)\", \"method_2\": \"NLL+KD or only NLL (early stopping or oscillation with small mag. and freq.) (this is not pure KD, sometimes and for some languages NLL loss (baseline) is used)\", \"method_3\": \"NLL loss only (Multilingual)\\n\\nWe know on Tedtalk dataset Method 2 > Method 3 > Method 1\\n\\nThis might indicate that Method 2 i.e. 
early stopping is the cause of significant improvement and KD might not be helping a lot, since Method 3 > Method 1 and Method 1 uses the KD loss always.\", \"title\": \"reply\"}", "{\"title\": \"Reply\", \"comment\": \"Thanks for your interest in this work.\\n\\nHowever, your conclusion is not correct. Knowledge distillation performs better than the multilingual baseline (purely NLL loss) on all the language pairs you listed. \\n\\nEarly stopping itself belongs to the distillation method we proposed. Training without early stopping is another alternative to our proposed method, but we have demonstrated that this alternative will yield worse performance in the paper. So we choose the version with early stopping, which makes sense as you cannot continue to learn from a teacher that is much worse than you (1 BLEU score in this paper) when you gradually improve and surpass your teacher, which will make you worse too. \\n\\nAfter early stopping on a certain language, purely NLL loss is used while some other languages are still using the distillation loss. This is not equivalent to the baseline since the multilingual model now has already achieved better performance than the baseline model due to distillation, and also, the baseline uses NLL loss on all languages from the beginning while our method switches to NLL loss dynamically depending on the accuracy gap between teacher and student on each language. In this stage, NLL loss is just introduced to ensure the model is not biased toward the languages still trained with the distillation loss, since if you do not use any loss on the early-stopped language, the accuracy will drop because the model is then trained to optimize only the other languages. \\n\\nLastly, you cannot achieve the improvements in this paper with purely NLL loss from the beginning.\"}", "{\"comment\": \"Thanks for your reply.\\n\\nSo, to conclude, there will be oscillation with small magnitude and frequency. The reason I am curious about oscillation is because of your results on the Ted Talk Dataset,\\n\\n------------------------------------------------------------------------------------------------------------\\nLanguage | Baseline | Early Stopping | No Early Stopping \\nBg | 27.76 | 29.18 (+1.42) | 28.07 (0.31) \\nEt | 14.86 | 15.63 (+0.77) | 12.64 (-2.22) \\nFi | 16.12 | 17.23 (+1.11) | 15.13 (-0.99) \\nFr | 38.27 | 34.32 (-4.4) | 33.69 (-4.58) \\nGl | 30.32 | 31.9 (+1.58) | 30.28 (-0.04) \\nHi | 19.93 | 21 (+1.07) | 18.86 (-1.07) \\nHy | 20.25 | 21.17 (+0.92) | 19.88 (-0.37) \\nKa | 16.71 | 18.27 (+1.56) | 14.04 (-2.67) \\nKu | 11.83 | 13.38 (+1.55) | 8.5 (-3.33) \\nMk | 31.85 | 32.65 (+0.8) | 32.1 (0.25) \\nMy | 13.85 | 15.17 (+1.32) | 14.02 (0.17) \\nSl | 22.52 | 23.68 (+1.16) | 22.1 (-0.42) \\nZh | 18.81 | 19.39 (+0.58) | 17.22 (-1.59) \\nPl | 23.5 | 24.3 (+0.8) | 25.05 (1.55) (0.75) \\nSk | 28.97 | 29.91 (+0.94) | 30.45 (0.94) (0.54) \\nSv | 35.92 | 36.92 (+1) | 37.88 (1.96) (0.96) \\n-------------------------------------------------------------------------------------------------------------\\n\\nBased upon the above results:\\n- No early stopping is better in 6/16 cases; out of these, the last 3 languages Pl, Sk and Sv have significantly higher scores\\n- For the 3 languages Pl, Sk, Sv no early stopping is better than early stopping, while for the other cases early stopping performs better\\n- Thus knowledge distillation seems to be significantly beneficial for only 3/16 languages.
Since you do not report results without early stopping on other datasets, I do not know if KD is actually beneficial in other cases. \\n- If KD is not beneficial, then the performance improvement that you have achieved is due to early stopping, which in a way tends to pick the better of the two (Baseline vs KD), since only using NLL is essentially the baseline model\\n\\nWhat is your opinion about this conclusion? Do you have results without early stopping on other datasets as well?\", \"title\": \"reply\"}", "{\"title\": \"reply\", \"comment\": \"Thanks for your interest in this work. Yes, once the student improves beyond the threshold, the NLL loss is used. But when the accuracy of the student model drops below the threshold, the ALL loss is used again. There may be some situations with small oscillations, but not that much, within +/- 0.2 BLEU score. Small accuracy oscillation is common in deep model training. In some cases, when NLL loss is used, the accuracy of the student model can be improved again due to multilingual training, away from the oscillation area.\"}", "{\"comment\": \"Based upon Algorithm 1, the ALL loss and NLL loss can oscillate around the threshold point, but in Section 3.3, in the discussion about early stopping, it seems that you imply that once the student improves beyond the threshold, the NLL loss is always used. Which scenario is correct? Will there be an oscillation?\", \"title\": \"Question about early stopping\"}", "{\"metareview\": \"This paper presents good empirical results on an important and interesting task (translation between several language pairs with a single model). There was solid communication between the authors and the reviewers leading to an improved updated version and consensus among the reviewers about the merits of the paper.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Accept\"}", "{\"title\": \"Thanks for your attention\", \"comment\": \"Thanks for the detailed comments. We believe we have addressed your concerns and clarified your points in the rebuttal. Do you have an updated assessment of our paper? Thanks for your consideration.\"}", "{\"title\": \"reply from authors\", \"comment\": \"Thanks for pointing this out. The BLEU scores in [1] are calculated by multi-bleu.perl, so the scores are directly comparable on WMT14 en-de. We have also noticed that mteval-v13a.pl is used in [2]. We have calculated our BLEU scores on WMT16 with mteval-v13a.pl, and found that the BLEU scores by mteval-v13a.pl are just 0.3 BLEU score (on average) less than those calculated by multi-bleu.perl on this dataset. The overall BLEU scores are still comparable (roughly within +/- 0.5 BLEU score).\"}", "{\"title\": \"Clarification needed for BLEU scores\", \"comment\": \"I appreciate that you ran one-to-many experiments. I'm a bit confused about the BLEU comparison on WMT. In your paper, you mention using multi-bleu.perl, while [2] uses mteval-v13a.pl, which does its own internal tokenization (on the detokenized input). These two scripts are not equivalent.\"}", "{\"title\": \"reply from authors (further results)\", \"comment\": \"We update the results of the higher-capacity experiment on the Ted talk dataset, with transformer_base as the teacher model. The average gain of our distillation method over individual models (\u25b31) is 4.31, and the average gain of our distillation method over the multilingual baseline (\u25b32) is 1.06.
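The threshold rule debated in the exchange above can be written down in a few lines. The sketch below is only one reading of the thread (the function and variable names are hypothetical, and the paper's Algorithm 1 remains authoritative); the 1-BLEU threshold is taken from the authors' replies, and because the gap is re-checked at every checkpoint, the loss choice can oscillate mildly around the threshold, as the authors describe.

```python
THRESHOLD = 1.0  # BLEU gap at which distillation is toggled, per the thread above

def select_loss(lang, student_bleu, teacher_bleu, nll_loss, kd_loss):
    """Return the training loss for one language at the current checkpoint.

    ALL = NLL + KD while the teacher is still usefully ahead; plain NLL once
    the student has clearly surpassed the teacher ("early stopping"). The
    comparison is re-evaluated at every checkpoint, so the choice may flip
    back if the student's accuracy drops below the threshold again.
    """
    if student_bleu[lang] > teacher_bleu[lang] + THRESHOLD:
        return nll_loss              # early stopping: the teacher is now worse
    return nll_loss + kd_loss        # "ALL" loss: keep matching the teacher
```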
You can see that as we change the teacher model from small to base, the gains over the multilingual baseline also improve from 0.68 to 1.06. The detailed numbers are shown below.\\n\\n---------------------------------------------------------------------------------------------------------------------------------\\nLanguage | ar-en | bg-en | cs-en | da-en | de-en | el-en | es-en | et-en | fa-en | fi-en | frca-en\\n\u25b31 | 0.05 | -4.42 | 2.09 | 4.91 | 0.16 | 1.76 | 0.92 | 8.53 | 0.56 | 6.95 | 16.2\\n\u25b32 | 2.63 | 4.65 | 0.56 | 1.58 | 1.73 | 0.53 | 0.84 | 0.66 | 1.96 | 0.61 | 0.74\\n---------------------------------------------------------------------------------------------------------------------------------\\nLanguage | fr-en | gl-en | he-en | hi-en | hr-en | hu-en | hy-en | id-en | it-en | ja-en | ka-en\\n\u25b31 | 0.71 | 19.37 | 0.2 | 10.37 | 1.59 | 0.86 | 9.32 | 1.51 | 0.22 | 0.33 | 11.62\\n\u25b32 | 1.81 | 0.39 | 2.19 | 0.67 | 0.7 | 1.29 | 0.31 | 1.09 | 1.43 | 0.79 | 0.25\\n---------------------------------------------------------------------------------------------------------------------------------\\nLanguage | ko-en | ku-en | lt-en | mk-en | my-en | nb-en | nl-en | pl-en | ptbr-en | pt-en | ro-en\\n\u25b31 | 0.03 | 8.34 | 4.58 | 10.62 | 8.07 | 14.64 | 0.37 | 1.87 | 0.6 | 8.8 | 0.68\\n\u25b32 | 1.25 | 0.8 | 0.47 | 0.5 | 0.35 | 0.54 | 1.33 | 1.09 | 1.22 | 1.39 | 1.14\\n---------------------------------------------------------------------------------------------------------------------------------\\nLanguage | ru-en | sk-en | sl-en | sq-en | sr-en | sv-en | th-en | tr-en | uk-en | vi-en | zh-en\\n\u25b31 | 1.28 | 4.55 | 11.58 | 5.02 | 1.69 | 2.19 | 1.59 | 0.03 | 2.05 | 0.03 | 7.08\\n\u25b32 | 0.8 | 0.87 | 0.83 | 1.24 | 0.54 | 0.42 | 0.8 | 1.87 | 0.43 | 0.82 | 0.61\\n---------------------------------------------------------------------------------------------------------------------------------\"}", "{\"title\": \"rebuttal from authors (more results)\", \"comment\": \"We provide the results of one-to-many translation on WMT16 here. The BLEU scores in () represent the difference between the multilingual model and individual models. Delta represents the improvements of our multi-distillation method over the multi-baseline. We can see our method consistently outperforms the multilingual baseline on all languages, with nearly 1-2 BLEU score gains, and can nearly match the accuracy of the individual models.\\n\\n------------------------------------------------------------------------------------------------------------\\nLanguage | Individual | Multilingual-baseline | Multilingual-distill | Delta \\nen-cs | 22.58 | 21.39 (-1.19) | 23.10 (0.62) | 1.81 \\nen-de | 31.40 | 30.08 (-1.32) | 31.42 (0.02) | 1.34 \\nen-fi | 22.08 | 19.52 (-2.56) | 21.56 (-0.52) | 2.04 \\nen-lv | 14.92 | 14.51 (-0.41) | 15.32 (0.40) | 0.81 \\nen-ro | 31.67 | 29.88 (-1.79) | 31.39 (-0.28) | 1.51 \\nen-ru | 24.36 | 22.96 (-1.40) | 24.02 (-0.34) | 1.06 \\n-------------------------------------------------------------------------------------------------------------\\n\\nWe also compare our baseline with previous works. As we use the transformer_base model, we directly compare our individual baseline with the Transformer paper [1] on the En-De translation pair on the WMT14 test set. Our individual model achieves a 27.27 BLEU score while the Transformer paper achieves 27.30, which is comparable.
We also compare our individual baseline with previous work (such as Table 3 in [2]) on WMT16, which is also comparable.\\n\\n\\n[1] Vaswani, Ashish, et al. \\\"Attention is all you need.\\\" NIPS 2017.\\n[2] Sennrich, Rico, et al. \\\"The University of Edinburgh's Neural MT Systems for WMT17.\\\" arXiv preprint arXiv:1708.00726 (2017).\"}", "{\"title\": \"response from authors\", \"comment\": \"Thanks for your comments!\\n\\nWe found that language family has a smaller influence than training data size on the gains of our multilingual distillation method. However, language family can have an influence on the performance of multilingual model training. According to our previous studies, if languages from the same language family are trained together using the multilingual baseline method, the accuracy will still drop compared with individual model training, but the accuracy drop is smaller than when training languages from different families, since one language may benefit from the data of similar languages.\"}", "{\"title\": \"reply from authors\", \"comment\": \"Thanks for your comments again!\\n\\n1. For the training time, we think it is reasonable to assume the individual models are pre-given. Every production system at least needs to compare the accuracy of the multilingual model with the individual models, to make sure the multilingual model is accurate enough for online serving. Therefore, the multilingual baseline model also needs the individual models for comparison. In this situation, training the individual models need not be counted toward the total training time of our method. \\n\\n2. We just chose the default setting for the transformer model in the original paper [1], without any tuning of the model capacity. For the original model, we use transformer_small with 20M model parameters. For the higher-capacity model, we use transformer_base with 65M model parameters. \\n\\nFor the improvements, we calculate the average gain over the 44 languages on the Ted talk dataset. For transformer_small, \u25b31 (distill - individual) is 3.79 and \u25b32 (distill - baseline) is 1.35. For transformer_base, \u25b31 (distill - individual) is 4.25 and \u25b32 (distill - baseline) is 0.68. The rebuttal experiments are actually time-limited, so for the higher-capacity model, we just increase the capacity of the multilingual-baseline and multilingual-distill (our method) from transformer_small to transformer_base, keeping the individual models as transformer_small.
So the teacher models (individual models) for our method are still transformer_small, and thus the gain over the multilingual-baseline (\u25b32) becomes smaller. We find that small teacher models can still teach a large student model with improvements, as a byproduct finding of our experiment. We will run the experiments with transformer_base as the individual models (teacher models) to verify that our method can consistently achieve large gains with a higher-capacity model.\\n\\n3. For the En-Cs data, there are four data sources for the WMT16 En-Cs parallel training data, according to http://www.statmt.org/wmt16/translation-task.html . They are Europarl v7 (647K), Common Crawl corpus (162K), News Commentary v11 (191K) and CzEng 1.6pre (51M). The total training data is 52M. In order to make the training data roughly the same across different languages, we just choose the first three sources, which are 1M sentence pairs in total. We want to make our experiment setting clean and simple, so we do not additionally take part of the data from CzEng 1.6pre (51M) to bring the total to ~5M. We will add this data description in the appendix. \\n\\n[1] Vaswani, Ashish, et al. \\\"Attention is all you need.\\\" NIPS 2017.\"}", "{\"title\": \"okay, good to know\", \"comment\": \"Thanks for running all these extra experiments.\"}", "{\"title\": \"thanks for confirming\", \"comment\": \"Summarizing the responses:\\n\\n1. So training time expressed in GPU hours does more than double, although wall time can be shorter if enough GPUs are available to train individual languages in parallel. I'm not sure I buy the argument about keeping pre-trained individual models around, since you'd presumably want to re-train if more data or better models become available.\\n\\n2. Comparing these higher-capacity results to table 3, it looks like the gains over multilingual are smaller. It would be helpful to give average gain, both for table 3 and these new numbers. Also, this isn't very meaningful without knowing what the capacity increase was, and whether you optimized capacity for this setting.\\n\\n3. If the goal was to have roughly equal corpus sizes, why not use ~5M sentences from En-Cs, rather than 1M?\"}", "{\"title\": \"rebuttal from authors\", \"comment\": \"We thank Reviewer 3 for the reviews and comments! Here are our responses to the comments.\\n\\n1. Regarding the results of many-to-many translations\\nWe have provided the English-to-many translation results on the IWSLT dataset in the table below. The BLEU scores in () represent the difference between the multilingual model and individual models. Delta represents the improvements of our multi-distillation method over the multi-baseline. We can see our method consistently outperforms the multilingual baseline on all languages, and can nearly match or even surpass the accuracy of the individual models, even though one-to-many translation is considered harder than many-to-one translation. We have also updated the results in the paper. \\n\\nWe will provide more results on the WMT dataset in the following days and make comparisons with previous works.
\\n\\n------------------------------------------------------------------------------------------------------------\\nLanguage | Individual | Multilingual-baseline | Multilingual-distill | Delta \\nen-ar | 13.67 | 12.73 (-0.94) | 13.80 (0.13) | 1.07 \\nen-cs | 17.81 | 17.33 (-0.48) | 18.69 (0.88) | 1.37 \\nen-de | 26.13 | 25.16 (-0.97) | 26.76 (0.63) | 1.60 \\nen-he | 24.15 | 22.73 (-1.42) | 24.42 (0.27) | 1.69 \\nen-nl | 30.88 | 29.51 (-1.37) | 30.52 (-0.36) | 1.01 \\nen-pt | 37.63 | 35.93 (-1.70) | 37.23 (-0.40) | 1.30 \\nen-ro | 27.23 | 25.68 (-1.55) | 27.11 (-0.12) | 1.42 \\nen-ru | 17.40 | 16.26 (-1.14) | 17.42 (0.02) | 1.16 \\nen-th | 26.45 | 27.18 (0.73) | 27.62 (1.17) | 0.45 \\nen-tr | 12.47 | 11.63 (-0.84) | 12.84 (0.37) | 1.21 \\nen-vi | 27.88 | 28.04 (0.16) | 28.69 (0.81) | 0.65 \\nen-zh | 10.95 | 10.12 (-0.83) | 10.41 (-0.54) | 0.29 \\n-------------------------------------------------------------------------------------------------------------\\n\\n2. Regarding training time\\nThe individual models need to be pre-trained, which will incur additional time. According to the training time statistics on the IWSLT dataset with an NVIDIA V100 GPU, it takes nearly 4 hours to train an individual model with 1 GPU. The total GPU time is 4 hours * 12 GPUs for 12 languages. The training time for the multilingual baseline is nearly 11 hours * 4 GPUs, while our method takes nearly 13 hours * 4 GPUs. Our method only takes an extra 2 hours * 4 GPUs for the multilingual training and 4 hours * 12 GPUs for the individual model pre-training. Furthermore, we can assume the individual models are pre-given, which is reasonable because a production system usually wants to adapt the individual translation models into a multilingual setting, with the benefit of saving maintenance cost while incurring no accuracy degradation or even gaining accuracy, which is exactly the goal of this work.\\n\\n3. Regarding the top-K distillation\\nYes, we normalize the top K probabilities so that they sum to 1. We have added the description in the new version.\\n\\n4. Regarding sequence-level knowledge distillation\\nWe have tried sequence-level knowledge distillation. It results in consistently inferior accuracy on all languages compared with the word-level knowledge distillation used in our work. The results are listed below. \\n-------------------------------------------------------------------------------------------\\nLanguage | Sequence-level distillation | Word-level distillation\\nen-ar | 12.79 | 13.80 \\nen-cs | 17.01 | 18.69 \\nen-de | 25.89 | 26.76 \\nen-he | 22.92 | 24.42 \\nen-nl | 29.99 | 30.52 \\nen-pt | 36.12 | 37.23 \\nen-ro | 25.75 | 27.11 \\nen-ru | 16.38 | 17.42 \\nen-th | 27.52 | 27.62 \\nen-tr | 11.11 | 12.84 \\nen-vi | 28.08 | 28.69 \\nen-zh | 10.25 | 10.41 \\n--------------------------------------------------------------------------------------------\"}", "{\"title\": \"rebuttal from authors [2/2]\", \"comment\": \"\", \"question4\": \"Can you comment on the total training time?\", \"answer4\": \"The individual models need to be pre-trained, which will incur additional time. According to the training time statistics on the IWSLT dataset with an NVIDIA V100 GPU, it takes nearly 4 hours to train an individual model with 1 GPU. The total GPU time is 4 hours * 12 GPUs for 12 languages. The training time for the multilingual baseline is nearly 11 hours * 4 GPUs, while our method takes nearly 13 hours * 4 GPUs.
Our method takes an extra 2 hours * 4 GPUs for the multilingual training and 4 hours * 12 GPUs for the individual model pre-training. Furthermore, we can assume the individual models are pre-given, which is reasonable because a production system usually wants to adapt the individual translation models into a multilingual setting, with the benefit of saving maintenance cost while incurring no accuracy degradation or even gaining accuracy, which is exactly the goal of this work.\", \"question5\": \"What happens when you do not stop the distillation?\", \"answer5\": \"We have shown the results when we do not stop the distillation in the submitted version (now Table 5 in the updated version). The performance of most of the languages will get worse.\", \"question6\": \"The performance gain.\", \"answer6\": \"We\u2019d like to point out that our goal is to train multiple languages in one model, without performance degradation compared with individual models. Our method actually outperforms individual models on many languages, which is a good byproduct. Furthermore, on the IWSLT/WMT/Ted datasets, we outperform the multilingual baseline on most languages by more than 1 BLEU score, and on some languages by more than 2 BLEU scores. These are very good improvements for neural machine translation, as stated in previous works [1][2].\\n\\n[1] Gehring, Jonas, et al. \\\"Convolutional sequence to sequence learning.\\\" ICML 2017.\\n[2] Vaswani, Ashish, et al. \\\"Attention is all you need.\\\" NIPS 2017.\"}", "{\"title\": \"rebuttal from authors [1/2]\", \"comment\": \"We thank Reviewer 2 for the reviews and comments! Here are our responses to the comments.\", \"question1\": \"How does the divergence between the source and target language influence the performance, and why is the improvement more important on some languages and less on others?\", \"answer1\": \"We found that the performance gains of our method correlate with the training data size of each language. If a language has small training data, then it is likely to get more improvement due to the benefit of multilingual training.\", \"question2\": \"Why are the improvements on the TED dataset so much higher?\", \"answer2\": \"We guess you meant the improvements of our method over individual models on the TED dataset. Some languages in the TED dataset are of small data size, and thus they get higher improvement from multilingual training, by leveraging the training data from other (similar) languages.\", \"question3\": \"What happens when the target language is something other than English?\", \"answer3\": \"We have provided the English-to-many translation results on the IWSLT dataset in the table below. The BLEU scores in () represent the difference between the multilingual model and individual models. Delta represents the improvements of our multi-distillation method over the multi-baseline. We can see our method consistently outperforms the multilingual baseline on all languages, and can nearly match or even surpass the accuracy of the individual models, even though one-to-many translation is considered harder than many-to-one translation. We have also updated the results in the paper.
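The top-K word-level distillation described in the rebuttals (with the top-K teacher probabilities renormalized to sum to 1, per point 3 of the reply to Reviewer 3 above) can be sketched as follows. The helper is illustrative, not the paper's code: k=8 echoes the reviewers' mention of the top-8 experiments, and the 0.5 mixing weight echoes AnonReviewer1's review, but the exact formulation in the paper may differ.

```python
import torch
import torch.nn.functional as F

def topk_word_kd_loss(student_logits, teacher_logits, k=8, alpha=0.5):
    """Word-level top-K knowledge distillation (hypothetical helper).

    student_logits, teacher_logits: tensors of shape [batch, seq_len, vocab].
    """
    top_p, top_idx = teacher_logits.softmax(-1).topk(k, dim=-1)
    top_p = top_p / top_p.sum(dim=-1, keepdim=True)   # renormalize to sum to 1
    log_q = F.log_softmax(student_logits, dim=-1).gather(-1, top_idx)
    return -alpha * (top_p * log_q).sum(-1).mean()    # cross-entropy on top-K only

# The per-language training loss would mix this with the usual NLL term, e.g.
# loss = nll_loss + topk_word_kd_loss(s_logits, t_logits)
```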
We will provide more results in the following days.\\n------------------------------------------------------------------------------------------------------------\\nLanguage | Individual | Multilingual-baseline | Multilingual-distill | Delta \\nen-ar | 13.67 | 12.73 (-0.94) | 13.80 (0.13) | 1.07 \\nen-cs | 17.81 | 17.33 (-0.48) | 18.69 (0.88) | 1.37 \\nen-de | 26.13 | 25.16 (-0.97) | 26.76 (0.63) | 1.60 \\nen-he | 24.15 | 22.73 (-1.42) | 24.42 (0.27) | 1.69 \\nen-nl | 30.88 | 29.51 (-1.37) | 30.52 (-0.36) | 1.01 \\nen-pt | 37.63 | 35.93 (-1.70) | 37.23 (-0.40) | 1.30 \\nen-ro | 27.23 | 25.68 (-1.55) | 27.11 (-0.12) | 1.42 \\nen-ru | 17.40 | 16.26 (-1.14) | 17.42 (0.02) | 1.16 \\nen-th | 26.45 | 27.18 (0.73) | 27.62 (1.17) | 0.45 \\nen-tr | 12.47 | 11.63 (-0.84) | 12.84 (0.37) | 1.21 \\nen-vi | 27.88 | 28.04 (0.16) | 28.69 (0.81) | 0.65 \\nen-zh | 10.95 | 10.12 (-0.83) | 10.41 (-0.54) | 0.29 \\n-------------------------------------------------------------------------------------------------------------\"}", "{\"title\": \"rebuttal from authors [1/2]\", \"comment\": \"We thank Reviewer 1 for the reviews and comments! Here are our responses to the comments.\\n\\n1. Regarding the training time\\nThe individual models need to be pre-trained, which will incur additional time. According to the training time statistics on the IWSLT dataset with an NVIDIA V100 GPU, it takes nearly 4 hours to train an individual model with 1 GPU. The total GPU time is 4 hours * 12 GPUs for 12 languages. The training time for the multilingual baseline is nearly 11 hours * 4 GPUs, while our method takes nearly 13 hours * 4 GPUs. Our method only takes an extra 2 hours * 4 GPUs for the multilingual training and 4 hours * 12 GPUs for the individual model pre-training. Furthermore, we can assume the individual models are pre-given, which is reasonable because a production system usually wants to adapt the already trained individual translation models into a multilingual model, with the benefit of saving maintenance cost while incurring no accuracy degradation or even gaining accuracy, which is exactly the goal of this work.\\n\\n2. Regarding a higher-capacity multilingual model\\nWe have trained larger models on the Ted talk dataset.
Our method still consistently outperforms the multilingual baseline model and the individual models, as shown in the table below, where \\u25b31 means the BLEU score improvements of our method over the individual models, \\u25b32 means the BLEU score improvements of our method over the multi-baseline model.\\n----------------------------------------------------------------------------------------------------------------------------------------\\nLanguage | ar-en | bg-en | cs-en | da-en | de-en | el-en | es-en | et-en | fa-en | fi-en | frca-en\\n\\u25b31\\t | 0.14 | -5.39 | 2.13 | 5.04 | 0.39 | 1.66 |0.69 | 8.46 | 0.32 | 6.78 | 16\\n\\u25b32 | 1.73 | 3.27 | 0.1 | 1.11 | 1.54 | 0.23 | 0.31 | 0.34 | 1.52 | 0.21 | 0.34\\n----------------------------------------------------------------------------------------------------------------------------------------\\nLanguage | fr-en | gl-en | he-en | hi-en | hr-en | hu-en| hy-en| id-en | it-en | ja-en | ka-en\\n\\u25b31\\t | 0.37 | 19.38 | -0.17 | 10.49 | 1.64 | 0.35 | 9.59 | 1.58 | 0.19 | 0.14 | 11.59\\n\\u25b32 | 0.87 | 0.2 | 1.71 | 0.48 | 0.57 | 0.4 | -0.12 | 0.66 | 1.23 | 0.49 | 0.12\\n----------------------------------------------------------------------------------------------------------------------------------------\\nLanguage | ko-en | ku-en | lt-en | mk-en | my-en | nb-en | nl-en | pl-en | ptbr-en | pt-en | ro-en\\n\\u25b31 | 0.16 | 7.89 | 4.53 | 10.89 | 8.19 | 14.24 | -0.13 | 1.91 | 0.61 | 8.82 | 0.72\\n\\u25b32 | 1.18 | 0.4 | 0.38 | 0.39 | 0.08 | 0.04 | 0.73 | 0.79 | 0.88 | 1.01 | 0.74\\n----------------------------------------------------------------------------------------------------------------------------------------\\nLanguage | ru-en | sk-en | sl-en | sq-en | sr-en | sv-en | th-en | tr-en | uk-en | vi-en | zh-en\\n\\u25b31 | 1.29 | 4.58 | 11.98 | 5.16 | 1.79 | 2.46 | 1.37 | -0.04 | 2.13 | 0.25 | 6.96\\n\\u25b32 | 0.7 | 0.58 | 0.73 | 0.88 | 0.14 | 0.29 | 0.4 | 1.53 | 0.28 | 0.42 | 0.12\\n----------------------------------------------------------------------------------------------------------------------------------------\\n\\n3. Regarding corpora size\\nThere is a typo in Table 9 in the appendix. The training data size for WMT16 En-De is 4.5M bilingual sentence pairs, while for En-Cs is 1M bilingual sentence pairs. We have corrected it in the new version. We used 1M training data for En-Cs in order to make the training data on all languages roughly the same.\"}", "{\"title\": \"rebuttal from authors [2/2]\", \"comment\": \"4. Regarding out-of-English translation\\nWe have provided the English-to-many translation results on the IWSLT dataset in the table below. The BLEU scores in () represent the difference between the multilingual model and individual models. Delta represents the improvements of our multi-distillation method over the multi-baseline. We can see our method consistently outperforms the multilingual baseline on all languages, and can nearly match or even surpass the accuracy of the individual models, even if one-to-many translation is considered harder than many-to-one translation. We have also updated the results in the paper. 
We will provide more results in the following days.\\n------------------------------------------------------------------------------------------------------------\\nLanguage | Individual | Multilingual-baseline | Multilingual-distill | Delta \\nen-ar | 13.67 | 12.73 (-0.94) | 13.80 (0.13) | 1.07 \\nen-cs | 17.81 | 17.33 (-0.48) | 18.69 (0.88) | 1.37 \\nen-de | 26.13 | 25.16 (-0.97) | 26.76 (0.63) | 1.60 \\nen-he | 24.15 | 22.73 (-1.42) | 24.42 (0.27) | 1.69 \\nen-nl | 30.88 | 29.51 (-1.37) | 30.52 (-0.36) | 1.01 \\nen-pt | 37.63 | 35.93 (-1.70) | 37.23 (-0.40) | 1.30 \\nen-ro | 27.23 | 25.68 (-1.55) | 27.11 (-0.12) | 1.42 \\nen-ru | 17.40 | 16.26 (-1.14) | 17.42 (0.02) | 1.16 \\nen-th | 26.45 | 27.18 (0.73) | 27.62 (1.17) | 0.45 \\nen-tr | 12.47 | 11.63 (-0.84) | 12.84 (0.37) | 1.21 \\nen-vi | 27.88 | 28.04 (0.16) | 28.69 (0.81) | 0.65 \\nen-zh | 10.95 | 10.12 (-0.83) | 10.41 (-0.54) | 0.29 \\n-------------------------------------------------------------------------------------------------------------\\n\\n5. Regarding the weight on the distillation loss and turning back the distillation loss\\nThanks for the suggestion. We have provided the results for turning back the distillation loss with a hard threshold in the submitted version. According to the numbers (Table 7 in the updated version), back distillation improves the accuracy of individual models. We quickly tried a simple adaptive weight that changes according to the BLEU gap between the teacher and student model: \\\\lambda = 0.9* (1/2)^{max(BLEU_student + 2 - BLEU_teacher, 0)}, which means that if a student is lower than the teacher by more than 2 BLEU points, the weight is 0.9. After that, the weight is decayed exponentially. The initial results on the IWSLT dataset demonstrate that there is not much difference (with an average of 0.3 BLEU score) compared with the constant weight we used in this paper. We will conduct a comprehensive study on the kind of back distillation you mentioned in future work. \\n\\n6. Regarding the gradient accumulation strategy\\nWe have conducted experiments to analyze whether gradient accumulation is critical. We found it is not critical for the model training. We ran an experiment on the IWSLT dataset without gradient accumulation, i.e., updating the model parameters immediately with the training data from a single language. But in order to make sure the updates in the two settings have the same batch size, which is a critical hyperparameter for model training, we increase the batch size for the single language by 12 times, to be the same as the batch size in the gradient accumulation setting. We found the accuracies of the two settings are nearly the same, within at most 0.3 BLEU higher or lower.\\n\\n7. Regarding the writing and typos\\nWe have fixed the typos in the new version. In top-K distillation, the teacher distribution is renormalized.\"}", "{\"title\": \"Effective knowledge distillation for multilingual NMT, at the cost of increased training time\", \"review\": \"The authors apply knowledge distillation for many-to-one multilingual\\nneural machine translation, first training separate models for each language\\npair.
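For concreteness, the adaptive distillation weight quoted in point 5 above can be implemented directly; only the formula itself is taken from the reply, while the function wrapper and the example values are illustrative.

```python
def adaptive_lambda(bleu_student, bleu_teacher):
    """Adaptive distillation weight from the rebuttal above:
    lambda = 0.9 * (1/2) ** max(BLEU_student + 2 - BLEU_teacher, 0).

    While the student trails the teacher by more than 2 BLEU the weight stays
    at 0.9; past that point it decays exponentially with the gap."""
    return 0.9 * 0.5 ** max(bleu_student + 2.0 - bleu_teacher, 0.0)

# e.g. adaptive_lambda(20.0, 25.0) == 0.9 and adaptive_lambda(25.0, 25.0) == 0.225
```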
For most language pairs, performance matches or improves upon\\nsingle-task baselines.\", \"strengths\": \"Improvements upon the baselines are fairly impressive, especially for the\\n44-language model.\\n\\nThe approach is quite simple and could be easily implemented by other groups.\\n\\nThe paper is well-written and easy to understand.\\n\\nAt inference, only a single model needs to be retained, which is memory-efficient.\", \"weaknesses\": \"The authors only test distillation in a many-to-one scenario. I believe that\\nproviding results for many-to-many multilingual NMT would be valuable.\\n\\nOverall, this approach increases training time as all single-task models\\nmust have converged before beginning distillation.\\n\\nThe authors provide no direct comparison to other work, which makes it hard to\\nknow how strong the baselines are. At least for WMT, I would suggest reporting\\nresults with mteval-v13a (or SACREBLEU), so that results can be compared against\\nofficial results.\", \"questions\": \"For the top-K approach, do you normalize the top K probabilities so that they\\nsum to 1 or not?\\n\\nDid you consider applying sequence knowledge distillation (Kim and Rush, 2016)\\n(using the baseline beam search output as references) instead of word knowledge\\ndistillation?\\n\\n***\", \"edit\": \"In my opinion, the changes made after the review period clearly improve the quality of the paper. I am increasing my rating from 6 to 7.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Straightforward, effective technique for improving multilingual NMT, some experiments missing.\", \"review\": \"Summary: Train a multilingual NMT system using the technique of Johnson et al (2017), but augment the standard cross-entropy loss with a distillation component based on individual (single-language-pair) teacher models. Periodically compare the validation BLEU score of the multilingual model with that of each individual model, and turn off distillation for language pairs where the multilingual model is better. On three different corpora (IWSLT, WMT, TED) with into-English translation from numbers of source languages ranging from 6 (WMT) to 44 (TED), this technique outperforms standard distillation for every language pair, and outperforms the individual models for most language pairs. Supplementary experiments justify the strategy of selectively turning off distillation, and quantify the effect using only the top 8 vocabulary items in distillation.\\n\\nThe main idea makes sense, and the results are very convincing, especially since it appears that hyper-parameters were not tuned extensively (eg, weight of 0.5 on the distillation loss, for all language pairs). Implementation should be very straightforward, especially with the trick of pre-computing top-k probabilities from the teacher model at each corpus position. One small barrier to practical application that the authors fail to acknowledge is the requirement to train individual models, which will at least double training time compared to a single multilingual model.\\n\\nThe main missing experiment is higher-capacity multilingual models, which Johnson et al show to be beneficial in settings with a large number of language pairs. Using a multilingual model of the same (relatively small) size as the individual models as is done here is likely to be suboptimal, especially for the 44-language pair TED setting. 
A related point is that the corpora used seem to be quite small (eg 4.5M and 1M sentences for WMT Czech and German, respectively, while the available training corpora are closer to 15M and 4.5M). Although performance relative to individual models is still impressive - and seems to be better than than in previous work - this makes the experiments comparing to the multilingual baseline less meaningful.\\n\\nAlso missing are experiments on out-of-English translation, which would establish the viability of the proposed technique for many-to-many translation via bridging. Out-of-English is a more difficult problem than into-English. I can\\u2019t see any reason the proposed technique wouldn\\u2019t also work in this setting, but this remains to be shown.\\n\\nAlthough it\\u2019s great that the technique is shown to work without embellishments, there are a few obvious strategies it would have been interesting to explore, such as making the weight on the distillation loss dependent on the difference in performance between the multilingual and individual models; and allowing for the distillation loss to be turned back on if the performance of the multilingual model starts to drift back down for a particular language pair. I also wondered about the effect of the gradient accumulation strategy in algorithm 1, where individual batches from each language pair are effectively grouped into one giant batch for the purpose of parameter updates. I can see that this could stabilize training, but it would be good to know whether it\\u2019s crucial for success, especially when the number of language pairs is large.\", \"further_details\": \"As aforementioned -> As mentioned\\n\\n(1) 2nd line: Doesn't make sense as written. You need to distinguish the gold\\ny_t from hypothesized ones in the 1() function.\\n\\nAbove (2): is served as -> serves as\\n\\n3.2 First paragraph. Since D presumably consists of D^l for all languages l,\\nL_ALL(D,...) should be a function of teacher parameters theta^l for all\\nlanguages l rather than just one as written.\\n\\nIn top-K distillation, is the teacher distribution renormalized or simply\\ntruncated?\\n\\nGeneralization analysis, pg 8: presumably you are sampling from N(0, sigma^2) -\\nthis should be described as such.\", \"reference\": \"Johnson et al, \\u201cGoogle\\u2019s Multilingual Neural Machine Translation System: Enabling Zero-Shot Translation\\u201d TACL, 2017.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Solid experimentation but...\", \"review\": \"... I would have liked to see some more insights.\\n\\nThe authors present a method for distilling knowledge from individual models to train a multilingual model. The motivation stems from the fact that while most s-o-t-a multilingual models are compact (as compared to k individual models) they fall short of the performance of the individual models. The authors demonstrate that using knowledge distillation, the performance of the multilingual model can actually be better than the individual models. \\n\\nPlease find below my comments and questions.\\n\\n1) The authors have done a commendable job of validating their hypothesis on multiple datasets. Solid experimentation is definitely the main strength of this paper.\\n\\n2) However, this strength also makes way for a weakness. The entire experimental section is just filled with tables and numbers. 
The same message is repeated across these multiple tables (multi+distill > single > multi). Beyond this message there are no other insights. For example, \n\n- How does the performance depend on the divergence between source and target language?\n- Why is the improvement larger for some languages and smaller for others?\n- Why are the improvements on the TED dataset so much higher compared to the other two datasets?\n- What happens when the target language is something other than English? All the experiments report results from X-->English, why not in the other direction? The model then is not really \"completely\" multilingual. It is multi-source-->single target. \n- Can you comment on the total training time?\n- What happens when you do not stop the distillation even when the accuracy of the student crosses that of the teachers? What do you mean by accuracy here? Only later, when you mention that the threshold = 1 BLEU, did it become clear that accuracy means BLEU in this context.\n\n3) Is it all worth it? One disappointing factor is that at the end of all this effort, where you train K individual models and one monolithic model with distillation, the performance gain for most language pairs is really marginal (except on the TED dataset). I wonder if the same improvements could have been obtained by even more carefully fine-tuning the baseline models themselves.\n\n4) On a positive note, I like the back-distillation idea and the experiments on top-K distillation\n\n+++++++++++++++++++\nI have updated my rating after reading the authors' responses\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
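Aside on the top-K distillation mechanics discussed in the reviews above: the following is a minimal PyTorch sketch of one plausible implementation of the word-level distillation loss restricted to the teacher's top-K vocabulary items. The function name, the tensor shapes, and in particular the renormalization of the truncated teacher distribution (via a softmax over the top-k logits) are illustrative assumptions, not details confirmed by the paper -- whether to renormalize is exactly the question the reviewers raise.

    import torch
    import torch.nn.functional as F

    def topk_word_distillation_loss(student_logits, teacher_logits, k=8):
        # student_logits, teacher_logits: (batch, seq_len, vocab_size)
        top_vals, top_idx = teacher_logits.topk(k, dim=-1)
        # Renormalize the truncated teacher distribution over its top-k entries
        # (the alternative is to keep the raw, un-renormalized probabilities).
        teacher_probs = F.softmax(top_vals, dim=-1)
        # Student log-probabilities, gathered at the teacher's top-k indices.
        student_logp = F.log_softmax(student_logits, dim=-1).gather(-1, top_idx)
        # Cross-entropy against the truncated teacher, averaged over all tokens.
        return -(teacher_probs * student_logp).sum(dim=-1).mean()

Under this reading, the per-language-pair training loss would be the usual cross-entropy plus a weighted copy of this term (the reviews mention a weight of 0.5), with the distillation term switched off once the multilingual model overtakes its teacher by the stated BLEU threshold.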
B1l8iiA9tQ
Backdrop: Stochastic Backpropagation
[ "Siavash Golkar", "Kyle Cranmer" ]
We introduce backdrop, a flexible and simple-to-implement method, intuitively described as dropout acting only along the backpropagation pipeline. Backdrop is implemented via one or more masking layers which are inserted at specific points along the network. Each backdrop masking layer acts as the identity in the forward pass, but randomly masks parts of the backward gradient propagation. Intuitively, inserting a backdrop layer after any convolutional layer leads to stochastic gradients corresponding to features of that scale. Therefore, backdrop is well suited for problems in which the data have a multi-scale, hierarchical structure. Backdrop can also be applied to problems with non-decomposable loss functions where standard SGD methods are not well suited. We perform a number of experiments and demonstrate that backdrop leads to significant improvements in generalization.
[ "stochastic optimization", "multi-scale data analysis", "non-decomposable loss", "generalization", "one-shot learning" ]
https://openreview.net/pdf?id=B1l8iiA9tQ
https://openreview.net/forum?id=B1l8iiA9tQ
ICLR.cc/2019/Conference
2019
{ "note_id": [ "Ske_97TJe4", "S1lt8Yuan7", "Bke37P8q2Q", "HkgFP83Y3X" ], "note_type": [ "meta_review", "official_review", "official_review", "official_review" ], "note_created": [ 1544700816062, 1541405009030, 1541199651825, 1541158497431 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper624/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper624/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper624/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper624/AnonReviewer3" ] ], "structured_content_str": [ "{\"metareview\": \"Dear authors,\\n\\nAll reviewers pointed out that the proximity with Dropout warranted special treatment and that the justification provided in the paper was not enough to understand why exactly the changes were important. In its current state, this work is not suitable for publication to ICLR.\\n\\nShould you decide to resubmit this work to another venue, please take the reviewers' comments into account.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Limited novelty compared to Dropout\"}", "{\"title\": \"The proposed Backdrop is similar to the traditional Dropout method. Overall this paper lacks of novelty and the observed generalization performance does not have convincing justification.\", \"review\": \"This paper proposes a stochastic based method, namely Backdrop, for updating the network structures via backpropagation type methods. Backdrop inserts masking layers along the network; it acts as the identity in the forward pass, but as randomly masks parts of the backward gradient propagation. The paper claims this approach can significantly improves the overall generalization performance.\\n\\nAlthough some difference to Dropout is summarized in Section 2, I still feel these two methods have almost the same idea, with just different implementation. Actually this Backdrop seems to have one more limitation in the parameter complexity, as it introduces several mask layers but keep the dense structures from other intermediate layers. \\n\\nThe proposed Backdrop uses Bernoulli distribution to select active variables. This is the very fundamental way in the conventional Dropout method. On the other hand, the authors do not provide convincing justification how this can guarantee the improvement in subsequent generalization.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Small modification and not enough comparison to other methods\", \"review\": \"The authors propose to apply Dropout only in the backward pass, by applying a mask sampled from a Bernoulli distribution. They claim that this method can help in situations like optimizing non-decomposable losses where minibatch SGD is not viable.\\n\\nFirst and foremost, the paper has an acknowledgement paragraph that gives information violating, in my sense, the anonymity requirement. \\n\\nThis being said, I have other concerns with the paper, and this possible violation didn't effect much my rating. \\n\\nFirst, the authors claim that the proposed method \\\"is a flexible strategy for introducing data-dependent stochasticity into the gradient\\\". However, it doesn't seem to me that the sampled dropped nodes are data-dependent. 
\\n\\nIt is also not clear to me why the proposed method is better suited to non-decomposable losses and hierarchically structured data than the classical Dropout.\\n\\nMoreover, while the method is clearly related to Dropout, the paper lacks a comparison to this regularizer. \\n\\nThis being said, the idea is sound, and could have a good impact, for example in combining the good aspects of batch-normalization and dropout. However, the authors structured the paper on a completely different argument that doesn't convince me, for the reasons cited above.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Interesting idea, but lacking proper improvements and/or applications\", \"review\": \"This paper introduces a data-dependent strategy to mask parts of the partial derivatives in the chain rule computation.\", \"typically_with_papers_proposing_modifications_of_the_training_regime_of_the_neural_network_one_would_expect_one_of_three_outcomes\": \"- a well justified, mathematically sound method, well tested in simple cases and with some proof-of-concept results on proper tasks\\n - a more heuristic, empirically driven line of research, with strong results on proper tasks\\n - a method, however justified, that allows us to do something previously impossible, removing some limitations/constraints (like biologically plausible learning etc.)\\n\\nIn its current form the paper seems to lack any of these characteristics. On one hand the method lacks any guarantees, and on the other the paper does not present significant improvements under any approved metrics, nor does it introduce new ones which can be properly quantified. In fact, the authors explicitly say of the empirical section: \\\"Note that in these experiments, the purpose is not to achieve state of the art performance, but to exemplify how backdrop can be used and what measure of performance gains one can expect.\\\". \\n\\nWith methods like this it is almost obvious that the resulting update is not an unbiased gradient estimator of any function. Consequently, convergence/learning guarantees that we have for GD or SGD no longer apply. Do the authors have any thoughts on how bad it can get? As noted in the text, other methods of \\\"dropping\\\" data (such as dropout) don't have this issue as they still estimate proper gradients. Here, since dropping is done inside the network only on the backward pass, the resulting estimates could, in principle, lead to oscillations, divergence and other issues. If these are not encountered in practice it might be interesting to understand why. \\n\\nIf the authors prefer to go down the more empirical path, one would expect at least to see some baselines for the tasks proposed, rather than comparing Backdrop to SGD. There are many methods that could be applied in scenarios like this, including dozens of forms of dropout (which, as the authors note, are not aimed at the same goals, but this does not mean that they will not shine under the metrics introduced, as they are non-standard and so no one has tested them in this exact regime).\\n\\nI am happy to revisit my rating if the authors restructure the paper towards one of these paths (or another one not listed here).\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}" ] }
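To make the mechanism debated in these reviews concrete, here is a minimal PyTorch sketch of a backdrop-style masking layer: the identity in the forward pass, with parts of the gradient randomly masked in the backward pass. The per-sample masking granularity and the keep_prob argument are illustrative assumptions rather than details taken from the paper; note also, as the last review observes, that the resulting update is not in general an unbiased gradient estimate.

    import torch

    class BackdropMask(torch.autograd.Function):
        # Acts as the identity in the forward pass but randomly masks
        # parts of the gradient flowing through in the backward pass.
        @staticmethod
        def forward(ctx, x, keep_prob):
            # One Bernoulli mask entry per sample, broadcast over the
            # remaining dimensions (an illustrative choice of granularity).
            mask = (torch.rand(x.shape[0], *([1] * (x.dim() - 1)),
                               device=x.device) < keep_prob).to(x.dtype)
            ctx.save_for_backward(mask)
            return x.view_as(x)  # identity on the forward pass

        @staticmethod
        def backward(ctx, grad_output):
            (mask,) = ctx.saved_tensors
            # Mask the incoming gradient; None = no gradient w.r.t. keep_prob.
            return grad_output * mask, None

A layer like this would be applied to the output of a convolutional layer, e.g. y = BackdropMask.apply(conv_out, 0.9), so that gradients from a random subset of samples (or features, under a finer-grained mask) are zeroed while the forward activations are untouched.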
S1xBioR5KX
Parameter efficient training of deep convolutional neural networks by dynamic sparse reparameterization
[ "Hesham Mostafa", "Xin Wang" ]
Modern deep neural networks are highly overparameterized, and often of huge sizes. A number of post-training model compression techniques, such as distillation, pruning and quantization, can reduce the size of network parameters by a substantial fraction with little loss in performance. However, training a small network of the post-compression size de novo typically fails to reach the same level of accuracy achieved by compression of a large network, leading to a widely-held belief that gross overparameterization is essential to effective learning. In this work, we argue that this is not necessarily true. We describe a dynamic sparse reparameterization technique that closed the performance gap between a model compressed through iterative pruning and a model of the post-compression size trained de novo. We applied our method to training deep residual networks and showed that it outperformed existing reparameterization techniques, yielding the best accuracy for a given parameter budget for training. Compared to existing dynamic reparameterization methods that reallocate non-zero parameters during training, our approach achieved better performance at lower computational cost. Our method is not only of practical value for training under stringent memory constraints, but also potentially informative to theoretical understanding of generalization properties of overparameterized deep neural networks.
[ "sparse", "reparameterization", "overparameterization", "convolutional neural network", "training", "compression", "pruning" ]
https://openreview.net/pdf?id=S1xBioR5KX
https://openreview.net/forum?id=S1xBioR5KX
ICLR.cc/2019/Conference
2019
{ "note_id": [ "BygwjLFblN", "BygrZof1eV", "r1ec05zJgN", "SJe6PrRM1E", "B1lQ5wWRAQ", "ByxlFw-CCQ", "HygbBOiaCQ", "HJeW2aMhAm", "Hkg_eTIICQ", "rkgM53LLA7", "Hyl7OhLI0m", "rJgKJhLI0m", "H1g-TRNAn7", "HyldHxls2X", "Hkl9XKWc37", "Byg8iX-V2m", "HyeBcckEnm", "ByepGM9GnX", "rketmW5fnQ", "HJgPaBbb2X", "SJgRILw0jm", "SygDfLE0sQ", "HyllVTC6sQ", "HJg6Q-STom", "SyllM-B6jQ", "HygC0-roiQ", "SkeE2kiqom", "rkxbTjYKoX", "Hyxhn39wjX", "HklYTgofs7", "r1lxccLzo7" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review", "official_comment", "comment", "official_comment", "official_comment", "comment", "official_comment", "comment", "official_comment", "official_comment", "official_comment", "comment", "comment", "comment", "comment", "official_comment", "comment" ], "note_created": [ 1544816286643, 1544657661184, 1544657618224, 1543853412933, 1543538571316, 1543538551852, 1543514168904, 1543413161355, 1543036144087, 1543036041827, 1543036010998, 1543035873177, 1541455544798, 1541238847769, 1541179681687, 1540785054296, 1540778637063, 1540690453475, 1540690208761, 1540588990709, 1540417110343, 1540404750932, 1540381991608, 1540342052983, 1540342024164, 1540211157746, 1540169644323, 1540099000690, 1539972275581, 1539645633495, 1539627656041 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper622/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper622/Authors" ], [ "ICLR.cc/2019/Conference/Paper622/Authors" ], [ "ICLR.cc/2019/Conference/Paper622/Authors" ], [ "ICLR.cc/2019/Conference/Paper622/Authors" ], [ "ICLR.cc/2019/Conference/Paper622/Authors" ], [ "ICLR.cc/2019/Conference/Paper622/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper622/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper622/Authors" ], [ "ICLR.cc/2019/Conference/Paper622/Authors" ], [ "ICLR.cc/2019/Conference/Paper622/Authors" ], [ "ICLR.cc/2019/Conference/Paper622/Authors" ], [ "ICLR.cc/2019/Conference/Paper622/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper622/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper622/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper622/Authors" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper622/Authors" ], [ "ICLR.cc/2019/Conference/Paper622/Authors" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper622/Authors" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper622/Authors" ], [ "ICLR.cc/2019/Conference/Paper622/Authors" ], [ "ICLR.cc/2019/Conference/Paper622/Authors" ], [ "(anonymous)" ], [ "(anonymous)" ], [ "(anonymous)" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper622/Authors" ], [ "(anonymous)" ] ], "structured_content_str": [ "{\"metareview\": \"The authors presents a technique for training neural networks, through dynamic sparse reparameterization. The work builds on previous work notably SET (Mocanu et al., 18), but the authors propose to use an adaptive threshold for and a heuristic for determining how to reparameterize weights across layers. 
\nThe reviewers raised a number of concerns on the original manuscript, most notably 1) that the work lacked comparisons against existing dynamic reparameterization schemes, 2) that it lacked an analysis of the computational complexity of the proposed method relative to other works, and 3) that the work is an incremental improvement over SET.\nIn the revised version, the authors addressed the various concerns raised by the reviewers. To address weakness 1), the authors ran experiments comparing the proposed approach to SET and DeepR, and demonstrated that the proposed method performs at least as well as, or better than, either approach. While the new draft is in the AC's view a significant improvement over the initial version, the reviewers still had concerns about the fact that the work appears to be incremental relative to SET, and that the differences in performance between the two models were not very large (although the authors note that the differences are statistically significant). The reviewers were not entirely unanimous in their decision, which meant that the scores that this work received placed it at the borderline for acceptance. As such, the AC ultimately decided to recommend rejection, though the authors are encouraged to resubmit the revised version of the paper to a future venue.\", \"confidence\": \"3: The area chair is somewhat confident\", \"recommendation\": \"Reject\", \"title\": \"Revised draft is a significant improvement over the initial submission\"}", "{\"title\": \"Would you please respond to our rebuttal?\", \"comment\": \"Thank you again for your useful comments, which were common concerns raised by all 3 reviewers, including (1) lack of comparison to prior work, and (2) inaccurate claims of contributions. \\n\\nIn the revision submitted together with a point-by-point rebuttal to your review on Nov. 23, we have fully addressed these concerns, and Reviewer #3 found our revision satisfactory. \\n\\nAs it has been weeks since our previous response, we wonder whether you now share the same view on the revised manuscript's suitability for publication; if there are still lingering concerns, we would appreciate it if you could reply to our most recent point-by-point rebuttal with specifics so that we could address them. \\n\\nThank you very much!\"}", "{\"title\": \"Would you please respond to our rebuttal?\", \"comment\": \"Thank you again for your useful comments, which were common concerns raised by all 3 reviewers, including (1) lack of comparison to prior work, and (2) inaccurate claims of contributions. \\n\\nIn the revision submitted together with a point-by-point rebuttal to your review on Nov. 23, we have fully addressed these concerns, and Reviewer #3 found our revision satisfactory. \\n\\nAs it has been weeks since our previous response, we wonder whether you now share the same view on the revised manuscript's suitability for publication; if there are still lingering concerns, we would appreciate it if you could reply to our most recent point-by-point rebuttal with specifics so that we could address them. \\n\\nThank you very much!\"}", "{\"title\": \"Authors' response (DeepR results on resnet50)\", \"comment\": \"Due to the high computational requirements of DeepR, the results for DeepR on resnet50 were not available in time for the revision submission. We include the results (top-1, top-5 accuracy) below. 
The accuracy of DeepR lags behind our method and behind SET.\\n\\n+----------------------------------------------------------------------------------------------+\\n| Method | Sparsity = 0.9 | Sparsity = 0.8 |\\n+----------------------------------------------------------------------------------------------+\\n| Mocanu et al. 2018 (SET) | 70.4, 90.1 | 72.6, 91.2 |\\n+----------------------------------------------------------------------------------------------+\\n| Bellec et al. 2017 (DeepR) | 70.2, 90.0 | 71.7, 90.6 |\\n+----------------------------------------------------------------------------------------------+\\n| Ours | 71.6, 90.5 | 73.3, 92.4 |\\n+----------------------------------------------------------------------------------------------+\"}", "{\"title\": \"Authors' response (Part 1 of 2)\", \"comment\": [\"Thank you for your comments. We believe the argument on the lack of novelty lacks factual support. Our point-by-point response:\", \"\\\"Moreover, the added heuristics often seem to be marginally important... Figure 2(a) ...almost the same performance as SET\\\" This argument disregards key evidence for the opposite. First, our algorithm did lead to significantly better accuracy than SET on all the benchmarks we tested, no matter how small the differences are in certain cases. The differences are in fact more substantial in other cases. For instance, the improvement on Imagenet is more prominent (Table 2). Second, the improvement is especially more apparent at high sparsity levels where SET failed catastrophically while our method does not (see Figure 6 in appendix E). Finally, our automatic parameter allocation heuristic that discovered the number of parameters to allocate to each layer is entirely novel.\", \"Even if our method lacked any novelty or performance improvement as compared to SET (which it does not as our revised manuscript shows with quantitative evidence), it would still be a significant stride forward from Mocanu et al. 2018 just to apply their exact same method to show its usefulness in training deep sparse convolution nets such as Resnet-50 in action on large datasets like Imagenet. In other words, Mocanu et al. 2018 did not even show if SET is at all applicable to convolution layers, let alone deep Resnets.\", \"We agree with you that a theoretical guarantee of convergence and stability is not presented, and the heuristic could not be cast into an optimization procedure of an objective function. But we do not think this is an adequate reason to dismiss an empirical paper. Convergence guarantees are hard to come by in deep networks where there are no convergence guarantees of even basic stochastic gradient descent in most real-world cases. We practiced empirical rigor in the paper: we exhaustively described our method and experiments in detail, empirically validated our method's performance and stability, and publicized all source code for all experiments, a standard of which even the papers you cited in your review (e.g. Mocanu et al. 2018) fell shy (note that there is no theoretical guarantee of SET's stability in that paper either). 
Rigorous empirical findings are beneficial to the field because the dissemination of knowledge on \\\"what works\\\" would eventually lead to theory on \\\"why it works\\\".\"]}", "{\"title\": \"Authors' response (Part 2 of 2)\", \"comment\": [\"With regard to \\\"numerous DNN compression methods introduced at multiple major ML and CV conferences over the last year\\\", we did a comprehensive survey of compression papers, which are listed in Appendix C, together with their properties. We still see that the best compression performance is achieved by Zhu and Gupta, 2017. With adequate due diligence we honestly claim that `tensorflow.contrib.model_pruning` (Zhu and Gupta, 2017) is a \\\"state-of-the-art\\\" sparse compression method \\\"known to us\\\" at the time of this paper being written. We would appreciate it if you could provide us with a specific stronger baseline, demonstrably stronger than Zhu & Gupta 2017, and we are happy to benchmark our method against it and call it \\\"state-of-the-art\\\". Absent such a method in the literature that you could refer to us, the claim that we did not compare ours to the best sparse compression method in existence is a rather weak one, and unfair. Note that most recent pruning techniques deal with structured or filter-wise sparsity that does not directly compare with our method and significantly underperformed non-structured pruning (see Appendix C, D, see Liu et al. 2018).\", \"We believe there is a misunderstanding regarding your response on hyperparameter tuning in writing \\\"This seems to reinforce the notion that this is not a fundamentally different model, but one intimately related to existing papers.\\\" We referred to \\\"hyperparameters for training\\\", such as learning rate, momentum, L1/L2 decay, etc.; these were taken the same as those presented in the original paper (e.g. He et al. 2015) where the model (e.g. Resnet-50) was presented. These do not include hyperparameters for sparse reparameterization (which is the subject of this paper and we did tune those). We do not believe this is related to the novelty of our method at all--our paper does not produce \\\"fundamentally different model\\\"s, but trains sparse versions of existing, successful models, such as Resnet. If anything, using the exact hyperparameters for training as in the original papers only strengthens our method--our method is robust enough that we do not need to re-tune hyperparameters for training it and it just worked well with the original network's hyperparameters.\", \"With regard to DeepR, we did demonstrate a significant accuracy/performance advantage over DeepR for the case of WRN-28-2 on CIFAR10. Experiments on Imagenet are much more challenging due to the high computational cost of DeepR, but we agree that they should be (and will be) part of the paper (note that the DeepR paper never attempted experiments at this scale). With concrete evidence, our results on CIFAR10, together with the significantly faster training times of our method, are strong indicators that our method is superior to DeepR in both accuracy and speed. We agree with you that Table 3 shows ours and SET both introduce negligible computational overhead, but our method has other advantages over SET (e.g. 
producing better accuracy and the ability to train at high sparsity levels), besides the slight reduction in computational cost.\"]}", "{\"title\": \"comments to the author rebuttal\", \"comment\": \"Although the experimental section has been expanded a bit and the overall paper has been upgraded, I still believe that the novelty of this work is limited. As mentioned in my original review, the proposed pipeline represents reasonably engineered modifications of the existing SET pipeline, but there are no significant insights beyond this. Moreover, the added heuristics often seem to be marginally important. For example, if we look at Figure 2(a) in the revision, the proposed algorithm has almost the same performance as SET. For these and other reasons below, I continue to believe that this work is perhaps below the bar for ICLR, while acknowledging the effort it takes to engineer and test this type of model.\", \"other_lingering_concerns_with_this_work\": [\"With regard to the hyperparameter tuning, the rebuttal comments state that \\\"we did not tune them and simply used the same hyperparameter settings in the original papers where these models were described.\\\" This seems to reinforce the notion that this is not a fundamentally different model, but one intimately related to existing papers.\", \"The revised version claims in the introduction that the proposed method is more efficient than existing dynamic sparse reparameterization training techniques. But this seems like an overstatement, because if we look at Table 3, the proposed method requires almost exactly the same computational complexity as the existing SET method (the difference is only in the fourth significant digit).\", \"The rebuttal states that \\\"(Zhu & Gupta, 2017) is the strongest sparse compression baseline known to us.\\\" But as I mentioned previously, there have been numerous DNN compression methods introduced at multiple major ML and CV conferences over the last year, and it is essential to check these proceedings to become aware of the latest developments. This is especially true for an empirical paper of this type without any analytical contribution. Also, the revision continues to imply without justification that (Zhu & Gupta, 2017) represents the state-of-the-art (see bullet point 4 on page 5).\", \"According to the rebuttal, some DeepR experiments are still running and therefore could not be included in the revision. But this is not an excusable omission, because as an obvious, direct competitor to the proposed method, these results should have been present in the original submission.\", \"As mentioned in my original review, the proposed algorithmic steps are not minimizing any particular energy function per se, and yet there is no discussion of convergence or stability. This was not addressed in the rebuttal.\"]}", "{\"title\": \"Thanks for the update!\", \"comment\": \"Thanks for the comprehensive revision as well as for providing new experiments and comparisons. Your revision adequately addresses the concerns I raised in my original review.\"}", "{\"title\": \"Response to Reviewer\", \"comment\": \"Thank you for the review. We have substantially revised the manuscript with the following major changes:\\n\\n1. Contributions in Introduction (also the rest of the manuscript) completely rewritten to state novelty accurately \\n2. Inclusion of results of additional performance benchmark against existing methods, DeepR and SET\\n3. 
Inclusion of results of computational cost benchmarked against existing methods, DeepR and SET\\n4. Revised Experimental Results section and two additional appendices that further expanded the scope of comparison to structured compression methods\\n\\nWe hope the improved manuscript is worthy of publication now.\", \"our_response_to_your_specific_comments\": \"1. We have rewritten the entire manuscript to state our novelty accurately. We now make only two claims in contributions, which are carefully limited to the exact scope of this investigation. We specifically eliminated claims of \\\"first\\\". \\n\\n2. We have strengthened the manuscript by including results of full comparisons to SET and DeepR (the new Figure 2, Table 2) to support our main claim that ours outperformed these methods.\\n\\n3. We have included a new Table 3, containing the computational overhead of our parameter reallocation in comparison to those of DeepR and SET. Indeed, re-training times are a drawback of pruning methods. We now state clearly that our method actually runs for fewer epochs than pruning-based methods--our method only trained for the same number of epochs as the original training of the large dense model in pruning methods, minus the additional retraining (see the revised Appendix A). \\n\\n* Note: Due to the high computational requirements of DeepR, the results for DeepR on resnet50 were not available in time for the revision submission. We include the results (top-1, top-5 accuracy) below. The accuracy of DeepR lags behind our method and behind SET.\\n\\n+----------------------------------------------------------------------------------------------+\\n| Method | Sparsity = 0.9 | Sparsity = 0.8 |\\n+----------------------------------------------------------------------------------------------+\\n| Mocanu et al. 2018 (SET) | 70.4, 90.1 | 72.6, 91.2 |\\n+----------------------------------------------------------------------------------------------+\\n| Bellec et al. 2017 (DeepR) | 70.2, 90.0 | 71.7, 90.6 |\\n+----------------------------------------------------------------------------------------------+\\n| Ours | 71.6, 90.5 | 73.3, 92.4 |\\n+----------------------------------------------------------------------------------------------+\"}", "{\"title\": \"Response to Reviewer (Part 1/2)\", \"comment\": \"Thank you for the review. We have substantially revised the manuscript with the following major changes:\\n\\n1. Contributions in Introduction (also the rest of the manuscript) completely rewritten to state novelty accurately \\n2. Inclusion of results of additional performance benchmark against existing methods, DeepR and SET\\n3. Inclusion of results of computational cost benchmarked against existing methods, DeepR and SET\\n4. Revised Experimental Results section and two additional appendices that further expanded the scope of comparison to structured compression methods\\n\\nWe hope the improved manuscript is worthy of publication now.\", \"our_response_to_your_specific_comments\": \"1. On benchmarking against existing methods: We fully agree with you on the weakness of the manuscript due to the lack of quantitative comparisons to prior work. We believe this revision adequately rectified it. Specifically, we performed additional WRN-28-2 on CIFAR10 and Resnet-50 on Imagenet experiments for DeepR and SET. The results are presented in the new Figure 2, Table 2 and Table 3. We now show with concrete evidence that our method outperformed both DeepR and SET.\\n\\n2. 
On hyperparameter tuning: For hyperparameters of training the models, we did not tune them and simply used the same hyperparameter settings as in the original papers where these models were described. For hyperparameters of reparameterization by DeepR and SET, because the original papers did not attempt experiments at the same scale, we did a hyperparameter sweep for DeepR and reported the best result; for SET, in order to make a fair comparison, we used the exact same hyperparameters for both SET and ours. These facts were not clearly stated in the previous version, but are now clearly stated together with the list of hyperparameters (Table 4) in the revision. \\n\\n3. On our method's similarity to SET: Our method is inspired by SET, based on a similar sparse reparameterization mechanism, but with an adaptive threshold and a heuristic for automatic parameter reallocation across layers. These differences might seem incremental, but we believe our work made a substantial stride forward from what was reported in Mocanu et al. 2018, for the following reasons (as now stated in the revised manuscript). (a) Our method did produce significantly better generalizing sparse models than SET, and fully closed the performance gap toward compression by iterative pruning, of which SET in some cases still fell short (we now provide quantitative results). (b) Automatic parameter reallocation without manual configuration of sparsity per layer makes sparse training much more scalable: the burden of hyperparameter tuning is constant instead of scaling with network depth. (c) Finally, Mocanu et al. 2018 demonstrated their method on multi-layer perceptrons on MNIST. We believe scaling up to deep convolutional networks such as Resnet-50 and to large datasets such as Imagenet is not just a trivial increment.\"}", "{\"title\": \"Response to Reviewer (Part 2/2)\", \"comment\": \"4. On benchmarking against post-training compression baselines: Thank you for raising this important point. We agree that the line of work from Han et al. 2015 to Zhu & Gupta 2017 is only a subset of all compression methods. Even though Zhu & Gupta 2017 is the strongest sparse compression baseline known to us, we now state clearly that we close the performance gap to the iterative pruning method of Zhu and Gupta 2017, instead of saying \\\"compression methods\\\" in general. \\n\\n5. On structured compression methods such as ThiNet: In the previous version of the manuscript, we did not benchmark against structured compression methods such as ThiNet because they (a) produce dense instead of sparse models, and (b) significantly underperformed non-structured compression, such as Zhu & Gupta 2017, despite their efficiency on GPUs. In the revision, we made the following changes to address this issue: (a) we included comparisons against two representative structured pruning methods in Table 2; (b) we included a new Appendix C to compare and contrast a wide range of methods, painting a broad picture of relevant existing methods to show where our method stands; (c) we did additional experiments to impose group structure on sparsity using our method, and show degraded results (the new Appendix D); (d) we specifically discussed the issue of structured versus non-structured sparsification, and its implications for optimal computing hardware architecture (last two paragraphs of the Discussion section). \\n\\n6. 
On the difficulty of comparing results across papers: In the revision we included our own experiments with DeepR and SET, carefully controlled for comparison to ours. For comparison with ThiNet (Luo et al. 2017) and SSS (Huang & Wang 2017), we adapted the results from the original papers (see the new Table 2). To ameliorate the potential minor differences in experimental protocols, we also report the relative difference from the full dense model performance reported in that same paper (square brackets in the new Table 2)--comparison of methods can now be based on how much accuracy degradation from a controlled baseline a method introduces, rather than on absolute accuracy figures.\\n\\n7. On computational cost: We now include quantifications of the computational overhead of our method, DeepR and SET, in the last paragraph of the Experimental Results section and in Table 3. \\n\\n8. On other comments: (a) We included the full dense baseline in the new Table 2 (rightmost column). (b) We included a new Appendix D to present extra experiments where we applied our method to group pruning of 3x3 kernels. We show that this led to a statistically significant but minor degradation in performance. We also discussed the pros and cons of structured vs. non-structured sparsification in Discussion and Appendix C as stated above. \\n\\n* Note: The current PDF of the manuscript has a blanked DeepR entry in Table 3. Due to the high computational requirements of this experiment, it is still running. We will fill in the numbers as soon as they are available.\"}", "{\"title\": \"Response to Reviewer\", \"comment\": \"Thank you for the review. We have substantially revised the manuscript with the following major changes:\\n\\n1. Contributions in Introduction (also the rest of the manuscript) completely rewritten to state novelty accurately \\n2. Inclusion of results of additional performance benchmark against existing methods, DeepR and SET\\n3. Inclusion of results of computational cost benchmarked against existing methods, DeepR and SET\\n4. Revised Experimental Results section and two additional appendices that further expanded the scope of comparison to structured compression methods\\n\\nWe hope the improved manuscript is worthy of publication now.\\n\\nOur response to your comments on weaknesses:\\n\\n1. The revised manuscript now includes direct quantitative comparisons to all direct sparse training techniques with a strict parameter budget (i.e. DeepR and SET), for the deep residual net experiments (on CIFAR10 and Imagenet, see Figure 2 and Table 2). We did not include NeST because NeST does not impose a strict parameter budget during training--it grows a small network to a large one and then prunes it down. Our claim here is that our method yielded the best accuracy given a strictly fixed parameter budget throughout training, so only DeepR and SET are relevant to this claim. We further explain in full detail the relationships between our method and numerous others in a new Appendix C. \\n\\n2. We apologize for the confusion. The claim was indeed worded incorrectly. The correct claim is that we are the first to apply sparse dynamic reparameterization to training of large CNNs (such as Resnets) on large datasets, because previous methods of the same kind were demonstrated only on small networks. We have completely rewritten the contributions with this claim removed. \\n\\n3. 
Per your suggestion, we added a last paragraph to the Experimental Results section and included a new Table 3 with numbers to support our claim on efficiency. Our scalability argument is supported by the fact that our method discovers layer-wise sparsity automatically during training without the need to predefine sparsity per layer by manual configuration as required by other methods, so that the cost of hyperparameter tuning is constant instead of scaling with network depth.\\n\\nOur response to your suggestions:\\n\\n1. Per your suggestion, we revised the Introduction section and included the following sentence: \\\"... a dynamic sparse reparameterization technique able to train sparse models de novo without the need to compress a large model, a desirable feature for training on memory- and power-constrained devices.\\\" Furthermore, a related, more nuanced point on hardware-efficiency was discussed in the last two paragraphs of the Discussion section. \\n\\n2. We rectified the unnecessary use of color and made these panels grayscale with specific text labels. \\n\\n3. We have included new results (see the new Figure 2, Table 2 and Table 3) and revised the text to provide concrete support of the claims. \\n\\n4. We now define these parameters in the text (in the revised Methods section) in addition to in Algorithm 1. \\n\\n* Note: Due to the high computational requirements of DeepR, the results for DeepR on resnet50 were not available in time for the revision submission. We include the results (top-1, top-5 accuracy) below. The accuracy of DeepR lags behind our method and behind SET.\\n\\n+----------------------------------------------------------------------------------------------+\\n| Method | Sparsity = 0.9 | Sparsity = 0.8 |\\n+----------------------------------------------------------------------------------------------+\\n| Mocanu et al. 2018 (SET) | 70.4, 90.1 | 72.6, 91.2 |\\n+----------------------------------------------------------------------------------------------+\\n| Bellec et al. 2017 (DeepR) | 70.2, 90.0 | 71.7, 90.6 |\\n+----------------------------------------------------------------------------------------------+\\n| Ours | 71.6, 90.5 | 73.3, 92.4 |\\n+----------------------------------------------------------------------------------------------+\"}", "{\"title\": \"The authors designed a dynamic reparameterization method to apply model compression in deep neural architectures. They compared their proposed framework with three baseline methods in terms of test accuracy and sparsity. The comparison to the existing works is lacking.\", \"review\": \"Weaknesses:\", \"1_the_authors_claim_that\": \"\\\"Compared to other dynamic reparameterization methods that reallocate non-zero parameters during training, our approach broke free from a few key limitations and achieved much better performance at lower computational cost.\\\" => However, there are no quantitative experiments related to other dynamic reparameterization methods. There should be at least a sparsity-accuracy comparison to claim achieving better performance. I expect the authors to compare their work at least with DEEP R and NeST, even if it is clear to them that they produce better results.\", \"2_the_second_and_fourth_contributions_are_inconsistent\": \"In the second one, the authors claimed that they are the first to design a dynamic reparameterization method. In the fourth contribution, they claimed they outperformed existing dynamic sparse reparameterization methods. 
Moreover, it seems DEEP R is also a dynamic reparameterization method, because the DEEP R authors claimed: \\\"DEEP R automatically rewires the network during supervised training so that connections are there where they are most needed for the task, while its total number is all the time strictly bounded.\\\"\\n3- The authors claimed their proposed method has much lower computational costs; however, there is no running time or scalability comparison.\", \"suggestions\": \"1-Authors need to motivate the applications of their work. For instance, are they able to run their proposed method on mobile devices?\\n2-For Figure 2 (c,d) you need to specify what each color is.\\n3-In general, if you claim that your method is more accurate or more scalable you need to provide quantitative experiments. Claiming is not enough.\\n4-It is better to define all parameters before you jump into the proposed method section. Otherwise, it makes the paper hard to follow. For instance, you didn't define R_l directly (it is only given in Algorithm 1).\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"In its present form, this paper seems more like engineered modifications of existing pipelines with insufficient validation, rather than a mature research contribution.\", \"review\": \"This paper presents a method for training neural networks where an efficient sparse/compressed representation is enforced throughout the training process, as opposed to starting with a large model and pruning down to a smaller size. For this purpose a dynamic sparse reparameterization heuristic is proposed and validated using data from MNIST, CIFAR-10, and ImageNet.\\n\\nMy concerns with this work in its present form are two-fold. First, from a novelty standpoint, the proposed pipeline can largely be viewed as introducing a couple of heuristic modifications to the SET procedure from reference (Mocanu, et al., 2018), e.g., substituting an approximate threshold instead of sorting for removing weights, changing how new weights are redistributed, etc. The considerable similarity was pointed out by anonymous commenters and, I believe, somewhat understated by the submission. Regardless, even if practically effective, these changes seem more like reasonable engineering decisions to improve the speed/performance rather than research contributions that provide any real insights. Moreover, there is no attendant analysis regarding convergence and/or stability of what is otherwise a sequence of iterates untethered to a specific energy function being minimized.\\n\\nOf course all of this could potentially be overcome with a compelling series of experiments demonstrating the unequivocal utility of the proposed modifications. But it is here that unfortunately the paper falls well short. Despite its close kinship with SET, there are surprisingly no comparisons presented whatsoever. Likewise only a single footnote mentions comparative results with DeepR (Bellec et al., 2017), which represents another related dynamic reparameterization method. In a follow-up response to anonymous public comments, some new tests using CIFAR-10 data are presented, but to me, proper evaluation requires full experimental details/settings and another round of review.\\n\\nMoreover, the improvement over SET in these new results, e.g., from a 93.42 to 93.68 accuracy rate at 0.9 sparsity level, seems quite modest. 
Note that the proposed pipeline has a wide range of tuning hyperparameters (occupying a nearly page-sized Table 3 in the Appendix), and depending on these settings relative to SET, one could easily envision this sort of minor difference evaporating completely. But again, this is why I strongly believe that another round of review with detailed comparisons to SET and DeepR is needed.\\n\\nBeyond this, the paper repeatedly mentions significant improvement over \\\"state-of-the-art sparse compression methods.\\\" But this claim is completely unsupported, because all the tables and figures only report results from a single existing compression baseline, namely, the pruning method from (Zhu and Gupta, 2017), which is ultimately based on (Han et al., 2015). But just in the last year alone there have been countless compression papers published in the top ML and CV conferences, and it is by no means established that the pruning heuristic from (Zhu and Gupta, 2017) is state-of-the-art.\\n\\nNote also that reported results can be quite deceiving on the surface, because unless the network structure, data augmentation, and other experimental design details are exactly the same, specific numbers cannot be directly transferred across papers. Additionally, numerous published results involve pruning at the activation level rather than at specific weights. This definitively sacrifices the overall compression rate/model size to achieve structured pruning that is more naturally advantageous to implementation in practical hardware (e.g., reducing FLOPs, run-time memory, etc.). One quick example is Luo et al., \\\"ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression,\\\" ICCV 2017, but there are many, many others.\\n\\nAnd as a final critique of the empirical section, why not report the full computational cost of training the proposed model relative to others? For an engineered algorithmic proposal emphasizing training efficiency, this seems like an essential component.\\n\\n\\nIn aggregate then, my feeling is that while the proposed pipeline may eventually prove to be practically useful, presently this paper does not contain a sufficient aggregation of novel research contribution and empirical validation.\", \"other_comments\": [\"In Table 2, what is the baseline accuracy with no pruning?\", \"Can this method be easily extended to prune entire filters/activations?\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Efficient dynamic reparameterization but the claims need to be revisited\", \"review\": \"The paper provides a dynamic sparse reparameterization method allowing small networks to be trained at a comparable accuracy to pruned networks with (initially) large parameter spaces. Improper initialization along with fewer parameters is what requires a large parameter model to begin with (Frankle and Carbin, 2018). The proposed method, which is basically a global pooling followed by a tensorwise growth, allocates free parameters using an efficient weight re-distribution scheme, uses an approximate thresholding method and provides automatic parameter re-allocation to achieve its goals efficiently. 
The authors empirically demonstrate their results on MNIST, CIFAR-10, and Imagenet and show that dynamic sparse provides higher accuracy than compressed sparse (and other) networks.\\n\\nThe paper is addressing an important problem where, instead of training and pruning, directly training smaller networks is considered. In that respect, the paper does provide some useful tricks to reparameterize and pick the top filters to prune. I especially enjoyed reading the discussion section.\\n\\nHowever, the hyperbole in claims such as \\\"first dynamic reparameterization method for training convolutional network\\\" makes it hard to judge the paper favorably, given previous methods that have already proposed and explored dynamic reparameterization. This language is consistent throughout the paper, and the paper needs a revision that positions it appropriately before it is accepted.\\n\\nThe proposed technique provides limited but useful contributions over existing work such as SET and DeepR. However, an empirical comparison against them in your evaluation section can make the paper stronger, especially if you claim your methods are superior.\\n\\nHow do your training times compare with the other methods? Re-training times are a big drawback of pruning methods and showing those numbers will be useful.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Authors' response: thank you for the valuable comments!\", \"comment\": \"\", \"we_absolutely_agree_with_you\": \"any _quantitative_ improvements in performance compared to previous methods like SET, whether statistically significant or not, are hard to justify as a significant achievement. The next method might just improve yet a bit more, so what's special about this one?\\nOur argument is rather that our method's achievement is _qualitatively_ significant. This is manifested in the benchmark against state-of-the-art compression by iterative pruning (Zhu et al. 2017). Note that this is a compression method which first trains a large, dense network and then compresses it down, but our method and SET both train a small sparse network from the very beginning of training. Even though SET is just slightly worse than our method, SET actually fared significantly (statistically speaking) worse than state-of-the-art compression in at least some cases. In contrast, even though our method is just slightly better than SET, ours performed at least indistinguishably well or in some cases significantly better (statistically speaking) than state-of-the-art compression; this is true in all experiments we did. Therefore, the difference is _qualitative_: we fully close this gap, showing for the first time that one can train a compact sparse model from scratch to reach at least the same performance as the best known post-training compression method. \\n\\nWe appreciate your criticism of the messaging, which will be helpful to us when revising the manuscript. We will be cautious when using the term 'first' as it drew criticism from multiple commenters. We will focus instead on just stating our contributions in a clear and unambiguous manner. \\n\\nYes, we agree with your comment on run time. 
Both SET and ours, unlike DeepR, perform parameter reallocation rather infrequently during training (once per hundreds of batch iterations), thus the overhead is relatively negligible despite the fact that our replacement of sorting by comparison makes ours even cheaper than SET. We wish to point out two things in this response: \\na) Computational cost is a secondary argument in our paper; a much more important advantage of our method over SET is its ability to produce better sparse models, ones that matched state-of-the-art compression.\\nb) As the number of parameters increases, sorting scales worse (n*log[n]) than comparison (n). So if parameter reallocation has to be done more frequently and in much larger models, there might be a substantial difference in computational overhead. (This is just another secondary argument that does not make a major point of the paper.)\"}", "{\"comment\": \"Does an improvement of .02 seconds per step on ImageNet matter? I think this is a small fraction of the overall step time... Reporting time relative to the overall step time would be far more useful. Reporting total training time would be the most useful...\\n\\nWe will have to agree to disagree about whether to use the term \\\"significant\\\" or \\\"small\\\" for the performance increases, which are [.2%, .4%, 1.0%, 1.7%] with the exception of the 99% MNIST experiment. It was not intended in the sense of statistically \\\"significant\\\"; merely as a description of magnitude.\\n\\nIf the 99% results are general then _that_ would be a most interesting realm to explore. I'm guessing this is where the re-parameterization really helps, by moving more weights to the final layer?\\n\\nWe find these improvements interesting and valuable; as we said before, we were mostly concerned with the messaging.\", \"title\": \"reply\"}", "{\"title\": \"Authors' response (2/2)\", \"comment\": \"\", \"this_superior_accuracy_achieved_by_our_method_was_observed_in_all_networks_and_benchmarks_that_we_tested\": \"a) MNIST test accuracies (%):\\n+----------------------------------------------------------------------------------------------+\\n| Method | Sparsity = 0.99 | Sparsity = 0.98 |\\n+----------------------------------------------------------------------------------------------+\\n| Mocanu et al. 2018 (SET) | 70.00 \\u00b1 13.37 | 97.85 \\u00b1 0.11 |\\n+----------------------------------------------------------------------------------------------+\\n| Ours | 97.78 \\u00b1 0.098 | 98.08 \\u00b1 0.061 |\\n+----------------------------------------------------------------------------------------------+\\n\\nb) We did further Imagenet experiments to address your concerns; here we report single-run test accuracies (% of top-1, top-5) pending the completion of the rest of the 5 runs:\\n+----------------------------------------------------------------------------------------------+\\n| Method | Sparsity = 0.9 | Sparsity = 0.8 |\\n+----------------------------------------------------------------------------------------------+\\n| Mocanu et al. 
2018 (SET) | 70.4, 90.1 | 72.6, 91.2 |\\n+----------------------------------------------------------------------------------------------+\\n| Ours | 71.6, 90.5 | 73.3, 92.4 |\\n+----------------------------------------------------------------------------------------------+\\n\\n(5) The two mechanistic differences from SET that gave rise to the significantly better performance of our method were indeed important, and we gave particular attention to them in the manuscript, pointing out that they made our method more performant, more efficient and more scalable. \\n\\n(6) In response to your criticism \\\"they have yet to provide any actual runtime numbers to show that the theoretical improvement matters in practice\\\", here we give specific wall-clock time (in seconds) for reparameterization costs of our method and SET for Resnet-50 on Imagenet (we have already given the anonymous commenter below the cost of DeepR being roughly 5x of ours):\\n+------------------------------------------------------------------------------------------+\\n| Method | CPU (Xeon) | GPU (Titan-XP) |\\n+------------------------------------------------------------------------------------------+\\n| Mocanu et al. 2018 (SET) | 0.59 | 0.064 |\\n+------------------------------------------------------------------------------------------+\\n| Ours | 0.29 | 0.045 |\\n+------------------------------------------------------------------------------------------+\\n\\nAs you can see, our reparameterization procedure is significantly cheaper than SET. Remember that, when applied according to the same schedule, our reparameterization method produced significantly better sparse models than SET (see above).\\n\\nAs such, we believe we have presented concrete facts to support our claims. \\n\\nFinally, we wish to point out that we have publicized all source code in this interactive forum (see below), and this paper has been selected for the ICLR 2019 reproducibility challenge and is being validated by a third-party team. We encourage you to validate our results as well and to check the specific claims we make. \\n\\nThank you again and we are happy to address your further concerns.\"}", "{\"title\": \"Authors' response (1/2)\", \"comment\": \"Thank you for your comments. Please see our responses below. \\n\\n(1) We did not intend to be \\\"sensationalist\\\". In writing the manuscript, we clearly described our method and objectively stated claims of contributions. We believe they are factually correct. We are happy to address any of your specific confusions on our stated contributions that are not \\\"easy to understand\\\". \\n\\n(2) We agree with you that \\\"nothing prevented any of the previously proposed approaches from being applied to a similar network\\\", but the fact that they did not do so leaves open the question of whether those proposed approaches were applicable to large-scale networks in practice, a question we answered in this work. We fully recognize the contributions of previous approaches, i.e. DeepR, NeST and SET, and we explicitly stated in our paper that we were inspired by them (see Discussion on Page 8). As we replied to another anonymous commenter below, our claim of contribution is factually correct, and we believe that our demonstration of the ability to scale up dynamic sparse training from a single layer to a deep network, and from toy-sized models to real-world applications, is rather nontrivial.\\n\\n(3) The choice of baseline method we benchmarked against (i.e. 
sparse compression by iterative training and pruning, Zhu et al. 2017) is not arbitrary for the following reasons:\\na) It was the strongest baseline performance known to us.\\nb) It, unlike DeepR, SET and ours, is a compression method which does not impose a reduced parameter budget during the entire course, but only at the end, of training. Matching this baseline has significant implications for direct training of compact sparse deep CNNs without compression--equally effective training can now be done under strict parameter constraints without the need to train a large model first and then followed by compression. Closing this gap was not achieved by previous methods like DeepR and SET, but by ours in this work. \\n\\n(4) Further, as we stated in response to the commenter below, the key advancement of our method as compared to SET is its ability to produce sparse models that generalize better, matching the best compression benchmark known to us so far. To argue against your statement on our improvement \\\"exceed some relatively arbitrary level of performance by a tiny margin\\\", and \\\"two modifications to SET that result in a small improvement in accuracy\\\", we did further statistical analysis (p-values of T-tests) on the extra experiments presented in the table of our reply to the commenter below (WRN-28-2 on CIFAR10), the improvements were highly significant statistically. \\n+----------------------------------------------------------------------------------------------+\\n| Hypothesis test | Sparsity = 0.9 | Sparsity = 0.8 |\\n+----------------------------------------------------------------------------------------------+\\n| Ours vs. Bellec et al. 2017 (DeepR) | *0.000002 | *0.000103 |\\n+----------------------------------------------------------------------------------------------+\\n| Ours vs. Mocanu et al. 2018 (SET) | 0.420649 | *0.006790 |\\n+----------------------------------------------------------------------------------------------+\\n| Ours vs. Zhu et al. 2017 | 0.163627 | 0.152581 |\\n+----------------------------------------------------------------------------------------------+\"}", "{\"comment\": \"We believe it would be easier to understand the paper\\u2019s contribution if the writing style of the paper was less sensationalist. Rather than focusing on being first to exceed some relatively arbitrary level of performance by a tiny margin and the first to apply this technique to a \\u201cmodern all convolutional network\\u201d (conceptually, nothing prevented any of the previously proposed approaches from being applied to a similar network), the paper could instead make more clear its contributions are two modifications to SET that result in a small improvement in accuracy. The authors have stated multiple times that their modifications are more efficient, yet they have yet to provide any actual runtime numbers to show that the theoretical improvement matters in practice.\", \"title\": \"Appropriate Messaging\"}", "{\"title\": \"Authors response\", \"comment\": \"Thank you for your response. Having no affiliation is unconventional, but in all cases, we are happy to address any questions you have regarding the code or the experiments.\\nKind regards,\\nAuthors\"}", "{\"comment\": \"Dear Authors,\\n\\nThank you for sharing your code and relevant comments. \\n\\nAs for our academic affiliations, we currently do not have any. 
We regard ourselves as independent research trainees who are enthusiastic about Machine Learning research and are willing to contribute to the field. Moreover, it is stated on the challenge page: \\\"Participation by other researchers or research trainees with adequate machine learning experience is also encouraged\\\" (https://github.com/reproducibility-challenge/iclr_2019).\\n\\nRegards,\\n\\nDeepSparse Team\", \"title\": \"Response to affiliations\"}", "{\"title\": \"Authors' response, link to code, and request for your response\", \"comment\": \"Dear DeepSparse team,\\n\\nThank you for taking the effort to assess our results as part of the 2019 ICLR reproducibility challenge.\\n\\nSince there is also an anonymous commenter (see below) raising questions on the details of our technique and its claimed superior performance over previous methods, we choose to publicize the source code for reproducing all results in our paper here in this interactive forum, so that all commenters and reviewers, including you, will be able to validate our results. Since the paper is still under double-blind review, we set up an anonymous repo to host the code ( https://gitlab.com/anon-dynamic-reparam/iclr2019-dynamic-reparam ).\\n\\nAlong with the source code, we wish to make the following comments/requests to you:\\n\\n(1) Please disclose the academic affiliation of your team, which is a common practice exercised by all other teams of the reproducibility challenge ( https://github.com/reproducibility-challenge/iclr_2019/issues ).\\n\\n(2) In spirit of the reproducibility challenge, as instructed on its main page ( https://github.com/reproducibility-challenge/iclr_2019 ): \\\"You are encouraged to contact the authors in private to clarify doubts regarding the paper but you should maintain your anonymity in the issue section before your report submission\\\", please contact us for any questions you might have during your validation. We would prefer that we communicate in this interactive forum so that (a) anonymity is guaranteed per requirements of the reproducibility challenge, and (b) other commenters/reviewers could also see our communications since our code is shared with all participants of this forum.\\n\\n(3) In the repo, you will find YAML files with specific commands and arguments to reproduce the experiments presented in the paper. In response to specific comments by an earlier anonymous commenter, we also provided extra experiments in a separate YAML file for comparison with earlier methods such as DeepR, which is designed to address questions raised by the commenter, thus not an original part of the manuscript. 
Though you are more than welcome to run these experiments as well (our code implements a variety of dynamic reparameterization methods including DeepR and SET; detailed results of our own runs will also be presented in our response to the anonymous commenter below), we would like to caution you that (a) they are not part of the original paper for the reproducibility study, and (b) since earlier methods like DeepR were not published with source code for the large models/datasets we tested here, we did our own implementation based on the original DeepR paper and on discussions with its authors, so in case you have questions on these baseline methods please contact the authors of the original paper for details.\\n\\nThank you again and please kindly reply to our request for your affiliation.\\n\\nKind regards,\\n\\nAuthors\"}", "{\"title\": \"Authors' response (2/2)\", \"comment\": \"**Claim 4:**\\nOur observation of the 5x slowdown of DeepR compared to ours was intended to demonstrate the computational efficiency of our method as well as why we did not include DeepR in the comparison as it is computationally expensive for large dataset/model. \\nThough you correctly pointed out two mechanistic differences between our method and SET, we believe a much more consequential difference is downplayed--our method produced sparse models that generalize better. Just the superior performance should, in our opinion, warrant a closer look at the underlying mechanisms that gave rise to these advantages, however simple these mechanisms might seem.\\nIn fact, the mechanistic differences between our method and SET are not trivial. \\nFirst, the higher computational efficiency of our method over SET stems from the fact that SET uses a sorting operation over all the weights in a layer whereas ours uses a comparison operation against a threshold for pruning. This was stated in the manuscript, see second bullet point at the end of page 2 and the third line in page 9. \\nSecond, automatic parameter reallocation across layers is a key feature of our algorithm that is entirely novel from SET or DeepR and directly contributed to its superior scalability (eliminating the need to configure sparsity for different layers manually) and superior performance (see Appendix C). \\nPer your suggestion, we did further experiments using the exact form (i.e. cubic) of the sparsity schedule in Zhu et al. 2017 (tensorflow.contrib.model_pruning). The difference from our choice (i.e. exponential) is inconsequential. In the table below we list further experimental results (in test accuracy%) for WRN-28-2 on CIFAR10, for a direct comparison of our method, Zhu et al. 2017, Bellec et al. 2017 (DeepR), and Mocanu et al. 2018 (SET), source code for reproduction also publicized. \\nBecause the DeepR paper did not provide code or hyperparameter settings for the larger-scale experiments we did, we ran a systematic sweep on the parameters of DeepR (as well as on its temperature annealing schedule) and reported the best results here. \\nAs you mentioned, demonstrating the superiority of our method over SET and DeepR in terms of accuracy is a valuable contribution. We intend to include the following results as well as further results comparing against SET and DeepR in Imagenet experiments in our paper.\\n+-----------------------------------------------------------------------------+\\n| Sparsity | 0.9 | 0.8 |\\n+-----------------------------------------------------------------------------+\\n| Bellec et al. 
2017 (DeepR) | 90.81 \\u00b1 0.07 | 91.76 \\u00b1 0.22 |\\n+-----------------------------------------------------------------------------+\\n| Mocanu et al. 2018 (SET) | 93.42 \\u00b1 0.24 | 94.02 \\u00b1 0.09 |\\n+-----------------------------------------------------------------------------+\\n| Zhu et al. 2017 | 93.76 \\u00b1 0.08 | 94.16 \\u00b1 0.12 |\\n+-----------------------------------------------------------------------------+\\n| Ours | 93.68 \\u00b1 0.12 | 94.34 \\u00b1 0.16 |\\n+-----------------------------------------------------------------------------+\\n\\nThank you again and we are happy to address your further comments.\"}", "{\"title\": \"Authors' response (1/2)\", \"comment\": \"Thank you for your comments. Please find our full response below.\\n\\n**Claim 1:**\\nWe do not intend to discount the claims of either Zhu et al. 2017 or Bellec et al. 2017 (DeepR), which are two concurrent papers in ICLR 2018. Being a submission to ICLR 2019, our work had the opportunity to benchmark against both techniques. We found that Zhu et al. 2017 happened to be a stronger baseline, and this was the reason underlying our decision of using it as a previous state-of-the-art benchmark in our paper. In fact, we did compare our method to various previous methods, including DeepR and SET; the reason why we did not include the comparison results in the manuscript was because (1) they did not beat the stronger network compression baseline, and (2) code and hyperparameter settings for these methods for the experiments we did were not available publicly, even though we made our own implementation based on the original papers and our discussion with authors (for performance metrics available in original papers we did include side-by-side comparisons in our manuscript, e.g. performance of LeNet-300-100 at 99% sparsity on MNIST reported by the DeepR paper). Techniques like SET that do not use parameter reallocation among layers fared worse than our technique on MNIST as shown by Fig.5 in the appendix. We are ready to include a full comparison to DeepR and SET in an additional appendix. We also publicized source code for these experiments in addition to that required to reproduce all results in the paper (see response above to the DeepSparse team).\\nHence, by the above facts, we stand by our claim that \\\"we are the first dynamic sparse reparameterization method to perform on par with or better than pruning-based compression techniques such as Zhu et al. 2017.\\\"\\n\\n**Claim 2:**\\nThank you for acknowledgement of our claim of contribution. Your criticism is rather on whether our contribution is valuable or trivial, on which any reader of our paper may have a slightly different opinion from the next--not a factual error in our claim. \\nWe believe the demonstration of the ability to scale up dynamic sparse training from a single layer to a deep network, and from toy-sized models to real-world applications, is rather nontrivial. This is what previous work such as DeepR and SET did not show, and a key consequence of the improved efficiency and scalability achieved by our method.\"}", "{\"comment\": \"I'm not sure what the suspicion is. The link to our registration is provided [1] and the challenge guidelines say: \\\"If available, the authors' code can and should be used; authors of ICLR submissions are encouraged to release their code to facilitate this challenge.\\\" [2]. 
Also, the authors mention regarding their code (Page 12, Appendix A): \\\"Link suppressed for the sake of anonymity during review process.\\\" Hence the request for the code.\\n\\n[1] - https://github.com/reproducibility-challenge/iclr_2019/issues/31\\n[2] - https://github.com/reproducibility-challenge/iclr_2019\", \"title\": \"What's the suspicion\"}", "{\"comment\": \"suspicious\", \"title\": \"\\ud83e\\udd14\\ud83e\\udd14\"}", "{\"comment\": \"Dear Authors,\\n\\nAs part of the ICLR reproducibility challenge, our team has selected this paper for replication.\\nIn order to facilitate the process, we kindly request that you send the link to your code to deepsparse(at)gmail(dot)com. \\n\\nLooking forward to your response.\", \"title\": \"Provide link to code\"}", "{\"comment\": \"\", \"claim_1\": \"Zhu et al. 2017 and DeepR were both submissions to ICLR 2018, thus it is not possible for DeepR to have compared to their pruning technique. They compared to a strong pruning baseline and beat it; claiming that you are the first because your performance exceeds a baseline they could not have compared to doesn't make sense. \\n\\nIf your claim is that DeepR or SET cannot achieve comparable performance to the technique of Zhu et al., you should demonstrate this experimentally and include the results in your paper. A single data point of comparison in a footnote is not sufficient to establish superiority. If you were to demonstrate that DeepR and SET cannot achieve performance that your method can, this is a very valuable contribution and you can explain this and include it as a claim.\\n\\nAlso, a 5x slowdown is not so absurd that it precludes comparison, given that ImageNet can be trained in under a day. You also pointed out that you have already re-implemented their technique. The authors of DeepR do provide code (see the very first line of the appendix). It is not clear why you should have any issue applying it to a deep all-convolutional network; the technique is very straightforward and is agnostic to model architecture. You also provide no comparison to SET, which your technique mirrors very closely.\", \"claim_2\": \"If your claim is that this is the first application of a \\u201cdynamic sparse reparameterization\\u201d to an entire convolutional network, it is not clear why this is a valuable contribution. DeepR and SET can both be trivially applied to an entire convolutional network.\", \"claim_4\": \"As stated above, a 5x slowdown is not so large that it justifies not comparing to a technique when claiming superiority. And a single data point in a footnote is not a sufficient comparison.\\n\\nFor your computational complexity claim, you need to also compare to SET. It is not surprising that DeepR is very slow, and SET will almost certainly be much faster than it. Your technique only differs from SET in your use of an approximate threshold and your weight redistribution scheme. You claim that your technique is faster than SET because of this, but you provide no data to back up this claim.
\\n\\nWhen you do measure the performance of these techniques, it is important to note that the number of iterations between pruning steps is a trivial hyperparameter that can be adjusted for any pruning technique (and is commonly; see TensorFlow model pruning), and that you should compare to these techniques with the same number of iterations between pruning steps.\\n\\nAlso, you claim to compare to the technique of Zhu et al., but the sparsity function you use (listed in the appendix) is not the same as theirs. If you want to compare to their technique, you should use TensorFlow model pruning, which is what was used in their experiments.\\n\\nWe do not dispute the potential value of your approximate thresholding technique and weight redistribution technique. Our issues are that a) you do not properly compare to existing techniques to establish that either of these modifications has provided an improvement and b) you claim significant novelty relative to these techniques, in particular to SET, which is extremely close to your method and to which you do not compare at all.\", \"title\": \"Response\"}", "{\"title\": \"Authors response\", \"comment\": \"*On Claim 1:*\\nIndeed, DeepR showed that it performs better than L1-shrinkage and magnitude-based pruning. However, we use as our compression baseline the iterative pruning technique introduced in Zhu et al. 2017 that gives a stronger baseline and outperforms DeepR as shown by the accuracies in Fig.1a (compare to Fig. 3A in the DeepR paper). Our method closely matches the performance of this stronger baseline (see Fig.1a). Our claim is thus accurate given the stronger compression baseline we used. We will rewrite the claim and make it more accurate by highlighting that it refers to the compression baseline obtained using the method in Zhu et al. 2017, instead of using the more general term \\\"post-training compression\\\". \\n\\nFor the MNIST network, we do indeed compare to one of the few numbers presented in the DeepR paper (see footnote 6 on page 5) and show better performance. We rolled our own implementation of DeepR and found that evaluating DeepR on large imagenet-class networks was very computationally expensive (5x slower than our approach). Moreover, DeepR has more hyper-parameters than our approach involving, for example, an annealing schedule for the parameter update noise, layer-wise regularization coefficients, and hand-tuned layer sparsities. The large number of hyper-parameters, together with the slowness of DeepR, make a well-tuned evaluation on large networks extremely challenging. The authors of the DeepR paper do not provide code, nor guidelines on how to use DeepR in deep all-convolutional networks. We thus limit our comparison to the MNIST case. \\n\\n\\n*On Claim 2:*\\nWe apologize that the wording of this claim is not specific enough. In this claim, we were referring to modern all-convolutional networks such as residual networks. Our experiments showed that DeepR was very slow for these networks in the imagenet case. In the DeepR paper, DeepR was also not applied to the entire convolutional network, but only to a specific layer while other layers were left dense. Ours is the first application of dynamic reparameterization to an entire convolutional network. We fully acknowledge, however, the fact that DeepR was the first dynamic method of the kind applied to a convolutional layer. 
In the next revision, we will make the wording sufficiently clear to reflect the above facts.\\n\\n*On Claim 4:*\\nWe achieve better performance than DeepR on the small MNIST network. For the bigger convolutional networks, DeepR was very slow (see below). We will make the accuracy claim more precise by limiting it to the MNIST case (which is the case in which we can feasibly run and compare to DeepR). We make our claim regarding computational cost based on 2 observations:\\n1)DeepR runs the re-wiring step every iteration while we re-allocate/rewire every few hundreds/thousands of iterations. DeepR also needs to generate a Gaussian random number at each parameter update which incurs extra MAC operations. We will add numbers to the paper to exactly quantify the extra operation needed by DeepR (due to increased rewire frequency and the need for Gaussian random number generation for each parameter update)\\n2)We implemented DeepR ourselves. For imagenet training on 4 Titanxp GPUs, training using DeepR was 5x slower than our approach. We will include this DeepR implementation with our code release. \\n\\nWe did compare against sorting-based pruning methods, because Tensorflow model pruning [3] was based on the sparse compression technique described by Zhu et al. 2017, which was the strongest baseline (called \\\"compressed sparse\\\" in the manuscript) we benchmarked against in the paper.\\n\\nIndeed, runtime is an important metric. We did not observe significant slow-down when using our parameter-reallocation method (since it is only applied sporadically during training). We will include the wall-time runtime of our approach compared to training without parameter-reallocation to make this observation more precise. We will also include a comment on how the runtime of our method compares to DeepR. \\n\\nEarlier methods like DeepR and SET provided key inspirations for this work. However, we have gone further than these previous algorithms and presented concrete results addressing several of their limitations (the computational inefficiency of DeepR, and its need for pre-specified layer sparsities; and the computational inefficiency of SET involving sorting, and the need for pre-specified layer sparsities). We illustrated for the first time the applicability of dynamic re-parameterization to practical large-scale convolutional networks, which earlier dynamic reparameterization approaches were not scalable enough to handle. Thank you again and we are happy to address your further comments.\"}", "{\"comment\": \"We disagree with some of the claims made by this paper.\", \"claim_1\": \"\\u201cOurs is the first systematic method able to train sparse models directly without an increased parameter footprint during the entire course of training, and still achieve performance on par with post-training compression of dense models, the best result at a given sparsity.\\u201d\\n\\nThe authors of [2], which introduces DeepR, compare their technique to l1-shrinkage and magnitude-based pruning and demonstrate on-par or better performance than each for a given sparsity.\\n\\nDeepR achieves the same bounded parameter footprint as the technique presented here, and does not appear to have been evaluated beyond the experiments in the original publication, yet the authors do not compare to this technique. 
Given this, it seems premature for this work to claim that they have achieved something that existing techniques cannot, especially considering [2] demonstrates that they achieve performance on par or better than the compression techniques they compare to.\", \"claim_2\": \"\\u201cWe described the first dynamic reparameterization method for training convolutional network.\\u201d\\n\\nThe original DeepR paper demonstrates results on a convolutional neural network. This paper cites the original DeepR paper and refers to it as a \\u201cdynamic sparse reparameterization\\u201d technique.\", \"claim_4\": \"\\u201cOur method not only outperformed existing dynamic sparse reparameterization techniques, but also incurred much lower computational costs\\u201d\\n\\nThis work does not compare to any existing dynamic sparse reparameterization technique. They also do not measure the runtime of their technique or compare to the baseline sorting-based pruning (e.g., TensorFlow model pruning [3]).\\n\\nIn addition to these issues with the claimed contributions of this paper, the introduced \\u201cdynamic sparse reparameterization\\u201d technique only differs from Sparse Evolutionary Training (SET) [1] in its use of an approximate threshold for removing weights and in how it redistributes weights after pruning. Both of these modifications are potentially valuable contributions, but the authors make very broad claims rather than list these modifications and demonstrate their value over the existing methods.\\n\\nReferences\\n1. https://www.nature.com/articles/s41467-018-04316-3.pdf\\n2. https://arxiv.org/pdf/1711.05136.pdf\\n3. https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/model_pruning\", \"title\": \"Issues with claimed contributions\"}" ] }
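The exchange above turns on two concrete mechanics: SET-style pruning, which sorts all weight magnitudes in a layer (O(n log n)), versus the authors' threshold-based pruning (one comparison per weight, O(n)), and the cubic sparsity ramp of Zhu et al. 2017. A minimal NumPy sketch of these ideas follows; it is a reconstruction from the discussion, not the authors' released code, and the function names, the adaptive-threshold bookkeeping, and the ramp endpoints are illustrative assumptions.

```python
import numpy as np

def cubic_sparsity(step, begin_step, end_step, s_init=0.0, s_final=0.9):
    # Cubic schedule of Zhu et al. 2017: sparsity ramps from s_init to s_final.
    frac = np.clip((step - begin_step) / max(end_step - begin_step, 1), 0.0, 1.0)
    return s_final + (s_init - s_final) * (1.0 - frac) ** 3

def prune_by_sort(w, sparsity):
    # SET-style pruning: a full sort of |w| finds the k-th smallest magnitude.
    k = min(int(sparsity * w.size), w.size - 1)
    cutoff = np.sort(np.abs(w).ravel())[k]
    return np.where(np.abs(w) >= cutoff, w, 0.0)

def prune_by_threshold(w, threshold):
    # Threshold-based pruning: a single comparison per weight; the threshold is
    # adapted between (infrequent) pruning events to track the target sparsity.
    return np.where(np.abs(w) > threshold, w, 0.0)
```

Since, as both sides note, reallocation happens only once per hundreds of iterations, the asymptotic gap is a secondary cost in practice.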
H1ersoRqtm
Structured Neural Summarization
[ "Patrick Fernandes", "Miltiadis Allamanis", "Marc Brockschmidt" ]
Summarization of long sequences into a concise statement is a core problem in natural language processing, requiring non-trivial understanding of the input. Based on the promising results of graph neural networks on highly structured data, we develop a framework to extend existing sequence encoders with a graph component that can reason about long-distance relationships in weakly structured data such as text. In an extensive evaluation, we show that the resulting hybrid sequence-graph models outperform both pure sequence models and pure graph models on a range of summarization tasks.
[ "Summarization", "Graphs", "Source Code" ]
https://openreview.net/pdf?id=H1ersoRqtm
https://openreview.net/forum?id=H1ersoRqtm
ICLR.cc/2019/Conference
2019
{ "note_id": [ "BylTcDexlV", "S1e6m3190m", "HygQkgOY07", "HkexklOFRX", "rylTptBSR7", "BkxA0BqW0m", "rygnAxKlRQ", "BkehgP1qaQ", "B1g-t4TSTX", "SJlxsmaBpm", "ByxzV76S6X", "BJgtlQaBTm", "S1xZrF4J6m", "HygI4PN52m", "Sygyxl0IhX" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544714133325, 1543269412538, 1543237595317, 1543237591693, 1542965700621, 1542723030077, 1542652115701, 1542219507730, 1541948536580, 1541948311744, 1541948202338, 1541948144637, 1541519673461, 1541191470208, 1540968423478 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper621/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper621/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper621/Authors" ], [ "ICLR.cc/2019/Conference/Paper621/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper621/Authors" ], [ "ICLR.cc/2019/Conference/Paper621/Authors" ], [ "ICLR.cc/2019/Conference/Paper621/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper621/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper621/Authors" ], [ "ICLR.cc/2019/Conference/Paper621/Authors" ], [ "ICLR.cc/2019/Conference/Paper621/Authors" ], [ "ICLR.cc/2019/Conference/Paper621/Authors" ], [ "ICLR.cc/2019/Conference/Paper621/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper621/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper621/AnonReviewer3" ] ], "structured_content_str": [ "{\"metareview\": \"This paper examines ways of encoding structured input such as source code or parsed natural language into representations that are conducive for summarization. Specifically, the innovation is to not use only a sequence model, nor only a tree model, but both. Empirical evaluation is extensive, and it is exhaustively demonstrated that combining both models provides the best results.\\n\\nThe major perceived issue of the paper is the lack of methodological novelty, which the authors acknowledge. In addition, there are other existing graph-based architectures that have not been compared to.\\n\\nHowever, given that the experimental results are informative and convincing, I think that the paper is a reasonable candidate to be accepted to the conference.\", \"confidence\": \"2: The area chair is not sure\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Minor novelty, extensive and informative empirical comparison\"}", "{\"title\": \"Useful additions have been made to the experiments\", \"comment\": \"In light of the extensive new experiments and their conclusions, I indeed think that this paper is now much stronger. I have changed my original score from 4 to 7.\"}", "{\"title\": \"Results of additional experiments\", \"comment\": \"We have added the results of a number of additional experiments to more clearly evaluate the effect of our contribution. 
On natural language summarization, our new Table 2 shows that (a) additional semantic information seem to not help sequence-based models; (b) using only syntactical, but not semantical information in the Sequence GNN setting is helpful, but larger gains are made when semantic information is included; and (c) these results become even starker when considering a fairer baseline (using the same codebase as ours), instead of results from another paper (with its own specialized hyperparameter tuning).\\n\\nWhile we plan to provide these results eventually, we identified another issue in our implementation of the coverage mechanism (due to which loss was not correctly normalized), and so this may take some more days. However, we believe that while these additional results will further improve the experimental evaluation, they are not crucial to document the value of our contribution.\\n\\nOverall, we would kindly request that you reconsider your rating given the additional experimental results, or provide further guidance on how to improve the paper.\"}", "{\"title\": \"Please review experiments\", \"comment\": \"The authors have posted new experimental results. Do you think that these have addressed some of your concerns?\"}", "{\"title\": \"Updates to Paper and Results of Experiments\", \"comment\": \"We have updated the paper with additional experiments that the reviewers showed interest in (Table 2). Unfortunately, due to a problem with integrating the \\u201ccoverage\\u201d idea of See et al. into our codebase and the long training times for these models, we were unable to update the paper with these results so far. However, we expect to provide these results over the next few days. We note that the time since the start of the rebuttal period was too short to do hyperparameter optimization on the CNN/DailyMail summarization dataset, and we instead used the same hyperparameters we used before (without coverage).\\n\\nConcretely, we provide the following additional experimental results:\\n\\n\\u2022\\tWe have added the GNN->LSTM+pointer model for the Method Naming task as requested by Reviewer 3. The results show that removing the biLSTM encoder worsens the results compared to our biLSTM+GNN->LSTM+pointer network.\\n\\n\\u2022\\tWe ran biLSTM encoder-based baselines for the CNN/DM task using the OpenNMT implementation, to better compare with our extension of that codebase. Despite using the same setup as See et al. our experiments yield slightly worse results. This is most likely due to the fact that we have not performed separate hyperparameter optimization for each task but instead use identical hyperparameters for _all_ our tasks and datasets. Our biLSTM+GNN results are most fairly compared to these baselines.\\n\\n\\u2022\\tAs discussed in our last post, we have performed experiments 1-3 on CNN/DM to analyze the influence of the extra information provided by the CoreNLP parser. The results can be summarized as follows:\\n\\n o\\t[Experiment 1] We ran a biLSTM encoder with access to the CoreNLP parse information. Concretely, we extended the token embedding with an embedding of the per-token information provided by the parser, and additionally added tags marking references using fresh \\u201c<REF1>\\u201d, \\u201c<REF2>\\u201d, \\u2026 tokens. 
Our results indicate that this only minimally improves results compared to the standard biLSTM encoder operating on words, and hence that exposing the structure explicitly by using a GNN encoder provides more advantages.\\n\\n o\\t[Experiment 2] Removing all linguistic structure, i.e. Stanford CoreNLP edges, but retaining the extra sentence nodes and edges, yields a small improvement over the baseline biLSTM-based encoder, increasing ROUGE-2 score by one point and yielding minor differences in the other metrics.\\n\\n o\\t[Experiment 3] When adding edges that connect tokens with identical stemmed string representations, the performance increases a bit but does not reach the performance levels comparable to using the full coreference resolution.\\n\\n\\nWe have clarified the above points in the text. In conclusion, pending on the coverage experiment, the above experiments demonstrate that:\\n\\na)\\tNeither biLSTM, nor GNN encoders alone achieve best performance in summarization tasks and the biLSTM+GNN combination improves on all baselines in all cases.\\n\\nb)\\tEncoding additional linguistic structure is helpful for natural language summarization but cannot be captured adequately using only a standard biLSTM encoder.\\n\\nc)\\tEncoding sentence and connecting long-distance tokens with the same stems only slightly helps performance, while using more advanced resolution of references yields bigger gains.\\n\\nFinally, we would like to emphasize again the broad applicability of our summarization method to both natural language and source code. While the natural language summarization task is clearly the most interesting for the reviewers, our summarization model is also able to compete with (and beat) specialized approaches on the two source code tasks on three datasets.\"}", "{\"title\": \"Answer regarding related work\", \"comment\": \"The overall concept in Marcheggiani and Titov's work is similar, but we generalise it in four ways:\\n (1) We consider a wider range of sequence encoders.\\n (2) We show that the resulting GNN structure is useful for sequence decoding, with attention over the generated inputs.\\n (3) We consider a wider range of different tasks, with different graph structures.\\n (4) We incorporate semantic and across-sentence relationships, instead of only syntactic relationships.\\n \\nWhile this work tackles the same problem as we do (namely, modeling long-distance\\nrelationships in NLP) and uses the same fundamental idea (namely, modeling\\nrelationships in graphs), we feel that our work provides the empirical evidence\\nthat the idea is widely applicable, both across diverse modelling choices and\\ntask choices.\\n \\nBastings et al. provide a follow-up on that work, focusing on aspect (2), adding\\na sequence decoder. Similarly, De Cao et al. build on a similar idea, but focus\\non aspect (4), but do not introduce intra-document relationships, but instead use\\nthe graph structure to reflect an entity graph. This does not use end-to-end training\\nfor the sequential structure of the natural language (they use pre-trained, fixed\\nELMo).\\n \\n\\nOverall, we believe our contribution to generalise in all dimensions (1)-(4), hopefully\\nproviding enough experimental evidence so that all researchers working on sequential\\ndata with some inherent structure will consider mixed sequence/graph models in the\\nfuture. 
This is why we included non-natural language tasks (but with obvious graph\\nstructure), showing the wide applicability of the idea.\"}", "{\"title\": \"Follow up question regarding related work\", \"comment\": \"Hello!\", \"i_had_a_follow_up_question_regarding_related_work\": \"even given the response it still wasn't clear to me the differences and advantages of the proposed method, both theoretically and empirically, compared to previous work incorporating graph structures on the input side of sequence-to-sequence models. Even if the task is different, the methodology seems like it would be largely similar, so these methods would be reasonable baselines. Without a comparison it makes it a bit difficult to tell the merit of this particular work. Would you mind elaborating?\"}", "{\"title\": \"The rebuttal add some useful clarifications and proposed experiments\", \"comment\": \"Thank you for your reply and useful clarifications. The additional experiments you proposed may greatly enhance the quality of your paper indeed. My rating is subject to change depending on the outcome of these experiments.\"}", "{\"title\": \"Response & Additional Experiments\", \"comment\": \"Thanks for your detailed comments, which we will integrate your comments in the next version of our paper.\", \"on_novelty\": \"We agree that we are not contributing fundamentally new models here \\u2013 indeed, we refrained from introducing a more complex architecture to make it easy to adopt this modeling approach. We believe that our work introduces a simple way to fuse state-of-the-art sequence (not only LSTMs, but /any/ sequence encoder) learning with reasoning enabled by domain-specific graph constructions. We have not found this idea in prior work, and our experiments show the value across three different tasks from different domains. We hope that other researchers can profit from our work by integrating similar techniques into their own architectures and believe that this deserves publication and wider dissemination.\\n\\nAs discussed in our reply to all reviewers, we will run additional experiments on the CNN/DM to analyze the influence of different graph constructions.\\n\\n\\nOn GNN->LSTM+pointer on MethodNaming:\\n\\nWe decided to show this ablation experiment only on the MethodDoc task for presentation reasons, but we will rerun the model and provide additional results on the MethodNaming task in our next revision.\\n\\n\\nOn comparison with Alon et al. 2018 on the Java-Large corpus:\\n\\nWe did run these experiments but realized that we could obtain best results by models that \\u201cfelt\\u201d like they had too much capacity. Further analysis of this behavior traced this to a problem with a duplication of samples in the dataset. For example, about 30.7% of files in the Java-Large are near-duplicates of other files in the corpus (across all folds), indicating that results on these datasets primarily measure overfitting to the data. We managed to train competitive models, but only by choosing very large sizes for the hidden dimensions (>1000) and removing dropout. In contrast, Java-Small only has 3.0% duplicates. We will clarify this in the next version of our paper. [This is similar to our experiences with the Barone & Sennrich dataset discussed in Sect. 
4.1.2.]\", \"on_nl_summarization_and_additional_information\": \"We agree that our model uses additional information that is not available to the pure sequence models \\u2013 indeed, we believe that the ability to use this information is the core contribution of our work. Indeed, it is unclear how to add information from the CoreNLP parser to a standard sequence model (how, for example, are coreference connections represented?). As discussed in our reply to all reviews, we will run additional experiments to further elucidate this effect. Primarily, we will run an LSTM baseline that uses additional per-token information in the embedding of words, and additionally will introduce fresh tokens (\\u201c<REF1>\\u201d, \\u2026) to mark points at which references are made. If you had other comparisons in mind, please do react quickly, as these experiments do take a bit of time...\\n\\n\\nOn comparison with Nallapati et al. 2016:\\n\\nThe structure of the \\u201cNext\\u201d tokens in the graph model resembles that of Nallapati et al. (2016). However, the core difference is in how message-passing GNNs work. In Nallapati et al. (2016) computing the representations this is truly hierarchical, I.e. information flows in one direction: sentence representations are computed, then these are combined into a document representation. In a GNN, messages are passed in both directions, and thus our per-sentence nodes also allow the exchange of information between different tokens in the same sentence. Hence, our model is more comparable to a hierarchical setting in which information can flow both up and down.\", \"on_using_coverage\": \"We wanted to avoid the additional work for this experiment, since we believe that the improvements from adding a coverage mechanism are orthogonal to the ones provided by our model but will now run this and provide the results once the experiments have finished.\", \"on_weighted_averaging\": \"In past experiments on a variety of datasets and tasks, we have found that weighted averaging helps compared to uniform averaging. We believe that this is due to the fact that weighted averaging acts as an attention-like mechanism that allows the model to pick the salient information from the graph while allowing the message-passing to \\u201cfreely\\u201d transfer information. Since this is also the accepted method in the GNN literature (e.g. Gilmer et al. 2017) we did not further experiment with this design decision. As our compute resources are limited, we want to avoid rerunning this ablation on the CNN/DM dataset, but will provide additional experiments on the two smaller tasks. \\n\\n\\nPlease, let us know if these do not sufficiently address the concerns you raise in your review and what alternative experiments are missing.\"}", "{\"title\": \"Response & Additional Experiments\", \"comment\": \"Thanks for your thoughtful review and your time. As discussed in our reply to all reviews, we will run four additional experiments covering points raised by the different reviewers.\", \"on_related_work_in_nlp_with_graphs\": \"Thank you for bringing up additional related work. The cited works handle quite different tasks, and so drawing a direct comparison to our work is hard. Marcheggiani et al. (2017) uses their model, with a single GCN propagation, for classification not sequence prediction, whereas Bastings et al. (2017) does sentence-to-sentence translation. Both employ purely syntactic graphs and thus lack the advantages that additional semantic information can provide. 
Our additional experiments 2 and 3 are designed to show the effect of this. The short paper of De Cao et al. (2018) uses a GCN over entities in multiple documents. Finally, we want to highlight that we propose to use graphs for longer documents, whereas the approaches above are primarily concerned with single sentences. On average the CNN/DM documents lead to graphs with 900 nodes and 2.5k edges.\\n\\nRegarding the question of SequentialGNN vs GCN, we believe that there are no substantial differences between the use of GCNs and GGNNs. The core contribution proposed in our paper is the idea to fuse information obtained from state-of-the-art sequence models with a form of structured reasoning that can integrate domain knowledge.\\nWe will clarify the above in the related work section.\\n\\n\\nOn the performance of SelfAtt vs. SelfAtt+GNN on MethodDoc C#:\\n\\nIn the paper, we discuss this result explicitly in the third paragraph of 4.1.4. The core reason for the decrease in ROUGE scores is that the SelfAtt+GNN model produces substantially longer outputs, which tends to impact ROUGE scores. This causes the substantial improvement in the BLEU score. We will extend the appendix to include examples of outputs of the SelfAtt/SelfAtt+GNN models that illustrate how the longer output improves the information content of the results. Overall, we want to note that ROUGE and BLEU are problematic measures for these tasks, but we are not aware of any other metrics that can be computed at scale.\", \"on_randomness_of_shown_samples\": \"The sample in Figure 2 is one appearing in See et al. For Figure 1, we had to pick a sample that would fit within the given space, so it\\u2019s not randomly sampled. All other examples are randomly selected.\"}", "{\"title\": \"Response & Additional Experiments\", \"comment\": \"Thanks for your time and helpful comments. As discussed in our reply to all reviews, we will run four additional experiments covering points raised by the different reviewers. However, while we believe that a human evaluation of generated summaries would be helpful, setting this up during the rebuttal period seems to be impossible. Do let us know if you want us to run more experiments / provide more results.\"}", "{\"title\": \"Response Summary\", \"comment\": \"Thank you for all your comments we respond to your comments individually. Below you can find a summary for all the reviewers.\", \"we_plan_to_run_the_following_experiments\": \"\\u2022\\t[Experiment 1] BiLSTM on natural language inputs using Stanford CoreNLP information. For this, we will extend token embeddings by information from the CoreNLP parser, and introduce special tokens (\\u201c<REF1>\\u201d, \\u2026) to mark co-references.\\n\\u2022\\t[Experiment 2] BiLSTM+GNN on natural language inputs using only syntactic information. Concretely, each token will be represented by one node and we introduce one node per sentence. The only edges will be \\u201cNextToken\\u201d and \\u201cNextSentence\\u201d. This experiment tests the performance of our model using only syntactic information used by other models (e.g., hierarchical representations that split sentences).\\n\\u2022\\t[Experiment 3] BiLSTM+GNN on natural language input using syntactic and equality information. This is like experiment 2, but will also add edges between non-stopword nodes corresponding to tokens that have identical string representations when stemmed. \\n\\u2022\\t[Experiment 4] BilSTM+GNN -> LSTM+Pointer+Coverage. We will extend the full model by See et al. 
with additional graph information.\\n\\nPlease, do let us know if these sufficiently address the concerns you mention in your review or if you would like to see other experiments.\\n\\nWe also want to emphasize again the broad applicability of our method. While the natural language summarization task is clearly the most interesting one, we do want to remark that our very general model is able to compete with (and beat) specialized approaches on the source code tasks. We have spent very little optimizing our models to the different tasks, and strongly believe that intensive tuning of hyperparameters to each of these tasks could further improve our results.\"}", "{\"title\": \"Interesting idea and promising results\", \"review\": \"This paper presents a structural summarization model with a graph-based encoder extended from RNN. Experiments are conducted on three tasks, including generating names for methods, generating descriptions for a function, and generating text summaries for news articles. Experimental results show that the proposed usage of GNN can improve performance by the models without GNN. I think the method is reasonable and results are promising, but I'd like to see more focused evaluation on the semantics captured by the proposed model (compared to the models without GNN).\", \"here_are_some_questions_and_suggestions\": [\"Overall, I think additional evaluation should be done to evaluate on the semantic understanding aspects of the methods. Concretely, the Graph-based encoder has access to semantic information, such as entities. In order to better understand how this helps with the overall improvement, the authors should consider automatic evaluation and human evaluation to measure its contribution. Also from fig. 3, we can see that all methods get the \\\"utf8 string\\\" part right, but it's hard to say the proposed method generates better description.\", \"In the last table in Tab. 1, why the authors don't have results for adding GNN for the pointer-generator model with coverage?\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"A straightforward improvement for abstractive summarization\", \"review\": \"STRUCTURED NEURAL SUMMARIZATION\", \"summary\": \"This work combines Graph Neural Networks with a sequential approach to abstractive summarization across both natural and programming language datasets. The extension of GNNs is simple, but effective across all datasets in comparison to external baselines for CNN/DailyMail, internal baselines for C#, and a combination of both for Java. The idea of applying a more structured approach to summarization is well motivated given that current summarization methods tend to lack the consistency that a structured approach can provide. The chosen examples (which I hope are randomly sampled; are they?) do seem to suggest the efficacy of this approach with that intuition.\", \"comments\": \"Should probably cite CNN/DailyMail when it is first introduced as NLSummarization in Section 2 like you do the other datasets.\\n\\nCan you further elaborate on how your approach is similar to and differs from that in Marcheggiani et al 2017 on Graph CNNs for Semantic Role Labeling, Bastings et al 2017 on Graph Convolutional Encoders for Syntax-aware Machine Translation, and De Cao et al 2018? 
Why should one elect to go the direction of sequential GNNs over the GCNs of those other works, and how might you compare against them? I would like to see some kind of ablation analysis or direct comparison with similar methods if possible.\\n\\nWhy would GNNs hurt SelfAtt performance on MethodDoc C# SelfAtt+GNN / SelfAtt?\\n\\nWhy not add the coverage mechanism from See et al 2017 in order to demonstrate that the method does in fact surpass that prior work? I'm left wondering whether the proposed method's returns diminish once coverage is added.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Limited novelty and missing some key experiments\", \"review\": [\"Note: I changed my original score from 4 to 7 based on the new experiments that answer many of the questions I had about the relative performance of each part of the model. The review below is the original one I wrote before the paper changes.\", \"# Positive aspects of this submission\", \"The intuition and motivation behind the proposed model are well explained.\", \"The empirical results on the MethodNaming and MethodDoc tasks are very promising.\", \"# Criticism\", \"The novelty of the proposed model is limited since it is essentially adding an existing GGNN layer, introduced by Li et al. (2015), on top of an existing LSTM encoder. The most important novelty seems to be the custom graph representation for these sequence inputs to make them compatible with the GGNN, which should then deserve a more in-depth study (i.e. ablation study with different graph representations, etc).\", \"Since you compare your model performance against Alon et al. on Java-small, it should be fair to report the numbers on Java-med and Java-large as well.\", \"The \\\"GNN -> LSTM+POINTER\\\" experiment results are reported on the MethodDoc task, but not for MethodNaming. Reporting this number for MethodNaming is essential to show the claimed empirical superiority of the hybrid encoder compared to GNN only.\", \"I have doubts about the usefulness of the proposed model for natural language summarization, for the following reasons:\", \"The comparison of the proposed model for NLSummarization against See et al. is a bit unfair, since it uses additional information through the CoreNLP named entity recognizer and coreference models. With the experiments listed in Table 1, there is no way to know whether the increased performance is due to the hybrid encoder design or due the additional named entity and coreference information. Adding the entity and coreference data in a simpler way (i.e. at the token embedding level with a basic sequence encoder) in the ablation study would very useful to answer that question.\", \"In NLSummarization, connecting sentence nodes using a NEXT edge can be analogous to using a hierarchical encoder, as used by Nallapati et al. (\\\"Abstractive Text Summarization using Sequence-to-sequence RNNs and Beyond\\\", 2016). Ignoring the other edges of the GNN graph, what are the theoretical and empirical advantages of your method compared to this sentence-level hierarchical encoder?\", \"Adding the coverage decoder introduced by See et al. to your model would have been very useful to prove that the current performance gap is indeed due to the simplistic decoder and not something else.\", \"How essential is the weighted averaging for graph-level document representation (Gilmer et al. 
2017) compared to uniform averaging?\", \"A few minor comments about writing:\", \"In Table 1, please put the highest numbers in bold to improve readability\", \"On page 7, the word \\\"summaries\\\" is missing in \\\"the model produces natural-looking with no noticeable negative impact\\\"\", \"On page 9, \\\"cove content\\\" should be \\\"core content\\\"\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}" ] }
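The experiment descriptions in this thread pin down the graph construction quite precisely: one node per token, one node per sentence, NextToken and NextSentence edges (Experiment 2), plus long-distance edges between non-stopword tokens whose stemmed forms match (Experiment 3). A rough Python sketch of that construction is below; it is inferred from the descriptions rather than taken from the authors' code, and the InSentence linkage, the stand-in stemmer, and the tuple-based graph encoding are our own assumptions (the full model additionally uses Stanford CoreNLP coreference edges in place of stem equality).

```python
from collections import defaultdict

def build_summarization_graph(sentences, stopwords=frozenset(), stem=str.lower):
    # sentences: a list of token lists. Returns (nodes, edges), where nodes are
    # (kind, label) pairs and edges are (source_id, target_id, edge_type) triples.
    nodes, edges = [], []
    stem_to_nodes = defaultdict(list)
    prev_sentence = None
    for tokens in sentences:
        sentence_id = len(nodes)
        nodes.append(("sentence", ""))
        if prev_sentence is not None:
            edges.append((prev_sentence, sentence_id, "NextSentence"))
        prev_sentence = sentence_id
        prev_token = None
        for tok in tokens:
            token_id = len(nodes)
            nodes.append(("token", tok))
            edges.append((sentence_id, token_id, "InSentence"))  # assumed linkage
            if prev_token is not None:
                edges.append((prev_token, token_id, "NextToken"))
            prev_token = token_id
            if tok.lower() not in stopwords:
                for earlier in stem_to_nodes[stem(tok)]:
                    edges.append((earlier, token_id, "SameStem"))  # Experiment 3 edges
                stem_to_nodes[stem(tok)].append(token_id)
    return nodes, edges
```

On the thread's reported numbers, the Experiment 2 edges alone give a small gain over the biLSTM baseline, while the semantic (coreference) edges account for the larger improvements.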
rJxHsjRqFQ
Hyperbolic Attention Networks
[ "Caglar Gulcehre", "Misha Denil", "Mateusz Malinowski", "Ali Razavi", "Razvan Pascanu", "Karl Moritz Hermann", "Peter Battaglia", "Victor Bapst", "David Raposo", "Adam Santoro", "Nando de Freitas" ]
Recent approaches have successfully demonstrated the benefits of learning the parameters of shallow networks in hyperbolic space. We extend this line of work by imposing hyperbolic geometry on the embeddings used to compute the ubiquitous attention mechanisms for different neural network architectures. By changing only the geometry of the embeddings of object representations, we can use the embedding space more efficiently without increasing the number of parameters of the model. Since the number of objects grows exponentially with semantic distance from the query, hyperbolic geometry --as opposed to Euclidean geometry-- can encode those objects without interference. Our method shows improvements in generalization on neural machine translation on WMT'14 (English to German), on learning on graphs (both synthetic and real-world graph tasks), and on visual question answering (CLEVR), while keeping the neural representations compact.
[ "Hyperbolic Geometry", "Attention Methods", "Reasoning on Graphs", "Relation Learning", "Scale Free Graphs", "Transformers", "Power Law" ]
https://openreview.net/pdf?id=rJxHsjRqFQ
https://openreview.net/forum?id=rJxHsjRqFQ
ICLR.cc/2019/Conference
2019
{ "note_id": [ "B1extQeflE", "H1lSV55B0m", "HyeEJ59H0Q", "S1xA9YqSAQ", "rJe2zKqBA7", "B1eVP5WS6m", "H1g2vUL9nQ", "Hylg-SL927", "ryl7AYUH3Q", "BygxP5uf9Q", "BylT_-kTY7" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "comment", "official_review", "official_review", "official_review", "official_comment", "comment" ], "note_created": [ 1544844151688, 1542986285250, 1542986203779, 1542986134244, 1542986004396, 1541900892097, 1541199459542, 1541199095721, 1540872650733, 1538587224230, 1538220405280 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper620/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper620/Authors" ], [ "ICLR.cc/2019/Conference/Paper620/Authors" ], [ "ICLR.cc/2019/Conference/Paper620/Authors" ], [ "ICLR.cc/2019/Conference/Paper620/Authors" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper620/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper620/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper620/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper620/Authors" ], [ "(anonymous)" ] ], "structured_content_str": [ "{\"metareview\": \"Reviewers all agree that this is a strong submission.\\nI also believe it is interesting that only by changing the geometry of embeddings, they can use the space more efficiently without increasing the number of parameters.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Strong and interesting paper\"}", "{\"title\": \"On the Experiments with the Synthetic Graph Datasets\", \"comment\": \"Thank you for your remarkable feedback and comments about our paper.\\n\\n> Question: In Figure 3 (Center), the number of nodes 1000 and 1200 are pretty close. How about the results on 500 nodes and 2000 nodes? It seems the accuracy difference increases as the number of nodes increases. Is this true? \\n\\nThe difference in performance between the hyperbolic and Euclidean models is negligible if the graph is small enough (e.g. 200 nodes) and we would expect this trend to continue for graphs of size 2000 or larger, with the gap between hyperbolic and Euclidean models growing as the size of the graph increases. This is mainly because learning on larger graphs would require more capacity. We will add the experiments on graphs with 2000 nodes or larger to the camera-ready version of the paper.\"}", "{\"title\": \"On Experiments with other VQA Models and Datasets\", \"comment\": \"We really appreciate your feedback and comments about our paper.\\n\\n> Baselines: The authors main contribution is the matching and aggregation operator. It always feels like the multi-modal community is divided between VQA and CLEVR datasets, but there should be a lot in common between them. Specifically, what is called here the matching operator, had several variants in VQA, such as Multimodal Compact Bilinear Pooling by Fukui et al., or Multi-modal Factorized Bilinear Pooling by You et al. etc. I think the paper would benefit from adding other variants of matching functions.\", \"datasets\": \"I think the approach might work as well in VQA dataset, which I find more interesting than clever because of the real-world nature of it. You can plug it into methods like MFB, or as pairwise potentials in Structured Attentions by Zhu et al, or High-Order attention by Schwartz et al.\\n\\nIn our work, we show the results of our hyperbolic module on a wide variety of different problems -- NMT, CLEVR, and graph problems. 
We used CLEVR because it has many relational questions that we hypothesize may benefit from representation in hyperbolic space. In contrast, the real-world VQA dataset consists of somewhat shorter, non-relational questions. Moreover, most of the challenges in the VQA dataset are about better visual representation, which, in our work, we would like to abstract away. Additionally, existing highly-performing architectures on CLEVR, such as RN, also build upon relational inductive biases, and hence there is a direct link between our module and these works. Nonetheless, we agree with the reviewers that the highly-performing VQA architectures may also benefit from our module, which we leave as possible future work.\"}", "{\"title\": \"A Response to Improves small models but not large ones, reasonable but not very strong experimental comparisons\", \"comment\": \"Thank you for your constructive feedback and comments about our paper.\\n\\n> The novelty of this paper is replacing the Euclidean metric with another existing metric, which has already been used in previous ML models. So the contribution is limited.\\n\\nWe present a method to ensure that the activations of a neural network can be interpreted as points in hyperbolic space, which is not guaranteed a priori. We show that by using hyperbolic geometry to compute the attention, it is possible to reason over relations and graphs more efficiently and with more compact representations. \\n\\nWe are among the first to adopt the inductive biases from hyperbolic geometry to improve the attention mechanisms of neural networks, and we have done so in a general and modular way that can be used directly in any existing attention-based architecture.\\n\\n> As explicitly claimed in the paper and also reflected by the experimental results (e.g., Transformer-Big in Table 2), the hyperbolic metric only brings noticeable improvement to small neural nets with limited capacity on relatively small datasets. When applied to most SOTA models (which are usually large/deep/wide neural networks) on larger datasets, it loses the advantage. This fact might seriously limit the application of the proposed technique.\\n\\nA comparison between network compression techniques and hyperbolic attention is an exciting direction for future work, especially since in principle they are orthogonal approaches, and perhaps combining them could be more effective than either in isolation.\\n\\n> Although the hyperbolic metric reflects the power-law distribution, which is a very natural assumption verified on many kinds of real data (social networks and physical statistics), I am not fully convinced that it still holds on an embedding space produced by a neural net (since attention is usually applied to the outputs of a neural net). \\n\\nIt is important to note that the use of hyperbolic geometry is not an assumption about how the activations behave; it is a structure that is imposed on the activations as a modelling choice. We use hyperbolic geometry to provide power-law structure as an inductive bias to the model, not as an assumption about how the activations would behave in the absence of this imposed structure. The improvements we have observed on the graph tasks also support this.\\n\\n> In the experiments, does the model with the proposed metric cost similar training/inference time compared to the baselines? What is the trade-off between improvements and extra time costs?\\n\\nIndeed the costs are similar.
All the operations that we use are simple element-wise scalar operations. These operations have a negligible contribution to the total computational cost.\\n\\n> I notice that the results of the proposed attention in Table 2 are ~0.5% higher than the results from the earlier arXiv version of this paper. What is the reason for the improvements? Did you increase the training steps?\\n\\nWe observed the mentioned improvements mainly after training our model longer. In the latest version we trained our models for as many steps as the original Transformer [1]. We made this change after speaking with the authors of the \\u201cAttention is All You Need\\u201d paper, who suggested this as a way to improve the performance of the model. This accounts for the increased performance compared to the arXiv version of the paper.\\n\\n[1] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., ... & Polosukhin, I. (2017). Attention is all you need. In Advances in Neural Information Processing Systems (pp. 5998-6008).\"}", "{\"title\": \"Relevant Work - We will cite this paper\", \"comment\": \"> This is quite an interesting work, utilizing hyperbolic geometry for more efficient representation. It reminds me of a previous work \\\"Lie Access Neural Turing Machines\\\" that proposed to use general manifolds as the \\\"index space\\\" of memory items, which are attended to like in standard attention. Could you comment on the relation of your paper to that work?\\n\\nThanks for pointing out this paper; we were not aware of it. \\n\\nIt indeed seems to be related to our method, but with important distinctions. Yang et al. introduce a memory access mechanism for the NTM model where the keys and values of the memory are parametrized on a differentiable Lie-group manifold. To compute the matching function, Yang et al. also use the distance between the key and the value. Both Lie Access NTMs (LA-NTM) and our paper are closely related to each other in the sense of introducing attention mechanisms by using certain geometric tools. However, the goals and the motivation of each work are quite different. LA-NTM uses the manifold of Lie-group actions to be able to learn better access mechanisms for the memory. Our paper focuses on improving the learning of relations in the data by combining the attention mechanism with hyperbolic inductive biases. It would be interesting to combine both approaches.\\n\\nWe will cite this work and relate it to our method in our revised manuscript.\"}", "{\"comment\": \"Dear authors,\\n\\nThis is quite an interesting work, utilizing hyperbolic geometry for more efficient representation. It reminds me of a previous work \\\"Lie Access Neural Turing Machines\\\" that proposed to use general manifolds as the \\\"index space\\\" of memory items, which are attended to like in standard attention. Could you comment on the relation of your paper to that work?\\n\\nG. Yang and A. Rush. Lie-Access Neural Turing Machines. https://openreview.net/forum?id=Byiy-Pqlx&noteId=Byiy-Pqlx\", \"title\": \"Reminiscent of \\\"Lie Access Neural Turing Machines\\\"\"}", "{\"title\": \"Applying a new metric to attention mechanism, improves small models but not large ones, reasonable but not very strong experimental comparisons.\", \"review\": \"This paper replaces the dot-product similarity used in attention mechanisms with the negative hyperbolic distance, and applies this new attention to the existing Transformer model, graph attention networks (GAT), and Relation Networks (RN).
Accordingly, they use the Einstein midpoint to compute the aggregation weights of attention in hyperbolic space. The idea of using hyperbolic rather than Euclidean space is based on the assumption that the input embeddings (neural net activations) lie on the hyperbolic manifold, which follows a power-law distribution and can be seen as a smooth description of the tree-like hierarchy of data points. This assumption might hold for small neural networks with relatively low-dimensional output. One main reason why this paper adopts the hyperbolic space is that the volume of hyperbolic space grows exponentially with increasing radius while that of Euclidean space grows only polynomially. Using hyperbolic distance can increase the capacity of networks and handle the complexity of data. Experiments on the Transformer and relation networks show that Transformer, GAT and RN with the new attention metric produce better performance than with the Euclidean distance.\", \"pros\": \"1. Compared to the existing methods using representations for shallow models in hyperbolic geometry, this paper extends the idea to deep neural networks. \\n2. The proposed attention mechanism can be easily applied to many existing networks to enhance their capacity.\\n3. The experiments show several interesting results: 1) the hyperbolic recursive transformer (RT) is consistently superior to the Euclidean RT across the tasks in this paper; 2) hyperbolic space substantially benefits low-capacity networks (i.e., low-dimensionality hidden state); 3) the Einstein midpoint is better than Euclidean aggregation in hyperbolic space; 4) using sigmoid rather than softmax to compute attention weights may achieve better effectiveness on some tasks, for the reason that the attention weights over different entities do not compete with each other.\", \"cons\": \"1. The novelty of this paper is replacing the Euclidean metric with another existing metric, which has already been used in previous ML models. So the contribution is limited.\\n2. As explicitly claimed in the paper and also reflected by the experimental results (e.g., Transformer-Big in Table 2), the hyperbolic metric only brings noticeable improvement to small neural nets with limited capacity on relatively small datasets. When applied to most SOTA models (which are usually large/deep/wide neural networks) on larger datasets, it loses the advantage. This fact might seriously limit the application of the proposed technique.\\n3. Small models are preferred for inference, especially on edge devices. But model compression and knowledge distillation can give small models performance similar to large models, which might be much better than directly training a small model with the proposed metric.\\n4. Although the hyperbolic metric reflects the power-law distribution, which is a very natural assumption verified on many kinds of real data (social networks and physical statistics), I am not fully convinced that it still holds on an embedding space produced by a neural net (since attention is usually applied to the outputs of a neural net). \\n5. In the experiments, does the model with the proposed metric cost similar training/inference time compared to the baselines? What is the trade-off between improvements and extra time costs? I notice that the results of the proposed attention in Table 2 are ~0.5% higher than the results from the earlier arXiv version of this paper. What is the reason for the improvements?
Did you increase the training steps?\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Refreshing approach for matching and aggregating\", \"review\": \"The authors propose a novel approach to improve relational attention by changing the matching and aggregation functions to use hyperbolic geometry. By doing so, the network can exploit the metric structure the functions live on. The method was evaluated and showed improvements over baselines on a wide range of tasks including translation, graph learning, and visual question answering.\", \"pros\": [\"High-quality paper.\", \"The hyperbolic matching function is novel and interesting.\", \"Even though the subject isn\\u2019t trivial, the intuition was described well.\", \"The evaluation is comprehensive on several relational tasks.\"], \"cons\": [\"Baselines: The authors' main contribution is the matching and aggregation operator. It always feels like the multi-modal community is divided between VQA and CLEVR datasets, but there should be a lot in common between them. Specifically, what is called here the matching operator had several variants in VQA, such as Multimodal Compact Bilinear Pooling by Fukui et al., or Multi-modal Factorized Bilinear Pooling by You et al., etc. I think the paper would benefit from adding other variants of matching functions.\", \"Datasets: I think the approach might work as well on the VQA dataset, which I find more interesting than CLEVR because of its real-world nature. You can plug it into methods like MFB, or as pairwise potentials in Structured Attentions by Zhu et al, or High-Order attention by Schwartz et al\"], \"conclusion\": \"Better matching and aggregation operations are always important; they can potentially improve performance in many challenges. The proposed method is novel and interesting, therefore I will be happy to see this paper as part of ICLR.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"ACCEPTABLE\", \"review\": \"The authors proposed to exploit hyperbolic geometry in computing the attention mechanisms for neural networks. Specifically, they break the attention read operation into two parts: matching and aggregation. In the matching step, they use the hyperbolic distance to quantify the matching between a query and a key; in the aggregation step, they use the Einstein midpoint. Their experimental results based on synthetic and real-world data show that the new method outperforms the traditional method based on Euclidean distance. This paper is acceptable.\", \"question\": \"In Figure 3 (Center), the number of nodes 1000 and 1200 are pretty close. How about the results on 500 nodes and 2000 nodes? It seems the accuracy difference increases as the number of nodes increases.
Is this true?\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Results on Pubmed and PPI\", \"comment\": \"Hi,\\n\\nThank you for your interest in our paper.\\n\\nWe have provided results on VQA (CLEVR and Sort-of-CLEVR), Neural Machine Translation (WMT'14 En-De), graph classification tasks (synthetic, with different sizes), and finally on transductive graph tasks such as Citeseer and Cora in our paper. We covered a wide range of possible tasks that an attention mechanism with different architectures can be applied to. We have provided both extensive analysis and promising results on the tasks that we explored in our paper. Our goal in this paper was to show the generality of our approach on a wide range of tasks.\\n\\nFor the time being, we do not have any plans to provide more experiments on other datasets besides the ones that are already presented in the paper.\\n\\nBest,\"}", "{\"comment\": \"I would like to ask whether you are still working on these two datasets, or not intending to compare with other models on them?\", \"title\": \"No results on Pubmed and PPI\"}" ] }
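To make the mechanism debated in the notes above concrete, here is a minimal NumPy sketch of hyperbolic matching and Einstein-midpoint aggregation. It is a sketch under stated assumptions: the inverse temperature beta, the bias c, and the use of Poincare coordinates for queries/keys with Klein coordinates for values are our own illustrative choices, not necessarily the paper's exact formulation.

import numpy as np

def poincare_dist(u, v, eps=1e-9):
    # Hyperbolic distance between points inside the Poincare ball (||x|| < 1).
    sq = np.sum((u - v) ** 2, axis=-1)
    den = (1.0 - np.sum(u * u, axis=-1)) * (1.0 - np.sum(v * v, axis=-1))
    return np.arccosh(1.0 + 2.0 * sq / np.maximum(den, eps))

def hyperbolic_attention(q, K, V, beta=1.0, c=0.0):
    # Matching: score each key by its negative hyperbolic distance to the query.
    scores = -beta * poincare_dist(q[None, :], K) - c
    w = np.exp(scores - scores.max())
    w = w / w.sum()  # softmax weights; sigmoid is the non-competing alternative
    # Aggregation: Einstein midpoint of the values, taken in Klein coordinates.
    gamma = 1.0 / np.sqrt(1.0 - np.sum(V * V, axis=-1))  # Lorentz factors
    coef = w * gamma
    return (coef[:, None] * V).sum(axis=0) / coef.sum()

Replacing the softmax with a sigmoid removes the competition between the weights of different entities, which is the variant one review above found more effective on some tasks.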
B1gHjoRqYQ
An Efficient and Margin-Approaching Zero-Confidence Adversarial Attack
[ "Yang Zhang", "Shiyu Chang", "Mo Yu", "Kaizhi Qian" ]
There are two major paradigms of white-box adversarial attacks that attempt to impose input perturbations. The first paradigm, called the fix-perturbation attack, crafts adversarial samples within a given perturbation level. The second paradigm, called the zero-confidence attack, finds the smallest perturbation needed to cause misclassification, also known as the margin of an input feature. While the former paradigm is well-resolved, the latter is not. Existing zero-confidence attacks either introduce significant approximation errors, or are too time-consuming. We therefore propose MarginAttack, a zero-confidence attack framework that is able to compute the margin with improved accuracy and efficiency. Our experiments show that MarginAttack is able to compute a smaller margin than the state-of-the-art zero-confidence attacks, and matches the state-of-the-art fix-perturbation attacks. In addition, it runs significantly faster than the Carlini-Wagner attack, currently the most accurate zero-confidence attack algorithm.
[ "adversarial attack", "zero-confidence attack" ]
https://openreview.net/pdf?id=B1gHjoRqYQ
https://openreview.net/forum?id=B1gHjoRqYQ
ICLR.cc/2019/Conference
2019
{ "note_id": [ "ByxWNcoHgE", "SylK5GMm1V", "H1gqfh0GkV", "ryeWe50MkV", "rklgf6ffyN", "HJlxfhp1J4", "S1loX4IlA7", "SJldNGLl07", "SyxZ5bIxRX", "ryxVfwf567", "HJlyTzMqpQ", "ryl9jbfcpm", "SkxZXWf9pm", "B1ljclMcpm", "r1eYIxfqT7", "SygnM6x5aQ", "B1xOut6DpQ", "ByxzpzkyT7", "S1xcRXhFnQ", "r1lSn2GUjm", "r1x5mtzSiX", "BkgN7aTzjQ", "Bkgn9VnC5Q", "S1gn4AtA5Q", "SJxqX98Ccm", "SkgEYamC5m", "SJxJd84T9m" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "comment", "official_review", "official_review", "official_review", "official_comment", "comment", "official_comment", "comment", "official_comment", "comment", "official_comment", "comment" ], "note_created": [ 1545087528550, 1543869072607, 1543855122173, 1543854569143, 1543806216394, 1543654407979, 1542640675167, 1542640175874, 1542640008930, 1542231819668, 1542230711360, 1542230433890, 1542230296597, 1542230163166, 1542230096820, 1542225172375, 1542080879607, 1541497529919, 1541157841606, 1539873965172, 1539807521638, 1539656988193, 1539388564053, 1539378739839, 1539365409586, 1539353979578, 1539290727297 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper619/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper619/Authors" ], [ "ICLR.cc/2019/Conference/Paper619/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper619/Authors" ], [ "ICLR.cc/2019/Conference/Paper619/Authors" ], [ "ICLR.cc/2019/Conference/Paper619/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper619/Authors" ], [ "ICLR.cc/2019/Conference/Paper619/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper619/Authors" ], [ "~Nicholas_Carlini1" ], [ "ICLR.cc/2019/Conference/Paper619/Authors" ], [ "ICLR.cc/2019/Conference/Paper619/Authors" ], [ "ICLR.cc/2019/Conference/Paper619/Authors" ], [ "ICLR.cc/2019/Conference/Paper619/Authors" ], [ "ICLR.cc/2019/Conference/Paper619/Authors" ], [ "~Nicholas_Carlini1" ], [ "ICLR.cc/2019/Conference/Paper619/AnonReviewer4" ], [ "ICLR.cc/2019/Conference/Paper619/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper619/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper619/Authors" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper619/Authors" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper619/Authors" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper619/Authors" ], [ "(anonymous)" ] ], "structured_content_str": [ "{\"metareview\": \"The paper proposes a new method for adversarial attacks, MarginAttack, which finds adversarial examples with small distortion and runs faster than the CW baseline, but slower than other methods. The authors provide theoretical guarantees and a broad set of experiments.\\n\\nIn the discussion, a consistent concern has been that, experimentally, the method does not perform noticeably better than previous approaches. The authors mention that the lines are too thick to reveal the difference. It has been pointed out that this might be related to the way the experiments are conducted, but the proposed method still does better than other methods. AnonReviewer1 mentions that the assumptions needed for the theoretical part might be too strong, meaning that the main contribution of the paper is in the experimental side. \\n\\nThe comparisons with other methods and the assumptions made in the theorems seem to have caused quite some confusion and there was a fair amount of discussion. 
Following the discussion session, AnonReviewer1 updated his rating from 5 to 6 with high confidence. \\n\\nThe referees all rate the paper as not very strong, with one marginally above the acceptance threshold and two marginally below the acceptance threshold. \\n\\nAlthough the paper seems to propose valuable ideas, and it appears that the discussion has clarified many questions from the initial submission, the paper has not provided a clear, convincing selling point at this time.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Many questions - not convincing enough at this time\"}", "{\"title\": \"Regarding updated results and running time comparison\", \"comment\": \"Thank you for following up. Regarding your two questions:\\n\\n1. The updated results in this sub-thread are on CIFAR-10 and MNIST. The Dec 3rd comment is on ImageNet. Previously there have been discussions on the ImageNet results in the thread entitled 'Minor comments'. The Dec 3rd comment is the general response to these discussions.\\n\\n2. Here are more details on the runtime settings of both algorithms. CW comes with an option 'abort_early', and it has been turned on. This option will abort the iterations when the algorithm converges, which accelerates the algorithm without hurting the performance. On the other hand, we didn't implement a similar mechanism in MarginAttack, so MarginAttack will run all the way through the end even if it has already converged. This gives CW an advantage. In spite of this, as shown in the results in this thread, MarginAttack still runs faster and is more accurate than most of the settings of CW.\\n\\nIf the above discussion is not cogent enough, we have some results where the number of iterations is cut down to 200 (1/10 of the original number of iterations; binary search steps set to 10):\", \"mnist\": \"Perturb. Lev.\tMargin2000\t CW2000 \tMargin200 CW200\t \n1\t\t 25.69\t\t\t24.86\t\t25.17\t\t23.87\n1.41\t\t 66.34\t\t\t63.23\t\t64.99\t\t60.60\n1.73\t\t 88.40\t\t\t85.94\t\t87.28\t\t84.25\n2\t\t 97.11\t\t\t95.42\t\t96.51\t\t94.11\", \"cifar10\": \"Perturb. Lev.\tMargin2000 \tCW2000 \tMargin200 CW200\n8\t\t 24.27\t\t\t24.04\t\t24.31\t\t23.86 \n15\t\t 46.37\t\t\t45.56\t\t45.63\t\t44.98\n25\t\t 73.82\t\t\t71.80\t\t71.91\t\t70.95\n40\t\t 93.29\t\t\t92.10\t\t91.71\t\t91.02\n\nAs can be seen, even Margin200 can outperform CW2000 in most of the scenarios (except for CIFAR10 at perturbation level 40). Hope this is cogent enough to show the improved accuracy-efficiency tradeoff of MarginAttack.\"}", "{\"title\": \"Two more questions\", \"comment\": \"That is good to know, thank you.\", \"i_have_two_more_questions\": [\"Do the CIFAR-10 results mentioned in this sub-thread already include the improved CW baseline mentioned in your comment from Dec 3rd \\\"Updated CW results on ImageNet\\\"?\", \"How are the running times determined in your comparison above? Both MarginAttack and CW are iterative methods that can (in principle) be stopped early to improve the running time.
So ideally, the two algorithms would be compared in a plot of running time vs. attack success rate (with intermediate results from the algorithms after each iteration giving a trade-off curve).\"]}", "{\"title\": \"Updated CW results on ImageNet\", \"comment\": \"With the help of the useful discussions in https://github.com/tensorflow/cleverhans/issues/813, we are able to get the CW ImageNet results right. We would like to update the results as follows:\\n\\nPerturb. Lev.\tMarginAttack\tCW bin5\n10\t\t 40.42\t\t\t40.36\n32\t\t 60.59\t\t\t58.71\n50\t\t 74.89\t\t\t70.99\n80\t\t 89.43\t\t\t85.64\n\nThis table and a continuous curve will replace the original results in the paper.\"}", "{\"title\": \"Yes\", \"comment\": \"Yes. Each binary search step setting comes with a separate step size tuning.\"}", "{\"title\": \"Further tuning?\", \"comment\": \"I would like to thank the authors for running this second set of experiments.\\n\\nDid you also re-tune the step sizes after changing the binary search in the CW attack?\"}", "{\"title\": \"Comparison with CW & We did not assume convexity\", \"comment\": \"Regarding your first concern on the comparison with CW: In short, MarginAttack is able to achieve a higher attack success rate than CW AND a shorter running time. The paper may not make this point obvious enough, probably because the curves are too thick to reveal the difference. To show this point clearly, we would like to refer you to the results in our response to reviewer 3, where we scanned through the number of binary search steps and measured the success rate and running time.\\n\\nAs can be seen, MarginAttack has a higher success rate than all the versions of CW. There is a success-rate-efficiency tradeoff in CW, as a smaller number of binary search steps leads to a lower success rate. However, even with 10 binary search steps, CW is still unable to outperform MarginAttack in terms of success rate. On the other hand, with very small numbers of binary search steps, CW still runs slower than MarginAttack. Hope these results will clarify your major concern.\\n\\nRegarding your minor concern: In the theorem, we did not assume convexity. The assumption with the name 'convexity' is saying that the constraint set should not be 'too concave'. Please check the following figure, where we listed which decision boundaries are permitted by our theorem and which are not.\", \"https\": \"//docs.google.com/viewer?url=https://raw.githubusercontent.com/anon181018/iclr2019_rebuttal/master/figure2.pdf\\n\\nAs can be seen, the convexity assumption permits a wide variety of decision boundaries. Among the few cases that it does not permit is the case where the decision boundary bends more than the L2 ball does. In this case, the critical point becomes a local maximum rather than a local minimum.\"}", "{\"title\": \"Thanks - reading through these\", \"comment\": \"Thank you for posting all these detailed results and explanations. I have a much better understanding of the motivations and am reading through all your responses at present.\"}", "{\"title\": \"Updated results on CIFAR10\", \"comment\": \"Thanks to the hyperparameter tuning suggestions, we were able to achieve a set of better results on CIFAR10. The updated results are as follows:\", \"attack_success_rate\": \"Perturb.
Lev.\tMarginAttack\tCW bin10\tCW bin5\t CW bin3\t CW bin1\n8\t\t 24.27\t\t\t24.04\t\t23.99\t\t23.89 15.14\n15\t\t 46.37\t\t\t45.56\t\t45.39\t\t45.21 20.53\n25\t\t 73.82\t\t\t71.80\t\t71.66\t\t71.56 20.57\n40\t\t 93.29\t\t\t92.10\t\t91.86\t\t91.49 20.57\", \"running_time\": \"MarginAttack\tCW bin10\tCW bin5\t CW bin3\t CW bin1\n51.03\t\t\t350.10\t\t168.88\t\t100.10 24.30\n\nCompared to the previous results posted, the attack success rate of CW bin10 is almost the same, but the results of CW with fewer binary search steps are improved. The basic conclusions do not change, though. Notice that increasing the number of binary search steps does help to improve the success rate, but even compared with 10 binary search steps, MarginAttack still maintains a higher success rate at all levels. In the meantime, MarginAttack has a much lower running time, and thus strikes a better success-rate-efficiency tradeoff.\"}", "{\"comment\": \"Just a quick comment on selecting the number of binary search steps: if you make your initial guess reasonable, then usually you don't need more than two or three steps. I have no theoretical basis for these numbers, but 1.0 for MNIST and 0.1 for CIFAR typically works well when using a range of [0,1] for the input image.\\n\\nIf you're using [0,255] then you'll want to change your initial guess to be something that fits that range.\", \"title\": \"Selecting the number of binary search steps\"}", "{\"title\": \"Regarding comparison with CW attack\", \"comment\": \"Thank you for bringing up the binary search issue. We would like to clarify that the binary search is an integral part of the CW attack and that it cannot be replaced with hyperparameter tuning beforehand. This is because the purpose of the binary search is to find the Lagrange multiplier for the Lagrangian, which is specific to *each individual input sample*. In other words, each different input sample comes with a different optimal Lagrange multiplier. Therefore, it is impossible to tune a universal Lagrange multiplier and remove the binary search. The CW algorithm can be regarded as a two-level optimization problem. For each sample, it first optimizes over the Lagrange multiplier via binary search, and then optimizes over the adversarial sample via gradient descent. In short, the Lagrange multiplier is technically not a hyperparameter, but an optimization variable just like the adversarial sample itself.\\n\\nThe CW implementation does come with a set of hyperparameters that it asks the users to tune, including the initial Lagrange multiplier guess and the initial step size, both of which are already tuned to their best performance.\\n\\nNevertheless, although the binary search cannot be removed, we are interested to see what will happen if it is reduced. For this we perform an additional experiment where the number of binary search steps is reduced to 5 (named CW bin5), 3 (named CW bin3) and 1 (named CW bin1) on MNIST and CIFAR10. Below are the attack success rates under different perturbation levels.\", \"mnist\": \"MarginAttack\tCW bin10\tCW bin5\tCW bin3\tCW bin1\n3.01\t\t\t 16.02\t\t8.99\t\t5.77\t\t1.37\", \"cifar10\": \"MarginAttack\tCW bin10\tCW bin5\t CW bin3\n51.03\t\t\t234.75\t\t102.68\t\t33.98\n\nAs can be seen, the performance does drop as the number of binary search steps decreases. In particular, the algorithm completely fails when the binary search step number drops below a certain threshold (1 for MNIST and 3 for CIFAR10).
Below the failure threshold, there is a disproportionate drop in running time, which is probably due to the early stop mechanism in CW. We conjecture that the threshold is higher when the dataset has greater variations. These results provide more complete evidence on how MarginAttack is able to achieve a much better accuracy-efficiency tradeoff than CW. We will add these results to the paper.\\n\\nHope this clarification helps.\"}", "{\"title\": \"Regarding the comparison with PGD\", \"comment\": \"First, we would like to clarify that the learning rate of PGD is tuned the same way as for CW. We somehow missed this statement in the paper. We will add this statement back to the paper.\\n\\nSecond, yes, it is entirely possible to convert PGD to a zero-confidence attack. In our response to another reviewer, we estimated the computational overhead. We will copy the analysis here. Consider, for example, the CIFAR-10 dataset. Since for our model most margins fall within 10, let\\u2019s assume the binary search range is 10 (for adversarially trained models this number will be much higher). If we want to achieve an accuracy of 0.1, then we need at least 7 binary search steps. In other words, the computation complexity increases by 7 times. The above discussion is not saying that it is impossible to convert PGD to a zero-confidence attack efficiently, but it at least provides a perspective on why the complexity reduction and accuracy improvement of MarginAttack are valuable.\\n\\nFinally, we would like to point out that while PGD is the state of the art in L-infinity attacks, it is not in L2 attacks. One of the reasons is that PGD alternately projects onto the constraint box and the L2 ball, which is not equivalent to projecting onto the intersection of both. In the L-infinity case they are equivalent. The following link is a figure that provides an illustration of this.\", \"https\": \"//docs.google.com/viewer?url=https://raw.githubusercontent.com/anon181018/iclr2019_rebuttal/master/figure3.pdf\\n\\nThat is the reason why we did not incorporate PGD L2 in our comparison. However, we would like to provide the results here.\", \"mnist\": \"Perturb. Lev.\tMarginAttack\tPGD L2\n1\t\t 25.69\t\t\t12.53\n1.41\t\t 66.34\t\t\t37.11\n1.73\t\t 88.40\t\t\t64.13\n2\t\t 97.11\t\t\t81.68\", \"cifar10\": \"Perturb. Lev.\tMarginAttack\tPGD L2\n8\t\t 24.27\t\t\t12.94\n15\t\t 46.37\t\t\t25.30\n25\t\t 73.82\t\t\t46.90\n40\t\t 93.29\t\t\t57.13\", \"imagenet\": \"Perturb. Lev.\tMarginAttack\tPGD L2\n10\t\t 40.42\t\t\t29.45\n32\t\t 60.59\t\t\t40.73\n50\t\t 74.89\t\t\t50.80\n80\t\t 89.73\t\t\t66.35\n\nHope the above clarifications help.\"}", "{\"title\": \"Regarding your other reviews\", \"comment\": \"First, regarding the white-box attack definition: yes, a white-box attack is understood as having access to all network information, including the structure and parameters, so it is possible to compute gradient information. Black-box attacks only have access to the logits, or only to the decision, so it is not possible to accurately compute the gradient information. We will make this distinction clearer.\\n\\nSecond, the PGD convergence guarantee we meant is only about local convergence. Under mild assumptions, PGD is able to converge to a critical point of the PGD loss function, where no feasible direction can increase the loss function.
We will clarify this in our updated version.\\n\\nThird, by a \\u2018more realistic attack\\u2019, we meant that under a true attack setting, an attacker would not confine himself to a fixed perturbation, but is more likely to keep attacking until success, while minimizing the perturbation.\\n\\nFourth, we will correct our statement about the earliest work that incorporates gradient information into adversarial attacks.\\n\\nFinally, we will change the norm notation.\"}", "{\"title\": \"Regarding the theorem assumptions\", \"comment\": \"Although this is not the major focus of your comment, we would like to revisit the theorem assumptions. While there are nine assumptions, these assumptions are in fact more realistic than expected. Take the convexity assumption, which you mentioned in your review, as an example. This assumption does not say that the constraint has to be convex. It only says that the constraint should not be \\u2018too concave\\u2019. In particular, the curvature of the decision boundary should not exceed that of the L2 or L-infinity ball. For better illustration, we plotted some decision boundaries that are allowed by the assumption, and some that are not.\", \"please_check_the_following_link\": \"\", \"https\": \"//docs.google.com/viewer?url=https://raw.githubusercontent.com/anon181018/iclr2019_rebuttal/master/figure2.pdf\\n\\nAs can be seen, the convexity assumption permits a wide variety of decision boundaries. Among the few cases that it does not permit is the case where the decision boundary bends more than the L2 ball does. In this case, the critical point becomes a local maximum rather than a local minimum.\\n\\nThe other assumptions are also more realistic than their names sound. The differentiability assumption does not stipulate that the constraint has to be differentiable. It actually permits countably infinite jump discontinuities. The Lipschitz continuity assumption does not assume Lipschitz continuity everywhere, but only at x*. We are not saying that the assumptions are very loose, but they are realistic enough to shed some light on the actual convergence property of MarginAttack. Nevertheless, we are considering adding a 2D toy example as you suggested. We will post further responses if there are further updates.\"}", "{\"title\": \"Zero-Confidence vs Fix-Perturbation\", \"comment\": \"The following link is a figure that explains the difference between the zero-confidence attack and the fix-perturbation attack.\", \"https\": \"//docs.google.com/viewer?url=https://raw.githubusercontent.com/anon181018/iclr2019_rebuttal/master/figure1.pdf\\n\\nAs can be seen, the zero-confidence attack finds the closest point on the decision boundary, while the fix-perturbation attack finds adversarial samples within a fixed perturbation level. Both attacks are equivalent if we only want to compute the attack success rate under a given perturbation level. However, we will be better off with zero-confidence attacks if we want to\\n\\n1) Compute the margin of each individual example; and\\n2) Probe and study the decision boundary of a classifier\\n\\nOf course, we can also measure the margin of each example using a fix-perturbation attack, for example PGD, by binary searching over the perturbation levels. However, the computation cost will significantly increase. Consider, for example, the CIFAR-10 dataset. Since for our model most margins fall within 10, let\\u2019s assume the binary search range is 10 (for adversarially trained models this number will be much higher).
If we want to achieve an accuracy of 0.1, then we need at least 7 binary search steps. In other words, the computation complexity increases by 7 times. In fact, CW applies a similar binary search idea to achieve a zero-confidence attack, and that is why its computation cost is high. The above discussion is not saying that it is impossible to convert PGD to a zero-confidence attack efficiently, but it at least provides a perspective on why zero-confidence attacks are challenging, and why the complexity reduction and accuracy improvement of MarginAttack are valuable.\"}", "{\"comment\": \"This review seems to focus mainly on one graph in Figure 3. While I agree that this is confusing, I suspect that it's just a problem with how the authors are running the attack on ImageNet. The proposed attack still does better than DeepFool by the same margin as on CIFAR-10 and MNIST. This is supported by the reproduction script from ( https://github.com/tensorflow/cleverhans/issues/813 ), which produces an improved curve that is nearly identical to the authors' attack.\\n\\nSo I probably wouldn't look unfavorably on the paper for that reason alone. This is something that can probably be explained and/or fixed. I'm trying to look at the code that's provided on the github issue to see if I can figure out what's going on. \\n\\n[I am intentionally avoiding commenting on the content of the paper in any other way.]\", \"title\": \"Working on figuring out what's different\"}", "{\"title\": \"I cannot see why the proposed method is better than CW attack\", \"review\": \"This paper proposes an efficient zero-confidence attack algorithm, MARGINATTACK, which uses a modified version of Rosen's algorithm to optimize the same objective as the CW attack. Under a set of conditions, the authors proved convergence of the proposed attack algorithm. My main concern about this paper is why this algorithm performs better than the CW attack. I would suggest comparing with the CW attack under different sets of hyper-parameters.\", \"minor_comment\": \"The theoretical proof depends on the convexity assumption; I would also suggest comparing the proposed attack with CW and other benchmarks on some simple models that satisfy the assumptions.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Claims to be significantly faster than the CW attack, but I have some questions about the experiments\", \"review\": \"The authors propose a new method for constructing adversarial examples called MarginAttack. The method is inspired by Rosen's algorithm, a classical algorithm in constrained optimization. At its core, Rosen's algorithm (instantiated for adversarial examples) alternates between moving towards the set of misclassified points and moving towards the original data point (while ensuring that we do not move too far away from the set of misclassified points). The authors provide theoretical guarantees (local convergence) and a broad set of experiments. The experiments show that MarginAttack finds adversarial examples with small distortion (as good as the baselines or slightly better), and that the algorithm runs faster than the Carlini-Wagner (CW) baseline (but slower than other methods).\\n\\nThe authors make a distinction between \\\"fixed perturbation\\\" attacks and \\\"zero confidence\\\" attacks.
The former finds the strongest attack within a given constrained set, while the latter finds the smallest perturbation that leads to a misclassification. Methods such as projected gradient descent fall into the \\\"fixed perturbation\\\" category, while MarginAttack and CW belong to the \\\"zero confidence\\\" category. The authors claim that zero confidence attacks pose a harder problem and hence mainly compare their experimental results to the CW attack. Indeed, their results show that MarginAttack is 3x - 5x faster than CW and sometimes achieves smaller perturbations.\\n\\nFirst of all, I would like to emphasize that the authors conducted a thorough experimental study on multiple datasets using multiple baseline algorithms. Unfortunately, the comparison to CW and PGD still leaves some questions in my opinion:\\n\\n- The authors state that CW does an internal binary search over the Lagrange multiplier, and that this search goes for up to 10 steps. As a result, it is not clear whether the running time benchmarks are a fair comparison since MarginAttack does not automatically tune its parameters. To the best of my knowledge, the CW implementation in Cleverhans is specifically set up so that the user does not need to tune a large number of hyperparameters (the implementation accepts a running time overhead to achieve this). Since MarginAttack also contains multiple hyperparameters (see Table 4), it would be interesting to see how the running time of MarginAttack compares to that of a tuned CW implementation without the binary search.\\n\\n- The authors explicitly state that the step sizes for CW were tuned for best performance, but do not mention this for PGD. For a fair comparison, the step sizes used for PGD should also be (approximately) tuned. Moreover, it is not clear why PGD is only used for an l_inf comparison and not an l_2 comparison.\\n\\n- In the introduction, the authors emphasize the distinction between fixed perturbation attacks and zero confidence attacks. However, from an optimization point of view, these two notions are clearly related and a fixed perturbation attack can be converted to a small perturbation / zero confidence attack via a binary search over the perturbation size. While one would indeed expect an overhead due to the binary search, it is not clear a priori how large this overhead needs to be to achieve a competitive zero confidence attack with PGD (especially with a tuned step size for PGD, see above).\\n\\nI would be grateful if the authors could provide their view on these points. Until then, I will assign a rating of 5 since tuning the parameters of optimization algorithms is crucial for a fair comparison.\", \"additional_comments\": [\"In the introduction, the authors equate white-box attacks with access to gradient information. But generally a white-box attack is understood as an attack that has arbitrary access to the target network. It may be helpful for the reader to clarify this.\", \"In the second paragraph of the introduction, the authors claim that fixed perturbation attacks and zero confidence attacks differ significantly. But as pointed out above, it is possible to convert a fixed perturbation attack to a zero confidence attack via a binary search. So it is not clear that there is a large gap in difficulty. Moreover, the authors state that fixed perturbation attacks often come with theoretical guarantees.
But to the best of my knowledge, there is no comprehensive theory that describes when a fixed perturbation attack should be expected to succeed in attacking a commonly used neural network.\", \"On top of Page 2, the authors claim that zero-confidence attacks are a more realistic attack setting. Why is that?\", \"The authors state that JSMA (Papernot et al., 2016) is one of the earliest works that use gradient information for constructing adversarial examples. However, L-BFGS as employed by Szegedy et al., 2013 also uses gradient information. Moreover, the authors may want to cite the work of Biggio et al. from 2013 (see the survey https://arxiv.org/abs/1712.03141).\", \"Since all distances referred to by d(x, y) seem to be norms (and the paper relies on the existence of dual norms), it may be clearer for the reader to use the norm notation || . || from the beginning.\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Unclear problem statement; mixed results\", \"review\": \"I have changed my rating from 5 to 6 after reading the numerous and thorough rebuttals from the authors. I hope they will incorporate these clarifications and additional experiments into the final version of the paper if accepted.\\n\\nThe purpose of this paper is presumably to approximate the margin of a sample as accurately as possible. This is clearly an intractable problem. Thus all attacks make some kind of approximation, including this paper. I am still a bit confused about the difference between \\\"zero-confidence attacks\\\" and those that don't fall into that category such as PGD. Since all of these are approximations, and we cannot know how far we are from the true margin, I don't see why these categories help. The authors spend two paragraphs in the introduction trying to draw a distinction but I am still not convinced. \\n\\nThe proofs provided by the authors rely on convexity and many other assumptions, which makes them not very useful for the real-world case. What would have been helpful is to show the accuracy of their margin for simple binary toy 2D problems, where the true margin and their approximation can be visualized. This was not done. This reduces the paper to an empirical exercise rather than a true understanding of their method's advantages and limitations.\\n\\nFinally, the experimental results do not show any significant advantage over PGD, either in running time (they are slower) or perturbation norm. Thus their novelty rests on the definition of a zero confidence attack, and on the importance of such an attack. So clarifying the above question will help to judge the paper's novelty.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Code posted\", \"comment\": \"Just posted the code in the issue.\"}", "{\"comment\": \"Raised a github issue on CleverHans to see what's going wrong. My code is given there, which does 2-3x better than your figure on ImageNet.\", \"https\": \"//github.com/tensorflow/cleverhans/issues/813\\n\\nIt would be helpful to see your code if you could share it anonymously.\", \"title\": \"Releasing Code for CW ImageNet\"}", "{\"title\": \"We used pre-softmax logits\", \"comment\": \"Thanks for the reminder!
Upon checking we confirm that we used the pre-softmax logits.\\n\\nWe will keep up our efforts to ensure the results are right. If you would like to share your configuration details at any time, it is always welcome. Thanks again for initiating the discussion!\"}", "{\"comment\": \"One final check: did you run CW on the pre-softmax logits of the ResNet? This takes a few extra lines of code for Keras to do, because by default the ResNet50 gives a post-softmax probability distribution. You will have to remove the softmax operation from the resnet, or as a quick check you can add a tf.log() around the output of the model.\\n\\nCW is known to perform very poorly on the post-softmax probability outputs. In my code, if I incorrectly use the post-softmax probability values instead, I can reproduce similar (higher distortion) results to yours and the prior paper.\", \"title\": \"... one more thing\"}", "{\"title\": \"CW settings\", \"comment\": \"No worries. We appreciate your comments and would like to ensure our results are right.\\n\\nBefore we discuss the CW settings, we would like to first clarify that the '2.2e-7' I mentioned is not our result. It is the result reported in the decision attack paper (https://openreview.net/forum?id=SyZI0GWCZ from ICLR last year) that you previously mentioned. They also find that CW performs worse than DeepFool. According to my understanding and guess, this should be the per-pixel squared distance with a pixel range of [0, 1]. Therefore, converting it to the regular L2 distortion should yield around 50. Again, this is only our interpretation of their results. We would need to consult the authors of that paper to confirm it.\\n\\nWe are interested in your CW results and would like to know more about how you configured the attack. Did you use CleverHans? If yes, how did you set the configuration parameters?\\n\\nTo be transparent, here are our CleverHans settings for CW:\\n\\ncw_params = {'binary_search_steps': 10,\\n 'y': l,\\n 'max_iterations': 2000,\\n 'learning_rate': 0.01,  # also tried 0.001, 0.05 and 0.1\\n 'batch_size': 100,\\n 'initial_const': 0.1,  # also tried 0.01\\n 'clip_min': 0,\\n 'clip_max': 255,\\n 'abort_early': False}  # also tried True\\n\\nWe scanned over the candidate settings and the results we reported were the best. Please let us know if you find anything problematic. We will be happy to make it right.\"}", "{\"comment\": \"Thank you for explaining the difference.\\n\\nIt still surprises me that 100+ iterations of gradient descent with CW will do worse than DeepFool. Are you performing CW untargeted, to compare to DeepFool? On ImageNet with a ResNet50 I can reach 95% adversarial success with an L2 distortion of 150 (when on a scale of 0-255) with 100 iterations of CW. Can you clarify how you are getting \\\"2.2e-7\\\"? The numbers in the paper are between 0 and 200.\\n\\nSorry to be picky about this, but given that your curve matches CW exactly for MNIST and CIFAR and is significantly better on ImageNet, I would like to make sure this is accurate.\", \"title\": \"Still unsure about CW ImageNet results\"}", "{\"title\": \"Regarding your comments\", \"comment\": \"Thank you for your interest in our work! Regarding your comments:\\n\\n1. Yes, the idea of 'crawling along the decision boundary' is related to the L2 version of MarginAttack (Eq. (8)), and serves as a good reference if a black-box version of MarginAttack is to be developed. So we will add this paper to our references.
Partly because it is a white-box attack, MarginAttack does not have to wait until it reaches the decision boundary before it moves along the decision boundary, which is shown to significantly improve convergence both empirically and theoretically. Also, MarginAttack encompasses much richer attack schemes, because the L-infinity version of MarginAttack (Eq. (10)), as well as other valid settings of a_k and b_k, follows a projection direction that differs from moving along the decision boundary. Nevertheless, we appreciate that you pointed out this relevant paper, and we will update our reference list accordingly.\\n\\n2. In fact, the decision attack paper also finds that CW performs worse than DeepFool on ImageNet. In the table at the bottom of page 6, CW gets larger median perturbation norms than DeepFool does for all of the three architectures on ImageNet. In particular, for the ResNet50 architecture (which we also used), the median perturbation norm of CW is 2.2e-7, and that of DeepFool is 7.5e-8.\\n\\nThe original paper of the CW attack may shed some light on this. According to the paper, CW does perform better than DeepFool on ImageNet (Table V), but that is for the best case only, which refers to trying 100 randomly chosen adversarial classes for the targeted attack and then taking the easiest case. We can also try this for CW, but considering the computation cost of CW is so high already, multiplying it by 100 would really make this accuracy-efficiency tradeoff not worthwhile. On the other hand, as our paper intends to show, MarginAttack achieves a much better accuracy-efficiency tradeoff.\\n\\n3. Thank you for pointing out the ordering issue in Table 3. It is not meant to create any false perceptions -- the differences in the numbers are quite distinct. But you are right; for better formality, we will adjust it in our updated version.\\n\\nThank you again for your comments!\"}", "{\"comment\": \"This is a very nice attack. It appears simple to implement and is formally well-justified.\\n\\nA few minor questions and comments:\\n\\nThe attack approach seems related to the Decision Attack (https://openreview.net/forum?id=SyZI0GWCZ from ICLR last year), am I right in this understanding?\\n\\nDo you know why CW performs worse than DeepFool on ImageNet (Figure 3 upper right)? The Decision Attack paper finds that CW performs better than DeepFool on ImageNet (as does the original CW attack paper).\\n\\nTable 3 is mildly deceptive in that the \\\"Ours\\\" row is at the bottom but is not the fastest, whereas in Table 2 it is in the correct (ordered) position.\", \"title\": \"Minor comments\"}" ] }
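Since much of the thread above turns on what the CW binary search over the Lagrange multiplier actually does, here is a minimal Python sketch of the per-sample search. attack_once is a hypothetical helper standing in for the inner CW gradient descent, and the tenfold growth of the constant when no upper bound is known yet is an assumption modeled on the CleverHans behavior discussed above, not a verbatim reimplementation.

import numpy as np

def cw_binary_search(attack_once, x, steps=10, c_init=0.1):
    # attack_once(x, c) is a hypothetical helper: it runs the inner CW
    # optimization with trade-off constant c and returns (success, adv_example).
    # Each input sample has its own optimal c, which is why the search cannot
    # be replaced by tuning one global hyperparameter beforehand.
    lo, hi, c = 0.0, np.inf, c_init
    best = None
    for _ in range(steps):
        success, adv = attack_once(x, c)
        if success:
            best, hi = adv, c  # attack succeeded: shrink c to reduce distortion
            c = (lo + hi) / 2.0
        else:
            lo = c  # attack failed: put more weight on the misclassification term
            c = c * 10.0 if np.isinf(hi) else (lo + hi) / 2.0
    return best  # None if no adversarial example was found within the budget

Because the optimal constant differs per sample, cutting the search down to one step (CW bin1 above) is not the same as pre-tuning a single constant, which is consistent with the failure cases reported in the thread.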
ryxHii09KQ
In Your Pace: Learning the Right Example at the Right Time
[ "Guy Hacohen", "Daphna Weinshall" ]
Training neural networks is traditionally done by sequentially providing random mini-batches sampled uniformly from the entire dataset. In our work, we show that sampling mini-batches non-uniformly can both enhance the speed of learning and improve the final accuracy of the trained network. Specifically, we decompose the problem using the principles of curriculum learning: first, we sort the data by some difficulty measure; second, we sample mini-batches with a gradually increasing level of difficulty. We focus on CNNs trained on image recognition. Initially, we define the difficulty of a training image using transfer learning from some competitive "teacher" network trained on the Imagenet database, showing improvement in learning speed and final performance for both small and competitive networks, using the CIFAR-10 and the CIFAR-100 datasets. We then suggest a bootstrap alternative to evaluate the difficulty of points using the same network without relying on a "teacher" network, thus increasing the applicability of our suggested method. We compare this approach to a related version of Self-Paced Learning, showing that our method benefits learning while SPL impairs it.
[ "Curriculum Learning", "Transfer Learning", "Self-Paced Learning", "Image Recognition" ]
https://openreview.net/pdf?id=ryxHii09KQ
https://openreview.net/forum?id=ryxHii09KQ
ICLR.cc/2019/Conference
2019
{ "note_id": [ "H1lqyTzegV", "HJle-twYRQ", "rJxxa4IY3m", "BklIbzwd3X", "rkgoETj43m", "SylJfSAKsQ", "HJl8pfYo9Q", "r1xQui-t97" ], "note_type": [ "meta_review", "official_comment", "official_review", "official_review", "official_review", "official_comment", "official_comment", "comment" ], "note_created": [ 1544723682428, 1543235832189, 1541133496323, 1541071357972, 1540828466907, 1540117766652, 1539179198465, 1539017579168 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper618/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper618/Authors" ], [ "ICLR.cc/2019/Conference/Paper618/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper618/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper618/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper618/Authors" ], [ "ICLR.cc/2019/Conference/Paper618/Authors" ], [ "(anonymous)" ] ], "structured_content_str": [ "{\"metareview\": \"This paper presents an interesting strategy of curriculum learning for training neural networks, where mini-batches of samples are formed with a gradually increasing level of difficulty.\\nWhile reviewers acknowledge the importance of studying the curriculum learning and the potential usefulness of the proposed approach for training neural networks, they raised several important concerns that place this paper bellow the acceptance bar: (1) empirical results are not convincing (R2, R3); comparisons on other datasets (large-scale) and with state-of-the-art methods would substantially strengthen the evaluation (R3); see also R2\\u2019s concerns regarding the comprehensive study; (2) important references and baseline methods are missing \\u2013 see R2\\u2019s suggestions how to improve; (3) limited technical novelty -- R1 has provided a very detailed review questioning novelty of the proposed approach w.r.t. Weinshall et al, 2018. \\nAnother suggestions to further strengthen and extend the manuscript is to consider curriculum and anti-curriculum learning for increasing performance (R1). \\nThe authors provided additional experiment on a subset of 7 classes from the ImageNet dataset, but this does not show the advantage of the proposed model in a large-scale learning setting. \\nThe AC decided that addressing (1)-(3) is indeed important for understanding the contribution in this work, and it is difficult to assess the scope of the contribution without addressing them.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Meta-Review\"}", "{\"title\": \"Thank you for your reviews!\", \"comment\": \"Following the reviews, we've added a section showing that curriculum by transfer achieves similar qualitative improvements to network generalization also when trained on a subset of the popular ImageNet dataset.\\nWe've included a broader review of the relevant literature, emphasizing the difference between previous works and ours.\"}", "{\"title\": \"a good start\", \"review\": \"In my opinion this paper is generally of good quality and clarity, modest originality and significance.\", \"strengths\": [\"The experiments are very thorough. Hyperparameters were honestly optimized. The method does show some modest improvements in the experiments provided by the authors.\", \"The analysis of the results is quite insightful.\"], \"weaknesses\": [\"The experiments are done on CIFAR-10, CIFAR-100 and subsets of CIFAR-100. 
These were good data sets a few years ago and still are good data sets to test the code and sanity of the idea, but concluding anything strong based on the results obtained with them is not a good idea.\", \"The authors claim the formalization of the problem to be one of their contributions. It is difficult for me to accept it. The formalization that the authors proposed is basically the definition of curriculum learning. There is no novelty about this.\", \"The proposed method introduces a lot of complexity for very small gains. While these results are scientifically interesting, I don't expect it to be of practical use.\", \"The results in Figure 3 are very far from the state of the art. I realize that they were obtained with a simple network; however, showing improvements in this regime is not that convincing. Even the results with the VGG network are very far from the best available models.\", \"I suggest checking the papers citing Bengio et al. (2009) to find lots of closely related papers.\", \"In summary, it is not a bad paper, but the experimental results are not sufficient to conclude that much. Experiments with ImageNet or some other large data set would be advisable to increase the significance of this work.\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"lacking convincing comparison\", \"review\": \"This paper studies an interesting and meaningful topic: what is the potential of curriculum learning (CL) in training DNNs? The authors decompose CL into two main parts: a scoring function and a pacing function. For both parts, several candidate functions are proposed and verified. The paper is presented quite clearly and contributes to a better understanding of CL in the DNN literature.\\n\\nHowever, I have several concerns about the status of this paper.\\n\\nFirst, quite a few important related works are missed by the authors. To name a few, [1] studies designing a data curriculum by predictive uncertainty. [2,3] study how to derive a data-driven curriculum along NN training. In particular, the objective of [2] is exactly \\u201clearning the right examples at the right time\\u201d. All these three papers focus on, or at least talk about, neural network training. Unfortunately, none of them are compared with, or even referenced. \\n\\nSecond, although a comprehensive study of different curriculum strategies is given, I found it largely unconvincing. I tried hard to discover a *detailed accuracy number on a benchmark dataset with unchanged setting* but found only case 4. By \\u2018unchanged\\u2019 I mean it is not a subpart of the whole dataset, and does not use a rarely seen NN architecture. If it is such a \\u2018changed\\u2019 setting, the results are largely unconvincing since we do not know what the exact baseline is. For the only \\u2018unchanged\\u2019 setting 4 including VGG on CIFAR100, unfortunately the results seem not good (Fig 4a). I understand that some previous work such as the cited [Weinshall et al. 2018] also used the same setting; however, it does not mean such settings give *clear and convincing* results of whether CL plays a significant role in training DNNs. Furthermore, I also expect comparisons in terms of wall-clock time (including all your bootstrapping training time), not merely batch numbers. \\n\\n[1] Chang, Haw-Shiuan, Erik Learned-Miller, and Andrew McCallum.
\\\"Active Bias: Training More Accurate Neural Networks by Emphasizing High Variance Samples.\\\" NIPS. 2017.\\n\\n[2] Fan, Y., Tian, F., Qin, T., Li, X. Y., & Liu, T. Y. Learning to Teach. ICLR 2018\\n\\n[3] Jiang, Lu, et al. \\\"MentorNet: Learning data-driven curriculum for very deep neural networks on corrupted labels.\\\" ICML. 2018.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Investigates an interesting problem but has limited novelty and presents limited insights\", \"review\": \"The problem of interest in this paper is Curriculum Learning (CL), in the context of deep learning in particular. CL refers to learning a non-random order of presenting the training examples to the learner, typically with easier examples presented before difficult ones, to guide learning more effectively. This has been shown to both speed up learning and lead to better generalization, especially for more challenging problems. In this paper, they claim that their contribution is to decompose the problem of CL into learning two functions: the scoring function and the pacing function, with the role of the former being to estimate the difficulty of each training example and the latter to moderate the schedule of presenting increasingly more challenging examples throughout training.\\n\\nOverall, I found it hard to understand from reading the paper what exactly is new versus what is borrowed from previous work. In particular, after reading Weinshall et al, I realized that they have already proposed a number of things that are experimented with here: 1) they proposed the approach of transfer learning from a previously-trained network as a means of estimating the \\u2018scoring function\\u2019. 2) they also distinguish between learning to estimate the difficulty of examples, and learning the schedule of decreasing difficulty throughout learning, which is actually stated here as the contribution of this paper. In particular, in Section 3 of Weinshall et al, there is a sub-section named \\u201cscheduling the appearance of training examples\\u201d where they describe what in the terminology of this paper would be called their pacing function. They experiment with two variants: fixed, and adaptive, which are very similar to two of the pacing functions proposed here.\", \"bootstrapping\": \"A component of this work that didn\\u2019t appear in Weinshall et al is the bootstrapping approach to estimating the scoring function. In general, this involves using the same network that is being trained on the task to estimate the difficulty of the training examples. The authors explain that there are two ways to do this: estimate how easy each training example is with respect to the \\u2018current hypothesis\\u2019 (the weights of the network at the current step), and with respect to the \\u2018final hypothesis\\u2019, which they estimate, if I understand correctly, as the network at the end of training. The latter would necessitate first training the network in the standard way, and then using it to estimate how easy or hard each example is, and using those estimates to re-train the network from scratch using that curriculum. They refer to the former as self-paced learning and to the latter as self-taught learning. I find these names confusing in that they don\\u2019t really convey what the difference is between the two. Further, while self-paced learning has been studied before (e.g.
Kumar et al), I\\u2019m not sure about self-taught learning. Is this a term that the authors here coined? If not, it would be useful to add a reference. \\n\\nUsing easy / hard examples as judged by the current / final hypothesis:\\nWhen using the current hypothesis, under some conditions, Weinshall et al showed that choosing harder examples is actually more beneficial than easy examples, similar in spirit to hard negative mining. On the other hand, when using the final hypothesis to estimate examples\\u2019 difficulty, using a schedule of increasing difficulty is beneficial. Based on this, I have two comments: 1) It would therefore be useful to implement a version that uses the current hypothesis to estimate how easy each example is (like the self-paced scoring function) but then inverts these estimates, in effect choosing the most challenging instead of the easiest ones as is done for anti-curriculum learning. This would be a hybrid between the current self-paced scoring function and the anti-curriculum scoring function that would essentially implement the hard negative mining technique in this context. 2) It would be useful to comment on the differences between the self-paced scoring function used here, and that in Kumar et al. In particular, in this case using a curriculum based on this scoring function seems to harm training but in Kumar et al, they showed it actually increased performance in a number of different cases. Why does one work but the other doesn\\u2019t?\", \"experiments\": \"The experiments are presented on a subset of 5 classes from CIFAR-10 (also used by Weinshall et al.), but also on the full CIFAR-10 and CIFAR-100 datasets. They used both a small CNN (same as in Weinshall et al) as well as a VGG architecture. Overall, their results are comparable to what was previously known: using a curriculum computed by transfer leads to improved learning speed and final performance (though sometimes very slightly) compared to the standard training, and the training with a random curriculum. Further, the benefit is larger when the task is harder (as measured by the final vanilla-trained performance). Computing the distances between the gradients obtained from using a curriculum (via the transfer scoring function) and no curriculum confirms that these two training setups indeed drive the learning in different directions; an analysis similar to Weinshall et al. Also, since, as was previously known and they also observe, the benefit of CL is larger at the beginning of training, they propose a single-step pacing function that performs similarly to other pacing functions while being simpler and more computationally efficient. The idea is to decrease only once the proportion of easy examples used in mini-batches, via a step function. Therefore at the start many easy examples are used, and after this threshold is surpassed, few easy examples are used.\\n \\nOverall, I don\\u2019t feel the contribution of this paper is large enough to recommend acceptance. The main points that guided this decision are: \\n1) The relationship with previous work is not clear. In particular, Weinshall et al seem to have already proposed a few components that are claimed to be the contribution of this paper, as elaborated on above. The authors should mention that the transfer scoring function was borrowed from Weinshall et al, clarify the differences between their pacing functions and those in Weinshall et al., etc.
\\n2) The usefulness of using easy or hard examples when consulting the current or final hypothesis is discussed but not explored sufficiently. An additional experiment is proposed above to add another \\u2018data point\\u2019 to this discussion. \\n3) Self-paced learning is presented as something that doesn\\u2019t work and wasn\\u2019t expected to work. However, in the past successes were shown with this method, so it would be useful to clarify the difference in setup, and justify this difference.\\n4) It seems that the experiments resulted in similar conclusions to what was already known. While it\\u2019s useful to confirm these findings on additional datasets, I didn\\u2019t feel that there was a significant insight gained from them.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Additional empirical results on ImageNet\", \"comment\": \"We've added empirical results on a subset of 7 classes from ImageNet, using the same architecture as cases 1, 2, 3 for both the curriculum by transfer and vanilla test cases.\", \"the_results_show_same_qualitative_behavior_as_the_previous_cases\": \"curriculum by transfer reaches higher accuracy faster and converges to a better final solution.\\n\\nThe paper will be updated once we are allowed to modify it.\"}", "{\"title\": \"Answers\", \"comment\": \"Thank you for your comment.\\n\\n1. Case 2 and Case 3 are performed on the same moderate-size network described in Case 1, which was not constructed with CIFAR-10 and CIFAR-100 in mind - hence the low performance. In Case 4, we used a competitive architecture for CIFAR-100, with much better performance (results can be seen in Fig. 4). No data augmentation was used in any of the experiments described in the paper.\\n\\n2. In all cases, the learning rate was decreased exponentially every fixed number of iterations (similarly to the way we increase the data size, depicted in Fig. 1). The use of a cyclic learning rate (as in Fig. 10 in the appendix) was done as a control measure only, with a cyclic period calculated as suggested in Smith (2017). We will add to the appendix graphs that qualitatively show these learning rate scheduling functions, for clarification.\"}", "{\"comment\": \"1. The results on CIFAR-10 and CIFAR-100 are very low, as shown in Case 2 and Case 3. Do you perform data augmentation in your experiments? If you do, does the score of an image change?\\n2. The learning rate schedule should be explained in more detail. How long is the cyclic period? If you can show the learning rate and your pacing function in one figure, that would be great.\", \"title\": \"This is meaningful work, but there are a few questions\"}" ] }
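The scoring-function / pacing-function decomposition debated throughout this record lends itself to a compact sketch. The following Python snippet is a hypothetical illustration only: the function names, the exponential and single-step pacing forms, and all constants are assumptions of this note, not the authors' code. It shows how a per-example difficulty score and a pacing schedule combine to select mini-batches.

```python
import numpy as np

def exponential_pacing(step, start_frac=0.3, growth=1.9, every=100):
    """Fraction of the (easiest-first) data available at `step`.
    The exponential form and the constants are illustrative assumptions."""
    return min(1.0, start_frac * growth ** (step // every))

def single_step_pacing(step, start_frac=0.3, switch_at=500):
    """A single switch from an easy subset to the full data, mirroring the
    single-step pacing function discussed in the reviews above."""
    return start_frac if step < switch_at else 1.0

def curriculum_batches(x, y, scores, pacing, batch_size=64, steps=1000, seed=0):
    """Yield mini-batches drawn from the easiest pacing(step) fraction.
    `scores` are per-example difficulty estimates (lower = easier), e.g.
    taken from a pretrained network's confidence (curriculum by transfer)
    or from the network being trained (bootstrapping)."""
    rng = np.random.default_rng(seed)
    order = np.argsort(scores)  # easiest examples first
    for step in range(steps):
        n_avail = min(len(x), max(batch_size, int(pacing(step) * len(x))))
        idx = rng.choice(order[:n_avail], size=batch_size)
        yield x[idx], y[idx]
```

With `pacing=lambda step: 1.0` the sampler degenerates to uniform sampling over the whole dataset, which is what makes the vanilla-versus-curriculum comparisons requested by the reviewers straightforward to run.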
rkeSiiA5Fm
Deep Learning 3D Shapes Using Alt-az Anisotropic 2-Sphere Convolution
[ "Min Liu", "Fupin Yao", "Chiho Choi", "Ayan Sinha", "Karthik Ramani" ]
The ground-breaking performance obtained by deep convolutional neural networks (CNNs) for image processing tasks is inspiring research efforts attempting to extend it to 3D geometric tasks. One of the main challenges in applying CNNs to 3D shape analysis is how to define a natural convolution operator on non-Euclidean surfaces. In this paper, we present a method for applying deep learning to 3D surfaces using their spherical descriptors and alt-az anisotropic convolution on the 2-sphere. A cascaded set of geodesic disk filters rotates on the 2-sphere and collects spherical patterns, and so extracts geometric features for various 3D shape analysis tasks. We demonstrate theoretically and experimentally that our proposed method has the potential to bridge the gap between 2D images and 3D shapes with the desired rotation equivariance/invariance, and its effectiveness is evaluated in applications of non-rigid/rigid shape classification and shape retrieval.
[ "Spherical Convolution", "Geometric deep learning", "3D shape analysis" ]
https://openreview.net/pdf?id=rkeSiiA5Fm
https://openreview.net/forum?id=rkeSiiA5Fm
ICLR.cc/2019/Conference
2019
{ "note_id": [ "H1g_1AiaJV", "Hyea8SUc0Q", "H1g9cRrcRm", "rJesVzvgAm", "HJegyc6c6m", "r1lHQJ1cTQ", "HyV6CRt6m", "BkgnVCCK6Q", "S1x-pTCtTX", "B1ebDaAFTQ", "rkl_6hAY6X", "HygPsSJLpQ", "HylYyr91T7", "rJeeoDOinQ" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544564191667, 1543296341327, 1543294610139, 1542644275214, 1542277591523, 1542217501420, 1542217403516, 1542217267803, 1542217144707, 1542217049069, 1542216896084, 1541957022670, 1541543137507, 1541273496348 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper617/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper617/Authors" ], [ "ICLR.cc/2019/Conference/Paper617/Authors" ], [ "ICLR.cc/2019/Conference/Paper617/Authors" ], [ "ICLR.cc/2019/Conference/Paper617/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper617/Authors" ], [ "ICLR.cc/2019/Conference/Paper617/Authors" ], [ "ICLR.cc/2019/Conference/Paper617/Authors" ], [ "ICLR.cc/2019/Conference/Paper617/Authors" ], [ "ICLR.cc/2019/Conference/Paper617/Authors" ], [ "ICLR.cc/2019/Conference/Paper617/Authors" ], [ "ICLR.cc/2019/Conference/Paper617/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper617/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper617/AnonReviewer3" ] ], "structured_content_str": [ "{\"metareview\": \"Strengths:\\nWell-written paper on a new kind of spherical convolution for use in spherical CNNs.\\nEvaluated on rigid and non-rigid 3D shape recognition and retrieval problems.\\nPaper provides a solid strategy for efficient GPU implementation.\", \"weaknesses\": \"There was some misunderstanding about the properties of the alt-az convolution detected by one of the reviewers, along with some points needing clarification. However, discussion of these issues appears to have resolved them.\", \"contention\": \"The weaknesses above were discussed in some detail, but the procedure was not particularly contentious and the discussion unfolded well.\\n\\nAll reviewers rate the paper as accept; it clearly provides value to the community and should therefore be accepted.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Solid technical contributions and valuable insights for spherical convolutions\"}", "{\"title\": \"comparison experiments\", \"comment\": \"We couldn't finish the direct comparison using Fourier convolution, and will probably work in the future on comparing the three spherical convolution methods in both the spatial domain and the Fourier domain. We did adjust our network structure to a smaller number of parameters (1.38M) and used Cohen's 6 channels as input, and we ended up with a result very similar to the one we reported in Table 3. One advantage is that our network converges very fast; it takes less than one day to train on the Shrec'17 dataset.\"}", "{\"title\": \"Summary of changes made in the new version\", \"comment\": \"We thank the reviewers again for the good comments and suggestions, which certainly helped us in improving the paper.\", \"here_we_list_the_main_revisions_made_in_the_paper\": \"\", \"page_3\": \"alt-az rotation: we changed the original term \\u201calt-az rotation group\\u201d into alt-az rotation set $\\\\mathfrak{A}$. A footnote is added to treat the ill-definition at the poles.
Eqn (5) is deleted since it is not very relevant.\", \"page_4\": \"rotation equivariance property of alt-az convolution: we revised this property; alt-az spherical convolution is azimuth rotation equivariant. See the last paragraph and Eqn. (10). The proof is added in Appendix A.\", \"page_5\": \"We moved the previous paragraph on global max pooling and rotation invariance to Sec 4: definitions for local spherical max pooling and global spherical max pooling are added. We showed that with SO(2) rotation augmentation about an arbitrary axis, our network can be generalized to arbitrary unseen SO(3) orientations. Appendix B is added for a detailed discussion.\", \"experiments\": \"Table 1: we added a column SO(2)(x), corresponding to data augmentation by rotating about the X-axis to 36 angles. The new results are analyzed.\\nTable 2: we reinterpreted the result of the perturbation test.\", \"added_references\": \"\", \"manifold_learning\": \"Monti et al. (2017)\", \"equivariant_networks\": \"Weiler et al. (2018) Qiu et al. (2018) Kondor & Trivedi (2018)\", \"icosahedron_based_spherical_image_analysis\": \"Schroder and Sweldens (1995)\"}", "{\"title\": \"Thanks for the comments\", \"comment\": \"Q1: Yes, we have confirmed with a topologist that the set of alt-az rotations (with or without extra constraints) is topologically equivalent to the sphere. We will use \\\\mathfrak{A} to denote the set of alt-az rotations.\\n\\nQ2: We did make a mistake in the previous proof. Q^{-1}R is not an Alt-Az rotation. It is one only when Q is an azimuth rotation (about the Z-axis). This confirms your previous comment that alt-az spherical convolution \\\"*is* equivariant to rotations in the subgroup SO(2) of rotations around the Z-axis\\\". We'll update the paper accordingly.\\n\\nQ5: Thank you for the good suggestion; we are trying very hard to see if we can finish the experiment.\"}", "{\"title\": \"Response\", \"comment\": \"Q1\\nMaybe with your proposed constraint we can say that the Alt-Az rotations are topologically equivalent to the sphere, but I'm not sure. You'd have to ask a topologist to be sure. Or try mathoverflow or similar websites. Another alternative, which is safer and just as good, is to just refer to \\\"the set of Alt-Az rotations\\\". You could also do something like define this set as \\\\mathfrak{A} and then say \\\"let R in \\\\mathfrak{A}\\\".\\n\\nQ2\", \"i_think_there_is_an_error_in_the_proposed_proof_of_equivariance\": \"in the second to last step, you assume that the last integral equals (h \\\\star f)(Q^{-1}R). But (h \\\\star f) is a function on the set of Alt-Az rotations, while Q^{-1}R is not an Alt-Az rotation (even though both Q and R are).\\n\\nOne way to see that a proof is impossible is that equivariance to a single Alt-Az rotation implies equivariance to any composition of Alt-Az rotations, i.e. any element in SO(3). Suppose we have a map f that is equivariant to Alt-Az rotations R and R'. Then we have f(RR'x) = Rf(R'x) = RR'f(x), so f is also equivariant to RR', which is not in Alt-Az.\\n\\nQ5\\nThis is a good point; there are multiple differences between the methods: grid type, Fourier/spatial, and isotropic, alt-az, or SO(3) convolution. Properly executing this experiment would be a lot of work, so I think it would be too much to ask for. But perhaps you can just show how your method compares to the previous methods as they were presented, i.e. with a lat-lon grid and Fourier convolution.
If your method works better (which I would expect), we won't know how important each of the changes is, but at least then we know how the methods compare.\"}", "{\"title\": \"Response to AnonReviewer3 (2/2)\", \"comment\": \"Q5: It would be nice to see a more direct comparison between the three definitions of spherical convolution (general SO3, isotropic S2, and anisotropic S2)\\n\\nA5: The two related papers (Cohen et al 2018 for general SO3, and Esteves et al 2018 for isotropic S2) both use a lat-lon grid and Fourier-domain convolution, while ours uses an icosahedron-sphere grid and direct spherical-domain convolution. The sampling of the input spherical image and the use of filters are totally different. We think a direct comparison should be done in one of the following ways: (a) perform the three types of spherical convolution all using the icosahedron-sphere grid and then convolve in the spherical domain; (b) perform the three types of spherical convolution all using the lat-lon grid and convolve in the Fourier domain.\\n\\nFor the first type of direct comparison, to implement isotropic spherical convolution (Type II), we should make the geodesic disc filter share an identical weight along the angular direction. To implement a general SO(3) spherical convolution, we should add a rotation degree of freedom to our disc filter. We are conducting this experiment and, if time and the paper's page limit allow, we will report the comparison result in the revised version. Otherwise, we will put it into our future work.\\n\\nFor the second type of direct comparison, we need to conduct alt-az spherical convolution in the Fourier domain; this is possible by determining the spherical harmonic coefficient $\\\\langle g_0, Y_l^m \\\\rangle$ for the alt-az convolution in terms of the spherical harmonic coefficients of the input spherical signal $f$ and the filter $h$. This comparison requires re-designing our network and we cannot finish it within the rebuttal period, so we\\u2019ll leave it for future work.\\n\\nQ6: Initially, I was a bit puzzled about why SO(3) augmentation seems to reduce accuracy in table 1. I think this is because SO(3) augmentation actually makes the classification problem harder if the input is initially aligned. Some more explanation / discussion would be good.\\n\\nA6: Theoretically, our method will be rotation invariant with only AZ rotation; it will be fully rotation invariant with SO(2) rotation augmentation about an arbitrary axis. In Table 1, we believe alt-az augmentation performs better because it contains more training data. SO(3) augmentation underperforms the AZ augmentation because several random SO(3) rotation augmentations might not be able to cover all the relative rotations w.r.t. the filter's orientation (see appendix).\\n\\nQ7: It would be nice to explain the spherical parameterization in more detail. Is this operation itself rotation equivariant?\\n\\nA7: Due to the page limit of the conference paper, we could not explain the spherical parameterization method in detail. This operation is theoretically rotation equivariant. Spherical parameterization establishes a map that transforms the points of a closed surface into points on the unit sphere. A good spherical mapping for a closed surface should satisfy the following properties: bijective mapping and least distortion.
Bijective mapping is the most important but most difficult property to achieve in this process; it implies that the resulting map is one-to-one, fold-free, and therefore feature-preserving (information lossless). Least distortion seeks a good sampling rate such that interesting features of the model receive enough real estate on the sphere in order to be accurately sampled. We achieved the bijective mapping by adapting a coarse-to-fine strategy with minor modifications (see http://hhoppe.com/proj/sphereparam/). The map distortion is minimized using the authalic parameterization proposed in Sinha et al 2016. This process is rotation equivariant because the initial bijective mapping depends on the object orientation and the authalic remeshing does not change the orientation of the spherical embeddings.\\n\\nSpherical parameterization is a good way to retain the geometric and topological information of the original shapes (compared to the spherical projection method), but currently it works only for genus-0 closed objects; extending it to 3D shapes with arbitrary topology is still an unsolved problem. That is why we could not adapt this method for datasets such as ModelNet and Shrec\\u201917, which contain 3D objects with arbitrary topology.\\n \\n(Q8) Other typos and minor issues\\n \\nWe will correct all the typos and other minor issues in the revised paper; thank you again for the detailed review. We really appreciate your help.\"}", "{\"title\": \"Response to AnonReviewer3 (1/2)\", \"comment\": \"Thank you very much for your encouraging review and helpful comments. We will make revisions to address the several points you have raised in your review. Below we first address the main concerns.\\n\\n(Q1) \\u201cAlt-az\\u201d rotation is not a group.\\n\\n(A1) Thank you for pointing this out. You are correct. The Alt-az rotation, according to our definition, is not a group. SO(3) is a group which can be parametrized by a 3-sphere. But when we remove one parameter from it, it is not a group anymore mathematically; the composition of two alt-az rotations becomes a general rotation in SO(3). In the new revision, we will use the term alt-az rotation in the \\u201cquotient SO(3)/SO(2)\\u201d instead of alt-az rotation group.\\nMoreover, the quotient SO(3)/SO(2) is isomorphic to $S^2$ and, to avoid the ill-definition at the two poles (the two degenerate points), we will add a constraint to the alt-az rotation, i.e. $\\\\phi=0$ if $\\\\theta=0$ or $\\\\theta=\\\\pi$. This is because, when the altitude rotation is zero or $\\\\pi$, the azimuth rotation is meaningless in an alt-az rotation and is therefore fixed as zero. If $\\\\theta=0$ or $\\\\theta=\\\\pi$ and $\\\\phi \\\\neq 0$, this rotation belongs to the azimuthal rotations in the SO(2) group.\\n\\n(Q2) Equivariance property of the Alt-az convolution\\nWe think we can still have the equivariance property, but only for a single alt-az rotation. Notice that the definition of alt-az convolution does not use any composite rotation.
Here is our tentative proof:\\n\\nUnder the definition of alt-azimuth anisotropic convolution and using the unitary property (5) of rotation operators, we have (assume the number of channels K=1 for simplicity, and assume Q and R are both alt-az rotations):\\n************************************************\\n\\\\begin{equation}\\n\\\\begin{aligned}\\n& (h \\\\star D_{Q} f) (R) \\\\\\\\\\n& = \\\\int_{S^2}(D_Rh)(\\\\hat{u})f(Q^{-1}\\\\hat{u})ds(\\\\hat{u}) \\\\\\\\\\n& =\\\\int_{S^2}h(R^{-1}\\\\hat{u})f(Q^{-1}\\\\hat{u})ds(\\\\hat{u}) \\\\\\\\\\n& =\\\\int_{S^2}h(R^{-1}Q\\\\hat{u})f(\\\\hat{u})ds(\\\\hat{u}) \\\\\\\\\\n& =\\\\int_{S^2}h((Q^{-1}R)^{-1}\\\\hat{u})f(\\\\hat{u})ds(\\\\hat{u}) \\\\\\\\\\n& =(h \\\\star f)(Q^{-1}R) = D_{Q}( h \\\\star f)(R) \\n\\\\end{aligned}\\n\\\\end{equation}\\n**************************************************\\nThis means that for a single alt-az rotation of the input spherical image, the output of a convolution layer will rotate in the same way. Although the property doesn\\u2019t hold if one performs multiple alt-az rotations of the input spherical image, it is still valuable because we assume a different SO(3) orientation of an input 3D shape comes from a composite of an azimuthal rotation and an alt-az rotation; the azimuthal rotation is treated by data augmentation and the single alt-az rotation is treated by the network equivariance and invariance.\\n\\n(Q3) alt-az convolution is not well defined on the south pole\\n\\n(A3) Yes, we agree that our original definition of alt-az convolution is not well defined on both the north and south poles. Therefore, in the new revision, we will add the constraints to the definition of alt-az rotation and make it correspond one-to-one to the set of points on $S^2$. See A1.\\n\\n(Q4) The paragraph motivating the alt-az convolution on page 4 is not very clear.\\n\\n(A4) Thanks for the comments; as you suggested, we will rewrite this paragraph in the new version, and acknowledge the importance and effectiveness of the recent work on group equivariance and rotation invariant networks.\"}", "{\"title\": \"Response to AnonReviewer1\", \"comment\": \"Thank you for your positive review; we address your reasonable concern about the applications of the proposed method below:\", \"q1\": \"It would be nice to see a better case made for spherical convolutions within the experimental section.\", \"a1\": \"We believe the best case is the non-rigid shape classification and retrieval. Our method achieves a state-of-the-art classification and retrieval performance on Shrec\\u201911. The good performance on non-rigid shapes is mainly due to the use of the bijective spherical parameterization method, which obtains the input spherical image without topological information losses. When using the spherical projection method to represent a 3D shape, there will be information loss if the object is non-convex. The lossy input affects the performance of rigid shape analysis to some extent.\", \"q2\": \"The experiments on SHREC17 show all three spherical methods under-performing other approaches. It leaves it unclear to the reader when someone should choose to utilize a spherical method or when the proposed method would then be preferred compared to other spherical methods.
Is there a task that this representation significantly outperforms other spherical methods and non-spherical methods?\", \"a2\": \"The experiments on SHREC17 do show all three spherical methods slightly under-performing some other state-of-the-art approaches. We believe this is mainly due to the information losses introduced in the spherical projection process, which retains only the geometric information of the convex portion. Future improvements can be added by using: (1) less lossy input in spherical projection methods (e.g. on top of SEF and EDF, we can also add other statistical information, such as the minimum distance of intersection, mean distance of intersections, or standard deviation of the intersections, to reduce the information losses); (2) extending the spherical parameterization method to general 3D shapes. Currently, the spherical parameterization method only works for genus-0 closed objects. The 3D models presented in ModelNet and Shrec\\u201917 are of arbitrary genus, which prevents us from using the spherical parameterization method. Generalization of spherical parameterization methods to objects with arbitrary topology will be part of our future work.\\n\\nCompared to non-spherical methods, a spherical image is one of the most compact representations for 3D shape analysis; the spherical convolution methods rely on no data augmentation (for Type I and Type II) or reduced data augmentation (for Type III, only SO(2) rotation augmentation is required). Other non-spherical methods (such as volumetric or multi-view based methods) can only be generalized to unknown orientations using SO(3) rotation augmentations, and their representations of 3D shapes are either too sparse (voxel models) or too redundant (multi-view projections).\\n\\nCompared to the SO(3) spherical convolution method (Cohen et al. 2018), our network is computationally efficient (in terms of network model), but we have to admit that this is at the price of required data augmentation. Another advantage is that our network allows local filters and local-to-global multi-level spherical feature extraction. The other two spherical convolution methods (Cohen et al 2018 and Esteves et al 2018) use a lat-lon grid and conduct convolution in the Fourier space, which has a common disadvantage: the Fourier transform does not support local spherical filters. Another disadvantage is the use of the lat-lon grid, which introduces unevenness of the perception field (due to the high resolution near the poles, and low resolution at the equator).\\n \\nExperimentally, we compared the three spherical convolution methods in Table 3 using the Shrec\\u201917 perturbed shape retrieval experiment. Cohen et al 2018 and ours obtain similar performances because both methods use anisotropic filters; the former achieves rotational invariance using SO(3) rotations of the filters, while ours achieves rotational invariance using alt-az rotation of the filter and SO(2) rotation augmentation of the input shapes. As expected, anisotropic filters perform better than the isotropic filters proposed in Esteves et al 2018, which limit the model capacity.\", \"q3\": \"Is there a specific useful application where spherical methods in general outperform other approaches?\", \"a3\": \"As mentioned in Cohen et al 2018, perhaps the most exciting future application of the Spherical CNN is in omnidirectional vision.
Although very little omnidirectional image data is currently available in public repositories, the increasing prevalence of omnidirectional sensors in drones, robots, and autonomous cars makes this a very compelling application of our work. Omnidirectional vision is a better application to show the strength of the spherical convolution method.\"}", "{\"title\": \"response to AnonReviewer2 (3/3)\", \"comment\": \"Q14: Comparison with other spherical methods (Cohen et al 2018), or manifold-based methods (Monti et al 2018)? Illustrating the pros and cons with these respective state-of-the-art?\\n\\nA14: Compared to the SO(3) spherical convolution method (Cohen et al. 2018), our network is computationally efficient (in terms of network model), but we have to admit that this is at the price of required data augmentation. Another advantage is that our network allows local filters and local-to-global multi-level spherical feature extraction. The other two spherical convolution methods (Cohen et al 2018 and Esteves et al 2018) use a lat-lon grid and conduct convolution in the Fourier space, which has a common disadvantage: the Fourier transform does not support local spherical filters. Another disadvantage is the use of the lat-lon grid, which introduces unevenness of the perception field (due to the high resolution near the poles, and low resolution at the equator).\\n\\nCompared to ours, the manifold-based methods, such as Masci et al 2015, Monti et al 2017 [1] and Boscaini et al 2016 [2], rely on a local re-parameterization with polar coordinates. The patch operator is topologically similar to the geodesic disc filter we used in our paper. Again, there is a decision to be made on how to handle the rotation of the patch operator. Masci et al 2015 allows the rotation and uses a per-layer max pooling, while Boscaini et al 2016 aligns the patch operators to a fixed direction (max curvature). This is similar to the alt-az spherical convolution we proposed in the paper. The spherical convolution-based methods convert a 3D shape into a spherical image and do the convolution in the spherical domain. The manifold-based methods do the convolution directly on the original manifold surface. Comparing the two methods, the spherical method does not require a local reparameterization before each convolution layer and is thus more computationally efficient. The limitation of the spherical method is that, as you pointed out earlier, distortion and information loss are unavoidably introduced for 3D shapes.\\n\\nExperimentally, we compared the three spherical convolution methods in Table 3 using the Shrec\\u201917 perturbed shape retrieval experiment. Cohen et al 2018 and ours obtain similar performances because both methods use anisotropic filters; the former achieves rotational invariance using SO(3) rotations for filters, while ours achieves rotational invariance using alt-az rotation of the filter plus SO(2) rotation augmentation of the input shapes. We confirm that anisotropic filters perform better than the isotropic filters proposed in Esteves et al 2018, which limit the model capacity.\\n\\nThe manifold-based methods target the non-rigid shape analysis applications. It is hard for us at this stage to do a benchmark because our spherical parameterization, in the current implementation, only works for genus-0 objects.
We\\u2019ll leave the benchmark against other general manifold-based methods to future work.\", \"q15\": \"Improvements could be better emphasized in Fig 6, Table 3 - how is the method better than others?\", \"a15\": \"For non-rigid shapes, our method achieved a state-of-the-art classification and retrieval performance on Shrec\\u201911. For rigid shapes, we slightly underperform some other non-spherical methods. We believe this is mainly due to the lossy input with spherical projection. But we argue that a spherical image is the most compact representation for a 3D shape, relying on no data augmentation (in Type I and Type II spherical convolution) or reduced data augmentation (for Type III). Other non-spherical methods (such as volumetric or multi-view based methods) can only be generalized to unknown orientations using SO(3) rotation augmentations. Moreover, their representations of 3D shapes are either too sparse (voxel models) or too redundant (multi-view projections).\\n\\nPerhaps the most exciting future application of our work is in omnidirectional vision.\", \"references\": \"[1] F. Monti, D. Boscaini, J. Masci, E. Rodol\\u00e0, J. Svoboda, M. M. Bronstein, Geometric deep learning on graphs and manifolds using mixture model CNNs, CVPR 2017\\n[2] D. Boscaini, J. Masci, E. Rodol\\u00e0, M. M. Bronstein, Learning shape correspondence with anisotropic convolutional neural networks, NIPS 2016\"}", "{\"title\": \"Response to AnonReviewer2 (2/3)\", \"comment\": \"Q5: How robust is the convolution scheme to topological defects, such as holes, noise?\", \"a5\": \"For non-rigid shapes, we use spherical parameterization to obtain their spherical images. The current spherical parameterization method is sensitive to topological defects and works only for genus-0 objects; extending spherical parameterization from genus-0 to higher genus is very difficult and is an active research field. Spherical projection-based spherical images are more robust to topological defects. Small variations introduced by topological or geometric noise won\\u2019t affect the spherical projection much.\", \"q6\": \"Spherical images may induce parameterization distortion if using a lat-lon grid, which would require complex variable filters on the spherical image. Are these variable filters burdening the computational complexity?\", \"a6\": \"To clarify, we do not use a lat-lon grid. To avoid the distortion of the lat-lon grid and the variable filters it requires, we use an icosahedron-sphere grid, which is more homogeneous.\", \"q7\": \"How to handle the distortion induced by the spherization process?\", \"a7\": \"To represent a 3D shape on a sphere, distortion cannot be avoided. We can only assume similar objects distort in similar ways when mapped onto the sphere, which are then learned by the network.\", \"q8\": \"How to handle discontinuities around the sphere poles?\", \"a8\": \"We use the icosahedron-sphere grid; there are discontinuities (singularities) at the original 12 vertices of the icosahedron (including the two poles), where a vertex has only 5 neighbors. In the implementation, we repeat the center point twice and make the filter size identical when \\u201cshifted\\u201d onto any point on the sphere.
There are only 12 such singular points, so we believe this effect can be ignored.\", \"q9\": \"Computing geodesics may be costly - how does it impact performance?\", \"a9\": \"To clarify, we do not compute any geodesics in our method.\", \"q10\": \"Does this rely on data augmentation to cover rotation invariance of filters?\", \"a10\": \"Yes. Theoretically, our method is only azimuthal rotation invariant. An arbitrary rotation of an input shape can only be recognized with SO(2) rotation augmentation.\", \"q11\": \"Now icosahedrons are used - could the convolution work on an arbitrary mesh discretization, ranging from an ideal isoparametric sphere to a highly irregularly-triangulated mesh?\", \"a11\": \"Yes, since a resampling based on icosahedron subdivision is always conducted. This is to account for the original irregular meshing.\", \"q12\": \"The remeshing strategy to a sphere also loses information from the original mesh connectivity - For instance, links between mesh nodes on the original surface may convey important information (e.g., brain connectivity in neuroscience); remeshing to a sphere would lose such connectivity information.\", \"a12\": \"Yes, you're right. The essence of the spherical parameterization method is to retain the local connectivity information using a bijective one-to-one mapping (each vertex of the original mesh is mapped to a spherical point with the same set of neighbors). The spherical parameterization method is less lossy than the spherical projection method; the latter will, in many cases, lose the connectivity information. This is why for Shrec'11 we achieved a state-of-the-art performance, while for Shrec'17 all three spherical convolution-based methods are a little below the state-of-the-art.\", \"q13\": \"The experiments show the proposed method with several augmented approaches - How exactly are data augmented?\", \"a13\": \"Apologies for the confusion. In Table 1, the data are tested with three types of training data augmentation: (1) azimuthal rotation (SO(2)), (2) alt-az rotations (SO(3)/SO(2)) and (3) SO(3) rotation. In Table 2, the original dataset is aligned; no augmentation is conducted in this experiment. We perturb the testing data using three different types of rotation. This is to test the rotation invariance property of the proposed network. In Table 3, our result is obtained using SO(2) rotation augmentation, i.e. each model is augmented by rotating about an arbitrary axis every 60 degrees.\"}", "{\"title\": \"Response to AnonReviewer2 (1/3)\", \"comment\": \"Thank you very much for your positive feedback and constructive comments. We first make a clarification and then answer your questions.\", \"clarification\": \"We did not propose an angular max-pooling scheme; this is the major difference between ours and others which allow SO(3) rotations of filters (e.g. Cohen et al 2018, Masci et al 2015, Bronstein et al 2017). We constrain the self-rotation of a filter and enable only the alt-az rotation (\\u201cshift\\u201d) of the filter on the sphere; therefore, no angular pooling is required.\", \"q1\": \"Can this be extended to unit 2-balls?\", \"a1\": \"Yes, spherical convolution can be extended to unit 3-balls (we guess you mean the 3-ball bounded by the 2-sphere). Similar to extending convolution from $R^2$ to $R^3$, we just need to add a radial dimension to the filter.\", \"q2\": \"Isn't the \\\"alt-az rotation group\\\" the same as SO(3)?
If orientation is removed, what quotient group would this be?\", \"a2\": \"SO(3) is the group of arbitrary rotations in $R^3$. It can be described as a succession of extrinsic rotations (about the fixed axes). E.g. we use the ZYZ Euler angles: from an original orientation, first rotate by $\\\\omega$ about the z-axis, followed by $\\\\theta$ about the y-axis and then $\\\\phi$ about the z-axis. The space of all arbitrary rotations is isomorphic to the hypersphere $S^3$ with three rotation parameters. The alt-az rotation removes the first rotation DOF about the z-axis, which reduces the set of rotations to the quotient $SO(3)/SO(2)$ (isomorphic to $S^2$). As suggested by Reviewer 3, the set of alt-az rotations defined on $S^2 = SO(3)/SO(2)$ is not a mathematical group, so we will revise the term accordingly in the paper.\", \"q3\": \"What is the benefit of containing a filter on this quotient group rather than using convolution filters within the full rotation group? Could a simple experiment convince the reader that the proposed approach is better than using convolutions in SO(3)?\", \"a3\": \"The benefit of using alt-az rotation instead of SO(3) rotation is the domain consistency and model simplicity. The input image is defined on $S^2$, and the output image from an alt-az spherical convolution layer is still on $S^2$. If we use SO(3) rotation, the output image will be augmented onto $S^3$. For the successive layers, there should be some special treatment of the increased dimension, e.g. (a) use a max pooling along the added dimension axis (Masci et al 2015) right after each convolution layer and pull the output image back to the original domain; (b) use a filter with higher dimensions for all the successive layers (Cohen et al 2018), and max pool it only for the last convolution layer. Intuitively, we believe the first method will introduce too many local rotation degrees of freedom, which will ruin the stability of the network (further theoretical analysis needs to be done for this assertion). The second strategy is theoretically sound, but it will increase the model complexity.\\n\\nIn image convolution, the standard strategy is to allow the filter\\u2019s translation (\\u201calt-az rotation\\u201d in our case) while fixing its rotation (azimuthal rotation in our case). The rotation invariance is obtained using data augmentation. Many recent papers on rotation invariant CNNs or equivariant networks in $R^2$ allow the rotation of filters in $R^2$, which avoids data augmentation at the price of increased model size and computational burden. The two strategies trade off between increased training data size and increased network model size. \\nWe do not claim that our method is better than the SO(3) spherical convolution in terms of 3D shape recognition performance. But our method offers an alternative, simple, efficient computation of spherical CNNs and it is a standard extension from $R^2$ to $S^2$ convolution. We claim the major contribution of this paper is an alternative way to conduct spherical convolution which avoids the expensive Fourier transform, enables a simple GPU implementation of spherical convolution, and supports local-to-global spherical feature extraction.\", \"q4\": \"Is there a dependence created by the spherical parameterization strategy?\", \"a4\": \"We do not fully understand this question. Can you please describe \\u201cdependence\\u201d in more detail? Do you mean the dependence on the initial triangulation of a mesh?
If yes, the spherical parametrization is dependent on the initial triangulation of an object. We are seeking a bijective mapping from the initial triangulation to a spherical triangulation with least distortion using authalic mapping. After a spherical parameterization is obtained, the icosahedron-subdivision-based resampling process makes the input image to the network independent of the initial triangulation of an object.\"}", "{\"title\": \"Potential impact, but comparison could better highlight improvements in practical applications\", \"review\": [\"Deep Learning 3D Shapes using Alt-Az Anisotropic 2-Sphere Convolution\", \"This paper presents a polar anisotropic convolution scheme on a unit sphere. The known non-shift-invariance problems of current manifold neural nets are avoided by replacing filter translation with filter rotation on a sphere. Spherical convolutions are thus enabled and are rotation invariant compared to manifold convolutions. This shift also enables a proposed angular max-pooling scheme. Results are presented on mesh projections, shape classification and shape retrieval.\", \"The paper generally reads well. Tackling the learning problem on a unit sphere has high potential; however, the proposed paper seems to be highly constrained by heuristics on a 2-sphere, such as constraining filters on a reduced rotation group to 2 rotations. This could be fine for many 3D applications, but the results may lack an exhaustive comparison with other spherical and manifold-based methods on the proposed experiments. Currently, several variants of data augmentation are used, and the discussion may lack an explicit comparison with the state-of-the-art of spherical and spectral methods. This may impair understanding of the context in which the proposed method would work best.\", \"Other comments, possible clarifications and improvements:\", \"[Method]\", \"Can this be extended to unit 2-balls?\", \"Isn't the \\\"alt-az rotation group\\\" the same as SO(3)? If orientation is removed, what quotient group would this be?\", \"What is the benefit of containing a filter on this quotient group rather than using convolution filters within the full rotation group? Could a simple experiment convince the reader that the proposed approach is better than using convolutions in SO(3)?\", \"Is there a dependence created by the spherical parameterization strategy?\", \"How robust is the convolution scheme to topological defects, such as holes, noise?\", \"Spherical images may induce parameterization distortion if using a lat-lon grid, which would require complex variable filters on the spherical image.
Are these variable filters burdening the computational complexity?\", \"How to handle the distortion induced by the spherization process?\", \"How to handle discontinuities around the sphere poles?\", \"Computing geodesics may be costly - how does it impact performance?\", \"Does this rely on data augmentation to cover rotation invariance of filters?\", \"Now icosahedrons are used - could the convolution work on an arbitrary mesh discretization, ranging from an ideal isoparametric sphere to a highly irregularly-triangulated mesh?\", \"The remeshing strategy to a sphere also loses information from the original mesh connectivity - For instance, links between mesh nodes on the original surface may convey important information (e.g., brain connectivity in neuroscience); remeshing to a sphere would lose such connectivity information.\", \"[Results]\", \"The experiments show the proposed method with several augmented approaches - How exactly are data augmented?\", \"Comparison with other spherical methods (Cohen et al 2018), or manifold-based methods (Monti et al 2018)? Illustrating the pros and cons with these respective state-of-the-art?\", \"Improvements could be better emphasized in Fig 6, Table 3 - how is the method better than others?\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Deep Learning 3D Shapes Using Alt-az Anisotropic 2-Sphere Convolution\", \"review\": \"# Weaknesses\\nApplications are a bit unclear.\\nIt would be nice to see a better case made for spherical convolutions within the experimental section. The experiments on SHREC17 show all three spherical methods under-performing other approaches. It leaves it unclear to the reader when someone should choose to utilize a spherical method or when the proposed method would then be preferred compared to other spherical methods. Is there a task that this representation significantly outperforms other spherical methods and non-spherical methods? Or a specific useful application where spherical methods in general outperform other approaches? \\n\\n# Strengths:\\nThe method is well developed and explained. \\nAbility to implement in a straightforward manner on GPU.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Deep Learning 3D Shapes Using Alt-az Anisotropic 2-Sphere Convolution\", \"review\": \"# Summary\\nThis paper proposes a new kind of spherical convolution for use in spherical CNNs, and evaluates it on rigid and non-rigid 3D shape recognition and retrieval problems. Previous work has either used general anisotropic convolution or azimuthally isotropic convolution. The former produces feature maps on SO(3), which is deemed undesirable because processing 3-dimensional feature maps is costly. The latter produces feature maps on the sphere, but requires that filters be circularly symmetric / azimuthally isotropic, which limits modeling capacity. This paper proposes an anisotropic spherical convolution that produces 2D spherical feature maps. The paper also introduces an efficient way of processing geodesic / icosahedral spherical grids, avoiding complicated spectral algorithms.\\n\\n# Strengths\\nThe paper has several strong points.
It is well written, clearly structured, and the mathematics is clear and precise while avoiding unnecessary complexity. Much of the relevant related work is discussed, and this is done in a balanced way. Although it is not directly measured, it does seem highly likely that the alt-az convolution is more computationally efficient than SO(3) convolution, and more expressive than isotropic S2 convolution. The most important contribution in my opinion is the efficient data structure presented in section 4, which allows the spherical convolution to be computed efficiently on GPUs for a grid that is much more homogeneous than the lat/lon grids used in previous works (which have very high resolution near the poles, and low resolution at the equator). The idea of carving up the icosahedral grid in just the right way, so that the spherical convolution can be computed as a planar convolution with funny boundary conditions, is very clever, elegant, and practical.\\n\\n\\n# Weaknesses\\nThere is however a misunderstanding about the properties of the alt-az convolution that must be cleared up before this paper can be published. To start with, the set of rotations R(phi, nu, 0) called the alt-az group in this paper is not a group in the mathematical sense. This is easy to see, because a composition of rotations of the form Rz(phi) Ry(nu) is not generally of that form. For instance we can multiply Rz(phi) Ry(nu) by the element Rz(omega)Ry(0) = Rz(omega), which gives the element Rz(phi) Ry(nu) Rz(omega). As noted in the paper, this is a general element of SO(3) (and hence not in the set of alt-az rotations). So the closure axiom of a group is violated.\\n\\nThis matters, because the notion of equivariance really only makes sense for a group. If a layer l satisfies l R = R l (for R an alt-az rotation), then it automatically satisfies l RR' = RR' l, which means l is equivariant to the whole group generated by the set of alt-az rotations. As we saw before, this is the whole rotation group. This would mean that the layer is actually SO(3)-equivariant, but it has been proven [1] that any rotation equivariant layer between scalar spherical feature maps can be expressed as an azimuthally isotropic convolution. Since the alt-az convolution is not isotropic and maps between scalars on S2, it cannot be equivariant. This also becomes apparent in the experiments section, where rotational data augmentation is found to be necessary. The paper does not contain an attempted proof of equivariance, and if one tries to give one, the impossibility of doing so will become apparent.\\n\\nI note that the alt-az convolution *is* equivariant to rotations in the subgroup SO(2) of rotations around the Z-axis.\\n\\nAnother somewhat jarring fact about the alt-az convolution is that it is not well defined on the south pole. The south pole can be represented by any pair of coordinates of the form phi in [0, 2pi], nu = +/- pi. But it is easy to see that eq. 10 will give different results for each of these coordinates, because they correspond to different rotations of the filter about the Z-axis. This is ultimately due to the fact that the set of alt-az rotations is not the same as the set of points on the sphere, topologically speaking. The set of points on the sphere can only be viewed as the quotient SO(3)/SO(2).\\n\\nThe paragraph motivating the alt-az convolution on page 4 is not very clear, and some claims are questionable. I agree that local SO(2) invariance is too limiting.
But it is not true that rotating filters is not effective in planar/volumetric CNNs, as shown by many recent papers on equivariant networks. I would suggest rewriting this paragraph to make it clearer and less speculative, and acknowledge that although rotating filters might increase computational complexity, it has often been shown very effective.\\n\\n\\n# Other comments\\n\\nThe experiments show that the method is quite effective. For instance, the SHREC17 results are on par with Cohen et al. and Esteves et al., presumably at a significantly reduced computational cost. That they do not substantially outperform these and other methods is likely due to the input representation, which is lossy, leading to a maximal performance shared by all three methods. An application to omnidirectional vision might more clearly show the strength of the method, but this would be a lot of work so I do not expect the authors to do that for this paper.\\n\\nIt would be nice to see a more direct comparison between the three definitions of spherical convolution (general SO3, isotropic S2, and anisotropic S2). Right now, the numbers reported in Cohen et al. and Esteves et al. are copied over, but there are probably many differences between the precise setup and architectures used in these papers. It would be interesting to see what happens if one uses the same architecture on a number of problems, changing only the convolution in each case.\\n\\nInitially, I was a bit puzzled about why SO(3) augmentation seems to reduce accuracy in table 1. I think this is because SO(3) augmentation actually makes the classification problem harder if the input is initially aligned. Some more explanation / discussion would be good. \\n\\nIt would be nice to explain the spherical parameterization in more detail. Is this operation itself rotation equivariant? \\n\\n\\nTypos & minor issues\\n\\n- Abstract: \\\"to extract non-trivial features\\\". The word non-trivial really doesn't add anything here. Similarly \\\"offers multi-level feature extraction capabilities\\\" is almost meaningless since all DL methods can be said to do so.\\n- Below eq. 5, D_R^{-1} should equal D_R(-omega, -nu, -phi). The order is reversed when inverting.\\n- \\\"Different notations of convolutions\\\" -> notions\\n- \\\"For spherical functions there is no consistent and well defined convolution operators.\\\" As discussed above, the issue is quite a bit more subtle. There are exactly two well-defined convolution operators, but they have some characteristics deemed undesirable by the authors.\\n- \\\"rationally symmetric\\\" -> rotationally\\n- \\\"exact hierarchical spherical patterns\\\" -> extract\\n- It seems quite likely that the unpacking of the icosahedral/hexagonal grid as done in this paper has been studied before in other fields. References would be in order. 
Similarly, hexagonal convolution has a history in DL and outside.\\n- Bottom of page 7, capitalize \\\"for\\\".\\n- \\\"principle curvatures\\\" -> principal.\\n- \\\"deferent augmentation modes\\\" -> different\\n- \\\"inspite\\\" -> in spite\\n- \\\"reprort\\\" -> report\\n- \\\"utlize\\\" -> utilize\\n- \\\"computer the convolution\\\" -> compute\\n\\n\\n# Conclusion\\n\\nAlthough the alt-az convolution lacks the mathematical elegance of the general anisotropic and azimuthally isotropic spherical convolutions, it still seems like a practically useful operation for some kinds of data, particularly when implemented using the homogeneous icosahedral/hexagonal grid and fast algorithm presented in this paper. Hence, I would wholeheartedly recommend acceptance of this paper if the authors correct the factual errors (e.g. the claim of SO(3)-equivariance) and provide a clear discussion of the issues. For now I will give an intermediate rating to the paper.\\n\\n\\n[1] Kondor, Trivedi, \\\"On the Generalization of Equivariance and Convolution in Neural Networks to the Action of Compact Groups\\\"\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}" ] }
SkeVsiAcYm
Generative predecessor models for sample-efficient imitation learning
[ "Yannick Schroecker", "Mel Vecerik", "Jon Scholz" ]
We propose Generative Predecessor Models for Imitation Learning (GPRIL), a novel imitation learning algorithm that matches the state-action distribution to the distribution observed in expert demonstrations, using generative models to reason probabilistically about alternative histories of demonstrated states. We show that this approach allows an agent to learn robust policies using only a small number of expert demonstrations and self-supervised interactions with the environment. We derive this approach from first principles and compare it empirically to a state-of-the-art imitation learning method, showing that it outperforms or matches its performance on two simulated robot manipulation tasks and demonstrate significantly higher sample efficiency by applying the algorithm on a real robot.
[ "Imitation Learning", "Generative Models", "Deep Learning" ]
https://openreview.net/pdf?id=SkeVsiAcYm
https://openreview.net/forum?id=SkeVsiAcYm
ICLR.cc/2019/Conference
2019
{ "note_id": [ "S1eyVECGeE", "SygFjbDuAQ", "SkxG5hUuA7", "S1g5jJWU0m", "BJguJBJIRm", "BJxza4aHRm", "SJlFLGnSRX", "r1gSfAsSRX", "rJedUOWm0Q", "S1xOiCceCQ", "Hyg1Q9I7pQ", "rJlWIFUQpX", "H1lGouImTm", "Skl8Fy36hm", "BkeXEPYKnQ", "B1x9M-TO37" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544901670825, 1543168416919, 1543167113667, 1543012258509, 1543005407546, 1542997177763, 1542992465258, 1542991372656, 1542817872144, 1542659743773, 1541790230606, 1541790024876, 1541789849717, 1541418878081, 1541146411121, 1541095698172 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper616/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper616/Authors" ], [ "ICLR.cc/2019/Conference/Paper616/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper616/Authors" ], [ "ICLR.cc/2019/Conference/Paper616/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper616/Authors" ], [ "ICLR.cc/2019/Conference/Paper616/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper616/Authors" ], [ "ICLR.cc/2019/Conference/Paper616/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper616/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper616/Authors" ], [ "ICLR.cc/2019/Conference/Paper616/Authors" ], [ "ICLR.cc/2019/Conference/Paper616/Authors" ], [ "ICLR.cc/2019/Conference/Paper616/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper616/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper616/AnonReviewer2" ] ], "structured_content_str": [ "{\"metareview\": \"This paper proposes to estimate the predecessor state dynamics for more sample-efficient imitation learning. While backward models have been used in the past in reinforcement learning, the application to imitation learning has not been previously studied. The paper is well-written and the results are good, showing clear improvements over GAIL in the presented experiments. The primary weakness of the paper is the lack of comparisons to the baselines suggested by reviewer 1 (a jumpy forward model and a single step predecessor model) to fully evaluate the contribution, and to SAIL and AIRL. Despite these weaknesses, the paper slightly exceeds the bar for acceptance at ICLR.\\nThe authors are strongly encouraged to include these comparisons in the final version.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"meta review\"}", "{\"title\": \"learning a policy as well\", \"comment\": \"To clarify based on my understanding:\\n* In the backward view, we train a model to generate trajectories that end at a demonstrated state and thus train the agent to recover. The model is factorized split into B(s|s') and B(a|s,s'), the latter corresponds to a policy that has been given additional information which enables learning.\\n* For forwards rollouts, we would generate trajectories that start at a demonstrated state. After that, it would simply follow the policy that the forward model has learned. If this policy is trained from expert demonstrations, the model therefore has to solve the behavioral cloning problem. 
We already compare against BC and show that it doesn't learn a good policy.\\n\\nPlease let me know if this scenario fully describes the suggested baseline.\"}", "{\"title\": \"thanks for your reply\", \"comment\": \"\\\"the forward model would have to learn to predict actions as well as states.\\\"\\n\\nYou would be learning a policy as well...\\n\\n\\\"If the model is trained based on expert data alone, it would therefore have to learn to predict actions for each state based on the demonstrations. This is essentially the behavioural cloning problem which we have found to either overfit given the amount of data provided or to learn a very weak policy due to the amount of regularization required. \\\"\\n\\nI'm not sure I understand the argument. You would learn a model, then sample traces from the model, and use the generated traces for imitation learning.\\n\\n\\\" Please also note that in a large number of our experiments, including the experiment on the physical robot, we are not using demonstrated actions at all and are training the agent from demonstrated states alone. In such scenarios, a method that trains a model based on expert data alone would not be able to predict actions at all.\\\" \\n\\nYes. I agree with this point. \\n\\nStill, it remains that the authors have not yet compared the proposed method with a simple baseline of unrolling the backwards model and generating traces from different starting positions.\"}", "{\"title\": \"Forward models on expert data\", \"comment\": \"I believe I understand. However, in order to generate training samples for supervised learning, the forward model would have to learn to predict actions as well as states.\\nIf the model is trained based on expert data alone, it would therefore have to learn to predict actions for each state based on the demonstrations. This is essentially the behavioral cloning problem which we have found to either overfit given the amount of data provided or to learn a very weak policy due to the amount of regularization required. Methods such as GAIL or GPRIL are able to combat overfitting by learning more about the environment using their own, self-supervised, experience. If the action prediction part of the model is trained using samples generated from the current policy, however, the expectation of the supervised learning gradient using generated training samples will be 0. Please also note that in a large number of our experiments, including the experiment on the physical robot, we are not using demonstrated actions at all and are training the agent from demonstrated states alone. In such scenarios, a method that trains a model based on expert data alone would not be able to predict actions at all.\\n\\nPlease let me know if my assumptions are correct.\"}", "{\"title\": \"Quick reply\", \"comment\": \"\\\"In a forward model, the sampled actions are usually generated by the current policy. Note that E_\u03c0[\u2207log \u03c0(a|s)|s]=0 for all states. Therefore, performing behavioural cloning on rollouts of a well-trained forward model is not changing the policy in a meaningful way. \\\"\\n\\nRight. I was thinking that you can learn a forward model from the expert trajectories. And then using the forward model you can sample traces from every kth state (k == 5/10), \\nand using the sampled traces, you can use those traces for imitation learning (i.e. supervised learning). \\n\\nI am just about to board a flight in the next minute, so I will reply to the other part in a few hours.
Sorry about that.\"}", "{\"title\": \"Regarding forward models and BC\", \"comment\": \"Thank you. At this point, we would like to give a quick reply regarding a single aspect of your response and request further clarification.\\n\\nSpecifically, our reply is regarding a comparison against behavioral cloning, which we presume is meant to be in combination with a forward model: \\n+ In a forward model, the sampled actions are usually generated by the current policy. Note that E_\u03c0[\u2207log \u03c0(a|s)|s]=0 for all states. Therefore, performing behavioral cloning on rollouts of a well-trained forward model is not changing the policy in a meaningful way. \\n+ We have experimented with a variant of this where the states are taken from expert trajectories while the actions are generated using an action model that is conditioned on the target state, similar to B(a|s,s\u2019) in our work. It is possible that this approach may work in some domains but we found that it leads to a self-reinforcing loop in our experiments. We believe this to be due to the following reason: If the robot is drifting to the right of the expert trajectory at the beginning of training, the conditioned action model will be trained with a strong prior that emphasizes actions that cause the drift. In our approach, we find that this is counteracted by our model generating states that are far to the right of the expert trajectory, providing samples in the part of the state-space where the conditioned action model is trained as well as a larger discrepancy to the target which forces the action model to correct its course. If we instead use expert states, this effect is missing and the effect obtained by conditioning on the target is much weaker. Thus, we observe the prior taking over and the drift to be reinforced.\\n\\nPlease let us know if either of these variations corresponds to the suggested baseline.\"}", "{\"title\": \"Thanks!\", \"comment\": \"Thanks for your time in engaging in discussions with the reviewer. The reviewer appreciates it.\\nAs of now, I don't think I'm convinced.\\n\\n\\\" a state-of-the-art imitation learning algorithm\\\" \\n\\nIt could be. But by only comparing the proposed approach on similar envs, I'm not sure it makes sense to call your proposed method (which has already been used for RL) a \\\"state of the art imitation learning algorithm\\\"\\n\\n\\\"We agree that a comparison to unrolled forward or backwards models is very interesting.\\\"\\n\\nThese SHOULD be the baselines. Unless the authors compare to these baselines, the reviewer does not know how novel the proposed method is...\\n\\n\\\" In our view, novel theoretical work, real-world experiments, and comparison to potential alternatives for model components that lack established baselines is too dense for a single paper.\\\"\\n\\nThis is not DENSE. These SHOULD be the baselines. Asking for the right baselines is my job. I'm not criticizing the method. I have already pointed out that I *REALLY* enjoyed reading the paper, and the paper is well written. I'm not really sure why the authors don't want to do this. Taking your code, and comparing to the unrolled backtracking model should be a minimal change. And comparing to DYNA is a natural baseline. \\n\\nEven just compare to BC (behaviour cloning).\\n\\nPseudo Code (Baseline 1)\\n\\nTake the expert trajectories (s, a, s_t+1) \\nLearn a forward model. \\nGenerate traces from every kth state from the expert trajectories.
(where k==1/2/5/10)\\n\\nPseudo Code (Baseline 2)\\n\\nUnrolled backward model (or predecessor model)\\n\\n(Edit:1) P.S. - It may be possible that the reviewer is missing something because of which it's technically not possible to compare to these baselines; if that's the case, I would appreciate it if the authors could point it out.\\n\\nThanks!\"}", "{\"title\": \"Regarding the suggested baselines\", \"comment\": \"Thank you for elaborating on your review. The main contributions of our paper are (1) a state-of-the-art imitation learning algorithm as outlined in Algorithm 1, and (2) an algorithmic framework using predecessor models to estimate the state-distribution gradient. To the best of our knowledge and including the new references suggested in the review, these contributions are significantly different from prior work.\\nWe agree that a comparison to unrolled forward or backwards models is very interesting. However, there are numerous ways to approach this, as the reviewer points out, and providing one that isn\u2019t a straw-man is non-trivial for reasons discussed below. We view our work instead as providing the theoretical foundation for an alternative approach that does not rely on unrolling, and focused our efforts on demonstrating viability on a real-world task setting. In our view, novel theoretical work, real-world experiments, and comparison to potential alternatives for model components that lack established baselines is too dense for a single paper. We note that we did perform evaluation against GAIL, which is an established baseline for our task setting.\\n\\n1. Our contribution is a novel imitation learning approach that can work with either a jumpy or a single-step predecessor model. The choice of model does not matter to the applicability of our algorithm. While we believe that a jumpy model will work better, our specific contribution would remain the same if we had found that a single-step approach works just as well.\\n2. Unlike in reinforcement learning, where a forward-model may trivially be incorporated, the use of a forward-model is non-trivial in imitation learning. We provide some analysis relating the time-reversed approach to the forward view in Appendix C. The reviewer suggests a Dyna-like approach, perhaps referring to a combination of Dyna and GAIL and perhaps incorporating the suggested work by Buesing et al. We are not aware of a successful application of such a combination and it is unclear whether it would be stable. However, work submitted to this conference (https://openreview.net/forum?id=Hk4fpoA5Km) aims to evaluate a very similar proposition by using GAIL with a replay buffer. We are excited to see other investigations into sample efficient imitation learning and believe that adversarial objectives will continue to have their place in the field. However, we would like to point out that while the work in question does significantly increase sample-efficiency, it does not reach the level of real-world applicability and is furthermore clearly concurrent.\\n\\nWe compare against a widely accepted state-of-the-art imitation learning algorithm as a baseline and demonstrate that the approach works on a real robot platform.
We believe that those experiments validate our claims but hope to join other researchers in future work on the development and evaluation of optimized, reverse-time models as well as novel imitation-learning algorithms utilizing jumpy forward models.\"}", "{\"title\": \"Response\", \"comment\": \"Thanks for the feedback and clarification regarding the use of no environment-interaction samples for training the policy.\\nAs emphasised in the original review, the experimental section remains a shortcoming, and to demonstrate versatility other tasks would have to be included. If the approach is robust enough, this should be fairly little effort and the code for e.g. the original GAIL environments is openly available (https://github.com/openai/imitation). \\nBy missing the comparison to some of the closest (SAIL) or current state-of-the-art approaches (AIRL, etc.) the authors further weaken the experimental section in an otherwise strong paper. \\nBased on these shortcomings the paper cannot fully evaluate the proposed approach.\"}", "{\"title\": \"Reply\", \"comment\": \"Sorry for the late reply.\\n\\n\\\"the error in naive one-step models grows exponentially \\\" \\n\\nLearning any kind of model (whether a forward model or a predecessor model) for more than (let's say 10 steps) is a challenging task. This paper needs to argue (by showing experiments empirically) that using predecessor models is better as compared to these 2 baselines.\\n\\n- When the agent learns a jumpy forward model (also used in the learning-to-query paper [1]), as compared to a jumpy predecessor model.\\n- Unrolling the predecessor model step by step.\\n- Let's say you have a trajectory of length 1000; then one can take every 5th state on this expert trajectory, generate traces (according to the learned forward model), and use these generated traces for imitation learning. As the authors pointed out, unrolling the forward model (or predecessor model) is prone to compounding errors, but one can easily make predictions for 5/10 steps. And since you have an expert trajectory, you can use every kth state as an input to the forward model for generating traces (here k==5/10). This won't be prone to \\\"compounding errors\\\".\\n\\nUnless the authors compare to these baselines, I don't think the contribution is justified. \\n\\n[1] https://arxiv.org/abs/1802.03006\"}", "{\"title\": \"Changes made to the manuscript & addressing questions raised.\", \"comment\": [\"Thank you for your review. We would like to address some of the questions and points raised in your review:\", \"Regarding sample efficiency, our algorithm only uses artificial samples and expert samples when updating the policy. The reported number of environment samples are the samples used to train the generative model. Note that while it is efficient compared to other algorithms of this type, the data efficiency is not extraordinary for training a network of this size on a problem of these dimensions. This indicates that the samples generated from the model are likely to be useful even as the policy changes.\", \"Regarding the chosen domains, we believe that they highlight a type of problem that is difficult for approaches based on adversarial updates (due to sample efficiency as well as the ability to control the state in its entirety) while also being representative for a class of problems one might encounter in practice.
While we believe our approach to be widely applicable and would like to see it applied in other domains, existing approaches are already able to achieve very high scores on domains such as the mujoco walkers.\", \"In comparison to SAIL, we aim to address scalability w.r.t. the number of parameters of the policy. We found that larger policies are able to achieve more accurate results on our domains, yet policies of this size are out of reach for SAIL, which has to predict the gradient for each parameter of the network.\", \"We added additional steps to Appendix A to make the derivation more self-contained and changed the wording in the related works section based on your suggestion. Unfortunately, we are currently not able to release the code.\"]}", "{\"title\": \"Specifying the scope of our work and additional citations.\", \"comment\": \"Thank you for your review. Predecessor models do indeed have a long history in reinforcement learning and recent work explores the use of deep networks in this context. While our algorithm followed directly from the derivation of the state-distribution gradient in section 3.1, we can see that a comparison to the aforementioned works might be useful to the reader and have added this in section 1. We hope that this addition will adequately specify the scope of our work. In particular, we claim the following two contributions:\\n\\n1. Derivation of the state-distribution gradient based on samples from a predecessor model. While such models have been used in the past, to the best of our knowledge this connection has not been pointed out before. Instead, most work focuses on the use of predecessor models as a more efficient order of Bellman backups, while the recent Recall Traces uses a justification based on a variational lower bound. We believe that our derivation provides further justification to the approach used in Recall Traces and may furthermore help to guide design decisions when developing such algorithms in the field of reinforcement learning.\\n2. Development of a novel, state-of-the-art imitation learning algorithm. To the best of our knowledge, the use of predecessor models to achieve state-action distribution matching in imitation learning is novel. We believe that predecessor models are a natural fit for imitation learning as, unlike in reinforcement learning, future observations and their accordance with demonstrations are very difficult to evaluate. We demonstrate the effectiveness of such models on traditionally difficult real world imitation learning problems in our evaluation.\\n\\nRegarding our choice of multi-step models and comparison to one-step models of either direction, we note that in the general case, the error in naive one-step models grows exponentially (Venkatraman et al., 2015), thus requiring careful design of such models. Recent work such as Ha and Schmidhuber, 2018 and Gregor and Besse, 2018 achieves impressive predictions on sequential rollouts, indicating that it is very likely that a one-step model can be applied in our setting as well. However, these works require a significant effort on the modelling side and we thus decided to side-step the issue by modelling the desired distribution directly. As the contact dynamics in our domain can be complex, we believe that the effort required in our domain would have been significant as well.
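To make this contrast concrete, the following is a minimal, purely illustrative sketch of the two ways of drawing a predecessor sample K steps before a demonstrated state s_T (the names one_step_model and jumpy_model and their sampling interfaces are hypothetical and do not correspond to code from the paper):\\n\\n# Unrolled one-step predecessor model: K recursive calls, so per-step errors can compound.\\ns = s_T\\nfor _ in range(K):\\n    s = one_step_model.sample(s)  # conditions on the previous, possibly erroneous, sample\\npredecessor = s\\n\\n# Jumpy predecessor model: a single call to a model of p(s_{T-K} | s_T); no recursion, so no compounding.\\npredecessor = jumpy_model.sample(s_T, K)\\n\\n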
We note that the main contribution of this paper is the use of samples of a predecessor model to match state-action distributions in a principled way, while the choice of model is a design decision that was made to avoid increasing complexity.\"}", "{\"title\": \"Additional discussion added concerning the effect of approximations made in the paper.\", \"comment\": [\"Thank you for your review. The questions you raise about the effect of the approximations made in our derivation are valid and we have added additional discussion to the paper that we hope will answer these questions satisfactorily:\", \"The discount factor \u03b3 is not only similar to the discount factor used in reinforcement learning but can be seen as identical. This was not immediately apparent in the original submission and we have added Appendix C to the manuscript to explore the connection between our gradient estimate and policy gradients. As a result, we can draw on the understanding of the discount factor in reinforcement learning to gain insight into the behavior of \u03b3 and conclude that, first, the discount factor introduces a trade-off where an agent with lower \u03b3 prefers to reach demonstrated states more quickly, while agents with higher \u03b3 will aim to reproduce the state-action distribution more closely in the long-term. Second, as \u03b3 approaches 1, the variance grows. However, in reinforcement learning, it has been empirically shown that lower discount factors can be an accurate, low-variance approximation even when the true objective is more accurately described by the average reward objective (\u03b3 -> 1). The alternative derivation introduced in Appendix C thus indicates that lower values of \u03b3 are likely to be reasonable approximations in the imitation learning setting as well.\", \"With regards to stationarity, under the usual ergodicity assumptions the expected distribution of state-action pairs the agent will observe and the stationary distribution should be identical in the infinite horizon case (using the modified MDP with terminal states being treated as transitions to initial states as discussed in section 2.1). In general, matching the joint stationary distribution to the empirical distribution of the expert implies a form of loop in the agent's behavior, which may be as simple as restarting after reaching a terminal state. This is the case in many practical scenarios as well as our experiments. While handling the finite horizon case explicitly might also be interesting, we are not considering it for the purposes of this work.\", \"The scaling factors \u03b2 were added to provide more freedom to tune the behavior of the learning algorithm but we agree that additional discussion would be useful and have added it to section 3.3. In particular, the factors are the result of dropping the factor of 1/(1-\u03b3) in equation 7; this indicates that a sensible starting point would be \u03b2_\u03c0=(1-\u03b3)\u03b2_d. However, we did not find this to be optimal in all cases.
In particular, if behavioral cloning is likely to overfit strongly, lower values of \u03b2_\u03c0 may be adequate, while in cases where exploring to learn the generative models is more difficult, higher values of \u03b2_\u03c0 may provide more guidance.\", \"We hope that we answered your questions to your satisfaction; please let us know if you have further questions or concerns that you would like us to address.\"]}", "{\"title\": \"interesting idea, but some issues need to be clarified.\", \"review\": \"This paper studies the problem of matching the state-action distributions of agent and expert demonstrations. In order to address this problem, the authors consider a likelihood treatment comprising a conditional probability (which is estimated from demonstrations) and a state distribution (which is estimated from sampling approximations).\\n\\nThe authors provide a decent result (i.e., Eq. (7)) to estimate the gradient of the logarithmic state distribution. One problem is that it is unclear how the discount factor $\\\\gamma$ influences this result.\\n\\nIn addition, in (12), two scaling factors are used, so how should these weights be balanced?\\n\\nSpecifically, in (11), it seems the authors are considering the stationary joint state-action distribution, which is different from the state-action distribution generated by the agent on-line; it is suggested to clarify this issue.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Good Results But Relevant Literature is missing.\", \"review\": \"The paper proposes to use predecessor models for imitation learning to overcome the issue of only observing expert samples during imitation from expert trajectories.\\n\\nThe paper is very well written. But the proposed method is really not novel. The idea of using predecessor models has already been explored in multiple places [1], [2] (but not in an imitation learning scenario!). Hence, the novelty comes from using the predecessor models for imitation learning. The introduction of the paper should mention this to reflect the contribution. \\n\\n[1] Recall Traces: Efficient Backtracking models for efficient RL https://arxiv.org/abs/1804.00379\\n[2] Organizing Experience: A Deeper Look at Replay Mechanisms for Sample-based Planning in Continuous State Domains https://arxiv.org/abs/1806.04624\\n\\nBoth of these papers should be cited and discussed.\", \"results\": \"The proposed method outperforms GAIL and behaviour cloning in terms of sample efficiency on simulation-based manipulation tasks.\\n\\nRegarding experiments, I would like to see certain baselines.\\n\\n- What happens when you predict sequentially using predecessor models? I understand that the sequential generation is prone to accumulating errors, but as [1] points out, using predecessor models you can sample from many states on the expert trajectory. And hence it is possible to get a good learning signal even while sampling shorter trajectories using predecessor models.\\n\\n- Comparison with Dyna-based methods. For this baseline, the authors would learn a forward model.
And then sample from the forward model, and use the samples for imitation learning.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Compelling, sample efficient approach to imitation learning using learned dynamics models. Experiments could be extended.\", \"review\": \"The submission builds on recent advances in neural density estimation to develop a new algorithm for imitation learning based on a probabilistic model for predecessor state dynamics. In particular, the method trains masked autoregressive flows as a probabilistic model for state-action pairs conditioned on future states. This model is used to estimate the gradient of the stationary distribution of a policy's visited states. Finally, the proposed objective uses this estimate and the gradient of the log likelihood of expert actions under the policy to maximise the similarity of the expert\u2019s and agent\u2019s stationary state-action distributions.\\n\\nThe proposed method outperforms existing imitation learning approaches (GAIL & BC) on 2 simulation-based manipulation tasks. It performs particularly well in terms of sample efficiency. \\nThe magnitude of difference between the sample efficiencies of GAIL and the proposed approach seems quite surprising and it would be beneficial if the authors could explicitly state if the measured number of samples includes the ones used for training of the probabilistic model as well as the policy (apologies if I have missed a section fulfilling this purpose).\\n\\nWhile the improvements on the presented experiments are clear, the experimental section represents a small shortcoming of the submitted paper. The 2 experiments (clip and peg insertion) are quite similar in type and do not take into account other common domains, e.g. locomotion tasks from the original GAIL paper. Furthermore, an additional comparison to SAIL would be recommended since the approaches are closely related, as the authors acknowledge. The provided comparison with different types of available expert data is quite interesting and could possibly be extended to test other state-of-the-art methods (action-free versions of GAIL, AIRL, etc.).\\n\\nNonetheless, the paper overall presents a strong submission based on novelty & relevance of the proposed method and is recommended for publication.\", \"minor_issues\": [\"Related work: improve transitions between the sections about trajectory tracking and BC.\", \"Ablation studies with less flexible probabilistic models would strengthen the experiment section further.\", \"Add derivation from Eq. 3 to 4 and 5 to appendix to render the paper more self-contained and easier to access.\", \"A release of the code base would further strengthen the contributions of the submission.\"], \"general_recommendation\": [\"The authors are encouraged to further investigate off-policy corrections for improved convergence.\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
BJgEjiRqYX
A Case for Object Compositionality in Deep Generative Models of Images
[ "Sjoerd van Steenkiste", "Karol Kurach", "Sylvain Gelly" ]
Deep generative models seek to recover the process with which the observed data was generated. They may be used to synthesize new samples or to subsequently extract representations. Successful approaches in the domain of images are driven by several core inductive biases. However, a bias to account for the compositional way in which humans structure a visual scene in terms of objects has frequently been overlooked. In this work we propose to structure the generator of a GAN to consider objects and their relations explicitly, and generate images by means of composition. This provides a way to efficiently learn a more accurate generative model of real-world images, and serves as an initial step towards learning corresponding object representations. We evaluate our approach on several multi-object image datasets, and find that the generator learns to identify and disentangle information corresponding to different objects at a representational level. A human study reveals that the resulting generative model is better at generating images that are more faithful to the reference distribution.
[ "Objects", "Compositionality", "Generative Models", "GAN", "Unsupervised Learning" ]
https://openreview.net/pdf?id=BJgEjiRqYX
https://openreview.net/forum?id=BJgEjiRqYX
ICLR.cc/2019/Conference
2019
{ "note_id": [ "BJxPqvDZg4", "S1eK2QInaX", "BJlXQQ83am", "HyeRkQI26m", "H1x6ZGI3aX", "SJll1M8h6Q", "ryxjYb8hpX", "BkeddZo23m", "Skx5Nrw5hm", "H1g2wJN93Q" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544808335041, 1542378416752, 1542378267465, 1542378214242, 1542377988529, 1542377944280, 1542377858924, 1541349744001, 1541203250021, 1541189476078 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper614/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper614/Authors" ], [ "ICLR.cc/2019/Conference/Paper614/Authors" ], [ "ICLR.cc/2019/Conference/Paper614/Authors" ], [ "ICLR.cc/2019/Conference/Paper614/Authors" ], [ "ICLR.cc/2019/Conference/Paper614/Authors" ], [ "ICLR.cc/2019/Conference/Paper614/Authors" ], [ "ICLR.cc/2019/Conference/Paper614/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper614/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper614/AnonReviewer3" ] ], "structured_content_str": [ "{\"metareview\": \"The paper proposes a generative model that generates one object at a time, and uses a relational network to encode cross-object relationships. Similar object-centric generation and object-object relational network is proposed in \\\"sequential attend, infer, repeat\\\" of Kosiorek et al. for video generation, which first appeared on arxiv on June 5th 2018 and was officially accepted in NIPS 2018 before the submission deadline for ICLR 2019. Moreover, several recent generative models have been proposed that consider object-centric biases, which the current paper references but does not compare against, e.g., 'attend, infer, repeat' of Eslami et al., or \\\"DRAW: A Recurrent Neural Network For Image Generation\\\" of Gregor et al. . The CLEVR dataset considered, though it contains real images, the intrinsic image complexity is low because it features a small number of objects against table background. As a result, the novelty of the proposed work may not be sufficient in light of recent literature, despite the fact that the paper presents a reasonable and interesting approach for image generation.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"related literature and evaluations\"}", "{\"title\": \"Reply to Reviewer 3\", \"comment\": \"Thank you for your consideration and feedback.\\n\\nThe primary motivation of this work is to argue for object compositionality in deep generative models (and in particularly GANs), which originates from two key observations. First, real-world images are to a large degree compositional, and a generative model that is suitable equipped with a corresponding inductive bias should be better at capturing this distribution. Second, in disentangling information content corresponding to different objects at a representational level they may be recovered a posteriori unlike in unstructured models.\\n\\n > \\u201c... it is unclear if the proposed method can be generalize to handle more complicated scene such as COCO images as the experiments are all conducted using very toy-like image datasets.\\u201d\\n\\nOur works builds on prior work in purely unsupervised multi-object image generation and representation learning. Whereas prior work has focused primarily on the representation learning part (eg. [3, 4, 5]), here our focus is on scaling these ideas to more complex datasets. 
In particular, among the multi-object datasets considered in relevant prior work are Multi-MNIST [3, 4, 7], Shapes [4, 5], and Textured MNIST [5]. In this work we consider several more complex datasets, including two relational versions of Multi-MNIST (triplet, rgb), a variation on CIFAR10 that has RGB MNIST digits in the foreground, and high-resolution CLEVR images that contain many rendered geometric objects and require lighting and shadows to be modeled.\\n\\nIdeally we would be able to apply our approach to common segmentation datasets (eg. Pascal VOC, COCO), although in practice we find that these are still far out of reach for purely unsupervised approaches. Such datasets have been designed with access to ground-truth labels in mind and the large imbalance between the visual complexity of objects (i.e. intra-class variation) and the number of samples renders them unsuitable for our purpose. We consider CLEVR to be among the more complex multi-object datasets that are balanced in this way, and hence the feasibility of our approach on this dataset is an important step forward compared to prior work.\\n\\n> \u201c... it misses an investigation of alternative network design for achieving the same compositionality. For example, what would be the performance difference if one replace the MHDPA with LSTM. \u201d\\n\\nOur proposed framework incorporates MHDPA to model relations between objects. MHDPA is an instance of a graph network [1], which renders it suitable for this task. It would be valid to compare MHDPA to other instances of graph networks, eg. the interaction function from [6] or the relational mechanism from [2]. In prior experiments we have explored several ablations and extensions of the current relational mechanism that approach these configurations. We were unable to obtain significantly better FID scores for any of these variations, and so we settled on the mechanism proposed in [8] to model relations between objects. As our goal was to demonstrate the feasibility / benefits of incorporating such a mechanism, we did not consider it worthwhile to dedicate human evaluation to this. However, we do agree that the paper could mention this and we will update it accordingly to make this clearer. \\n\\n[1] Battaglia, Peter W., et al. \\\"Relational inductive biases, deep learning, and graph networks.\\\" arXiv preprint arXiv:1806.01261 (2018).\\n[2] Chang, Michael B., et al. \\\"A compositional object-based approach to learning physical dynamics.\\\" International Conference on Learning Representations. 2016.\\n[3] Eslami, SM Ali, et al. \\\"Attend, infer, repeat: Fast scene understanding with generative models.\\\" Advances in Neural Information Processing Systems. 2016.\\n[4] Greff, Klaus, et al. \\\"Neural expectation maximization.\\\" Advances in Neural Information Processing Systems. 2017.\\n[5] Greff, Klaus, et al. \\\"Tagger: Deep unsupervised perceptual grouping.\\\" Advances in Neural Information Processing Systems. 2016.\\n[6] van Steenkiste, Sjoerd, et al. \\\"Relational neural expectation maximization: Unsupervised discovery of objects and their interactions.\\\" International Conference on Learning Representations. 2018.\\n[7] Yang, Jianwei, et al. \\\"LR-GAN: Layered recursive generative adversarial networks for image generation.\\\" International Conference on Learning Representations. 2017.\\n[8] Zambaldi, Vinicius, et al.
\\\"Relational Deep Reinforcement Learning.\\\" arXiv preprint arXiv:1806.01830 (2018).\"}", "{\"title\": \"Reply to Reviewer 2 [2 / 2]\", \"comment\": \"(3) The conceptual difference between foregrounds and backgrounds i.e. that foregrounds are usually smaller than backgrounds and scattered at various locations is encoded in the learned alpha channel. Each of the object (foreground) generators draws an object in the scene at a random location (based on the mask, and RGB values), requiring location and size to be encoded in the latent representation z_i. This is important as it allows an inference model to extract this information correspondingly.\\n\\nWe would like to emphasize that the reported images in Figures 3 & 4 are representative of how the best performing models generate the scene. In other words, on all datasets we consistently find that the network generates the images as a composition of individual objects. Indeed on CLEVR we found cases in which a component generates more than one object, which is understood by the fact that the number of components is smaller than the number of objects in the scene (see also Figure 8 in Appendix A). We also find infrequent cases (primarily on CLEVR, although sometimes on CIFAR10) in which the background generator generates an additional object. This is understandable as there are no restrictions in our approach that prevent it from doing so. However, given that in almost all other cases the network generate images as compositions of objects and background it seems reasonable to conclude that these are due to optimization issues (as are common in GANs). After all, the compositional solution is clearly the superior choice, as is evident from our human evaluation compared to regular GAN.\\n\\n(4) We are unable to provide the experiment that you are proposing as unlike in [5] the MNIST digits considered in our approach do not appear at a fixed location. We did study other properties of the generated images as can be seen in Figure 6b, Figure 7, and Figure 8. In particular we find that k-GAN is better at generating 3 RGB digits (requiring the relation to be captured) compared to GAN as can be seen in Figure 6b and Figure 7. Secondly, the fact that increasing the number of components to 4 or 5 reduces the accuracy with which the correct number of digits is generated only marginally (eg. Figure 6b) provides additional evidence that the relational mechanism has learned about the correct number of objects.\\nFinally, the large difference in FID scores (Figure 9) in comparing k-GAN with and without a relational mechanism on rgb MM can only be explained by the relational mechanism correctly accounting for the different color of the digits.\\n\\n[1] Eslami, SM Ali, et al. \\\"Attend, infer, repeat: Fast scene understanding with generative models.\\\" Advances in Neural Information Processing Systems. 2016.\\n[2] Greff, Klaus, et al. \\\"Neural expectation maximization.\\\" Advances in Neural Information Processing Systems. 2017.\\n[3] Greff, Klaus, et al. \\\"Tagger: Deep unsupervised perceptual grouping.\\\" Advances in Neural Information Processing Systems. 2016.\\n[4] Lucic, Mario, et al. \\\"Are gans created equal? a large-scale study.\\\" arXiv preprint arXiv:1711.10337 (2017).\\n[5] Yang, Jianwei, et al. \\\"LR-GAN: Layered recursive generative adversarial networks for image generation.\\\" International Conference on Learning Representations. 
2017.\"}", "{\"title\": \"Reply to Reviewer 2 [1 / 2]\", \"comment\": \"Thank you for your consideration and feedback.\\n\\nThe primary motivation of this work is to argue for object compositionality in deep generative models (and in particular GANs), which originates from two key observations. First, real-world images are to a large degree compositional, and a generative model that is suitably equipped with a corresponding inductive bias should be better at capturing this distribution. Second, by disentangling information content corresponding to different objects at a representational level, they may be recovered a posteriori, unlike in unstructured models.\\n\\nIn the following we will answer each of your comments.\\n\\n(1) Our conclusion regarding FID arises from the way the Inception network (that provides the embedding) was trained. In particular, by training on ImageNet for single-object classification it is unlikely that deep layers (eg. logits or final max-pool) provide high-level features that capture properties of multiple objects accurately. This suggests that FID is limited in accurately evaluating generated images containing multiple objects, even though it is accurate in evaluating generative models on ImageNet (or related single-object tasks like CIFAR10, etc.) as shown in [4]. Similarly, this does not preclude LR-GAN (or other compositional approaches) from using FID on ImageNet or CIFAR10.\\n\\nOn the contrary, we compute FID on multi-object images using an inception network that was pre-trained on single-object images (ImageNet). We are interested in verifying that the generated images are faithful with respect to the training distribution in terms of the number of objects, their identities, etc. To the best of our knowledge FID has not been used in this way previously. LR-GAN [5] evaluates the Inception score only on MNIST-ONE and not on MNIST-TWO, although they conclude that it is unsuitable even in the single-object case (see Appendix 6.3). Based on our own observations in using FID on multi-object datasets (as summarised in Figure 9) we argue that FID is unable to judge generated images based on specific properties relating to multiple objects (eg. their total number, etc.). The large differences that are observed in evaluating the subjective quality (human eval - Figure 6a) for models with similar FID provide additional evidence that this is the case.\\n\\n(2) In this work we are interested in generating scenes as compositions of objects, and in particular in verifying that this information can be disentangled at a representation level. This requires evaluation on datasets for which a clear notion of \u201cobject\u201d is available. Compared to prior work that has focused primarily on the representation learning part (eg. [1, 2, 3]), we focus on scaling these insights to more complex multi-object datasets.\\n\\nWe would like to emphasize that relevant prior work has only focused on Multi-MNIST [1, 2, 3], Shapes [2, 3], and Textured MNIST [3]. In this work we consider several more complex datasets, including two relational versions of Multi-MNIST (triplet, rgb), a variation on CIFAR10 that has RGB MNIST digits in the foreground, and high-resolution CLEVR images that contain many rendered geometric objects and require lighting and shadows to be modeled.\\n\\nIdeally we would be able to apply our approach to common segmentation datasets (eg.
Pascal VOC, COCO), although in practice we find that these are still far out of reach for purely unsupervised approaches. Such datasets have been designed with access to ground-truth labels in mind and the large imbalance between the visual complexity of objects (i.e. intra-class variation) and the number of samples renders them unsuitable for our purpose. We consider CLEVR to be among the more complex multi-object datasets that are balanced in this way, and hence the feasibility of our approach on this dataset is an important step forward compared to prior work.\"}", "{\"title\": \"Reply to Reviewer 1 [3 / 3]\", \"comment\": \"> \u201cIs there any prior imposed on the scene created? Or on the way the objects should interact?\u201d\\n\\nThere is no prior imposed on the scene created (other than that it is compositional) or on the way objects are supposed to interact. \\n\\n> \u201cOn the implementation side, what MLP is used, how are its parameters validated?\u201d\\n\\nAll implementation details are available in Appendix B.1 (layer sizes, normalization, activation functions, etc.). The choice for this particular configuration of the relational mechanism was obtained after exploring several other variations that did not result in a significant improvement in FID scores. We agree that this is currently unclear in the paper and we will update the Appendix to reflect this. All other hyper-parameters listed in Appendix B.2 participate in a large-scale grid search in which we explore more than 250 different configurations, including 5 seeds. \\n\\n> \u201cThe attention mechanism has a gate, effectively adding in the original noise to the output \u2014 is this a weighted sum? If so, how are the coefficient determined, if not, have the authors tried?\u201d\\n\\nAs per equation (3) in the main text and appendix B.1, the update vector a_i is obtained as a weighted sum of the value vectors of each component. Attention weights are obtained by computing an inner product between the query vector q_i and the key vector k_i, followed by normalization and a softmax activation, which ensures that the weights in the total sum add up to 1. The update vector a_i is passed through a post-processing network (2 layer MLP - see appendix for details) before being added to z_i (without a gate). We have not tried a configuration that gates the update a_i with z_i, in order to prevent the initial sample from being ignored.\\n\\n> \u201cThe paper goes over the recommended length (still within the limit) but still fails to include some important details \u2014mainly about the implementation\u201d\\n\\nIt is our understanding that all experiment details (and other important details) are available in the paper. However, if you find that anything is unclear or missing, then we are happy to update the paper accordingly.\\n\\n[1] Azadi, Samaneh, et al. \\\"Compositional GAN: Learning Conditional Image Composition.\\\" arXiv preprint arXiv:1807.07560 (2018).\\n[2] Battaglia, Peter W., et al. \\\"Relational inductive biases, deep learning, and graph networks.\\\" arXiv preprint arXiv:1806.01261 (2018).\\n[3] Chang, Michael B., et al. \\\"A compositional object-based approach to learning physical dynamics.\\\" International Conference on Learning Representations. 2016.\\n[4] Eslami, SM Ali, et al. \\\"Attend, infer, repeat: Fast scene understanding with generative models.\\\" Advances in Neural Information Processing Systems. 2016.\\n[5] Greff, Klaus, et al.
\\\"Neural expectation maximization.\\\" Advances in Neural Information Processing Systems. 2017.\\n[6] Greff, Klaus, et al. \\\"Tagger: Deep unsupervised perceptual grouping.\\\" Advances in Neural Information Processing Systems. 2016.\\n[7] van Steenkiste, Sjoerd, et al. \\\"Relational neural expectation maximization: Unsupervised discovery of objects and their interactions.\\\" International Conference on Learning Representations. 2018.\\n[8] Yang, Jianwei, et al. \\\"LR-GAN: Layered recursive generative adversarial networks for image generation.\\\" International Conference on Learning Representations. 2017.\\n[9] Zambaldi, Vinicius, et al. \\\"Relational Deep Reinforcement Learning.\\\" arXiv preprint arXiv:1806.01830 (2018).\"}", "{\"title\": \"Reply to Reviewer 1 [2 / 3]\", \"comment\": \"Regarding (4), we would like to point out that relevant prior work concerned with multi-object images focuses on Multi-MNIST [4, 5, 8], Shapes [5, 6], and Textured MNIST [6]. In this work we consider several more complex datasets, including two relational version of Multi-MNIST (triplet, rgb), a variation on CIFAR10 that has RGB MNIST digits in the foreground, and high-resolution CLEVR images that contain many rendered geometric objects and require lighting and shadows to be modeled.\\n\\nIdeally we would be able to apply our approach to common segmentation datasets (eg. Pascal VOC, COCO) although in practice we find that these are still far out of reach for purely unsupervised approaches. Such datasets have been designed with access to ground-truth labels in mind and the large imbalance between the visual complexity of objects (i.e. intra-class variation) and the number of samples renders them unsuitable for our purpose. We consider CLEVR to be among the more complex multi-object datasets that are balanced in this way, and hence the feasibility of our approach on this dataset is an important step forward compared to prior work.\\n\\n> \\u201cThe very related work by Azadi et al on compositional GAN, while mentioned, is not sufficiently critiqued or adequately compared to within the context of this work.\\u201d\\n\\nOur approach is only marginally related to the Compositional GAN proposed by Azadi et al. [1]. Their approaches takes as input a pair of images (conditional generation) and corresponding segmentation masks that indicate which pixels of an input image belong to an object. Their framework then implements a means to compose the objects in the individual images to obtain a new image. On the other hand our generator only receives noise as input, and an important challenge is in learning how to disentangle information content belonging to different objects (identifying what objects are in the process), such that scenes may be generated in a compositional fashion. In that sense, our work and the work by Azadi et al. could be combined by using the Compositional GAN as a replacement for the \\u201ccomposition\\u201d operation in our approach, while ignoring the relational structure. One interesting observation is that the self-consistency loss from Azadi et al. could be used to learn a network that decomposes the composed image into images of individual objects, which is expected to benefit the discriminator (as per our discussion). We will update the discussion section to list this as a possibility for future work.\\n\\n> \\u201cThe choice of an attention mechanism to model relationship seems arbitrary and perhaps overly complicated for simply creating a set of latent noises. 
What happens if a simple MLP is used?\\u201d\\n\\nThe role of the attention mechanism in this work is to model object-object interactions, which motivates its choice. MHDPA is an instance of a graph network [2], and similar to other instances (e.g. the interaction function in [7], or the relational mechanism in [3]) it excels at relational reasoning. In particular, by factorizing complex relations into pairwise interactions, and through weight sharing, this mechanism is compositional and invariant to the number of objects. In prior experiments we have explored several ablations and extensions of the current relational mechanism that approach these configurations. We were unable to obtain significantly better FID scores for any of these variations, and so we settled on the mechanism proposed in [9] to model relations between objects.\"}", "{\"title\": \"Reply to Reviewer 1 [1/ 3]\", \"comment\": \"Thank you for your feedback and consideration.\\n\\nIn the following we first provide an overview of the main answers regarding your main concern that \\u201c... the ideas, while interesting, are not novel, the method not clearly motivated, and the paper fails to convince\\u201d before proceeding to a detailed discussion:\\n\\n- Compositionality is critical in reducing complex visual scenes to a set of primitives (objects) that can be re-combined freely to generate new scenes (combinatorial productivity). There is substantial empirical evidence that neural networks can benefit from this in image processing tasks.\\n- The novelty of our approach is in combining insights from unsupervised multi-object image processing (representation learning) with GANs that have proven useful in generating complex images.\\n- The datasets that we consider are non-trivial and substantially more complex compared to relevant related work that only considers variations of Multi-MNIST. Our results on CLEVR already significantly advance upon the state of the art in terms of unsupervised multi-object image-processing.\\n- The experimental evaluation is sound and the reported results are representative of the model performance: it consistently generates images as a composition of individual objects.\\n- Compared to a strong baseline of GANs we find that the generated images are of higher quality, and more faithful to the reference distribution, as confirmed by a large-scale human study.\\n\\nDetailed answers below:\\n\\nThe primary motivation of this work is to argue for object compositionality in deep generative models (and in particular GANs), which originates from two key observations. First, real-world images are to a large degree compositional, and a generative model that is suitably equipped with a corresponding inductive bias should be better at capturing this distribution. Second, by disentangling the information content corresponding to different objects at a representational level, the objects may be recovered a posteriori, unlike in unstructured models.\\n\\nIn our experiments we find that the proposed model is successful in doing both: it generates images of higher quality that are more faithful to the reference distribution (as per human evaluation), and it consistently disentangles information content belonging to different objects (visual inspection).\\n\\nWe would like to emphasize this last part, and point out that the reported images in Figures 3 & 4 are representative of how the best performing models generate the scene. 
In other words, on all datasets we consistently find that the network generates the images as a composition of individual objects. Indeed, on CLEVR we found cases in which a component generates more than one object, which is explained by the fact that the number of components is smaller than the number of objects in the scene (see also Figure 8 in Appendix A). We also find infrequent cases (primarily on CLEVR, although sometimes on CIFAR10) in which the background generator generates an additional object. This is understandable, as there are no restrictions in our approach that prevent it from doing so. However, given that in almost all other cases the network generates images as compositions of objects and background, it seems reasonable to conclude that these are due to optimization issues (as are common in GANs). After all, the compositional solution is clearly the superior choice, as is evident from our human evaluation compared to a regular GAN.\\n\\nThe proposed framework combines insights from related work in multi-object image generation and relational reasoning. There is substantial evidence that object compositionality is beneficial in a variety of image-related tasks, although purely unsupervised approaches (in particular those targeted at discovering object representations) have only been evaluated on toy datasets. GANs have been shown to scale to complex images, and the proposed approach demonstrates that a combination of these ideas is fruitful. More specifically, our contributions are (1) an implementation of recent insights from unsupervised multi-object image processing in the GAN framework, (2) strong evidence that a deep generative model may learn about objects purely through the process of generation (i.e. without a \\u201cdecoder\\u201d), (3) strong evidence that object compositionality benefits the quality and properties of generated images, and (4) strong evidence that these ideas can be scaled to more complex datasets using GANs.\"}", "{\"title\": \"An interesting method, but more experiments needed\", \"review\": \"[Overview]\\n\\nIn this paper, the authors proposed a compositional image generation method that combines multiple objects and a background into the final images. Unlike counterparts which compose the images sequentially, the proposed method infers the relationships between multiple objects through a relational network before sending the hidden vectors to the generators. This way, the method can model the object-object interactions during image generation. From the experimental results, the authors demonstrated that the proposed k-GAN can generate images with comparable or slightly better FID compared with the baseline GAN, and achieves plausible performance in the human study.\\n\\n[Strengths]\\n\\n1. This paper proposed an interesting method for compositional image generation. Unlike counterparts like LR-GAN, which generate foreground objects recurrently, this method proposes to derive the relationships between objects in parallel, simultaneously. This kind of relational modeling has been seen in many other domains, and it is nice to see that it can be applied to the compositional image generation domain.\\n\\n2. The authors tried multiple synthesized datasets, including Multi-MNIST and its variants, and CLEVR. From the visualizations, it is found that the proposed k-GAN can learn to disentangle different objects, and the objects from the background. 
This indicates that the proposed model indeed captures the hidden structure of the images through relational modeling. The human study on these generated images further indicates that the images generated by k-GAN are better than those generated by the baseline GAN.\\n\\n[Weaknesses]\\n\\n1. The main contribution of this paper lies in the proposed method for modeling the relational structure of multiple objects in the images. In the appendix, the authors presented the results for the ablated version which does not consider the relationships. As the authors pointed out, these results are a bit counterintuitive, and they concluded that FID is not a good metric for evaluating compositional generation. However, as far as I know, compositional generation can achieve much better Inception scores on CIFAR-10 in the LR-GAN paper (Yang et al.). Combining this with the results on MNIST in the LR-GAN paper, I would suspect that the datasets used in this paper are fairly simple and all methods can achieve good results without much effort. It would be good to show some results on more complicated datasets, such as face images with backgrounds, or CIFAR-10. Also, the authors did not present the qualitative results for the independent version of k-GAN. Meanwhile, they missed an ablated human study in which the relational modeling is disabled. I would like to see how the images generated without modeling relationships look to humans.\\n\\n2. Following the above comment, I think the datasets used in this paper are relatively simple. In MM and CLEVR, the foreground objects are digits or simple cubes, spheres or cylinders. The backgrounds are also simple for these two datasets. Though CIFAR10+MM has a more complicated background, it is trivial for the model to distinguish the foregrounds from the backgrounds. Again, the authors should try more complicated datasets.\\n\\n3. Though the proposed method can model the relationships between objects simultaneously, I doubt its ability to truly disentangle the foregrounds from the backgrounds. Since the background and foregrounds are both whole images, which are then combined with alpha blending, the model cannot discover the conceptually different properties of foreground and background, namely that foregrounds are usually smaller than the background and scattered at various locations. Actually, this has been partially demonstrated by Figure 4: in the last row, we can find one sphere in the background image. I tend to think the proposed model performs similarly to Kwak & Zhang's paper, without strong proof from the authors that the relational modeling plays an important role in the model.\\n\\n4. It would be nice to perform more analysis on the trained k-GAN. Given the training set, like MM or CLEVR, I am wondering whether k-GAN can learn some reasonable relationships from the datasets. That is, whether it is smart enough to infer the right location for each object by considering the others. This analysis could, for example, measure how much occlusion the generated images contain compared with the real images. For example, on CLEVR, I noticed from the appendix that the generated CLEVR images based on k-GAN actually have some severe occlusions/overlaps.\\n\\n[Summary]\\n\\nIn this paper, the authors proposed an interesting method for compositional image generation. Instead of modeling the generation process recurrently, the authors proposed to model the relationships simultaneously in the hidden vector space. 
This way, the model can generate multiple foreground objects and backgrounds more flexibly. However, as pointed out above, the paper misses some experiments, ablation studies and analyses to demonstrate the role of relational modeling in image generation. The authors need to either try more complicated images or add deeper analysis of the presented experimental results.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Interesting idea but not novel and ultimately unconvincing\", \"review\": \"This paper explores compositional image generation. Specifically, from a set of latent noises, the relationship between the objects is modelled using an attention mechanism to generate a new set of latent representations encoding the relationship. A generator then creates objects separately from each of these (including alpha channels). A separate generator creates the background. The objects and background are finally combined into a final image using alpha composition. An independent setting is also explored, where the objects are directly sampled from a set of random latent noises.\\n\\nMy main concern is that the ideas, while interesting, are not novel, the method is not clearly motivated, and the paper fails to convince. \\n\\nIt is interesting to see that the model was able to somewhat disentangle the objects from the background. However, overall, the experimental setting is not fully convincing. The generators seem to generate more than one object, or backgrounds that do contain objects. The datasets, in particular, seem overly simplistic, with backgrounds easily distinguishable from the objects. A positive point is that all experiments are run with 5 different seeds. The expensive human evaluation used does not provide full understanding and does not seem to establish the superiority of the proposed method.\\n\\nThe closely related work by Azadi et al. on compositional GAN, while mentioned, is not sufficiently critiqued or adequately compared to within the context of this work.\\n\\nThe choice of an attention mechanism to model relationships seems arbitrary and perhaps overly complicated for simply creating a set of latent noises. What happens if a simple MLP is used? Is there any prior imposed on the scene created? Or on the way the objects should interact?\\nOn the implementation side, what MLP is used, how are its parameters validated?\\n\\nWhat is the observed distribution of the final latent vectors? How does this affect the generation process? Does the generator use all the latent variables or only those with the highest magnitude? \\nThe attention mechanism has a gate, effectively adding in the original noise to the output \\u2014 is this a weighted sum? If so, how are the coefficients determined; if not, have the authors tried?\\n\\nThe paper goes over the recommended length (still within the limit) but still fails to include some important details \\u2014mainly about the implementation\\u2014 while some of the content could be shortened or moved to the appendix. 
Vague, unsubstantiated claims, such as the claim that the structure of deep generative models of images is determined by the inductive bias of the neural network, are not really explained and do not bring much to the paper.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Interesting idea but with insufficient experimental validation\", \"review\": \"The paper proposes a compositional generative model for GANs. Basically, assuming the existence of K objects in the image, the paper creates a latent representation for each object as well as a latent representation for the background. To model the relations between objects, the paper utilizes the multi-head dot-product attention (MHDPA) due to Vaswani et al. 2017. Applying MHDPA to the K latent representations results in K new latent representations. The K new representations are then each fed into a generator to synthesize an image containing one object. The K images, together with the synthesized background image, are then combined to form the final image. The paper compares the proposed approach to the standard GAN approach. The reported superior performance suggests that the inductive bias of scene compositionality leads to improved performance.\\n\\nThe method presented in the paper is a sensible approach and is overall interesting. The experimental results clearly show the advantage of the proposed method. However, the paper does have several weak points. First of all, it misses an investigation of alternative network designs for achieving the same compositionality. For example, what would be the performance difference if one replaced the MHDPA with an LSTM? Another weak point is that it is unclear if the proposed method can generalize to handle more complicated scenes, such as COCO images, as the experiments are all conducted using very toy-like image datasets.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
HJgVisRqtX
SEGEN: SAMPLE-ENSEMBLE GENETIC EVOLUTIONARY NETWORK MODEL
[ "Jiawei Zhang", "Limeng Cui", "Fisher B. Gouza" ]
Deep learning, a rebranding of deep neural network research, has achieved remarkable success in recent years. With multiple hidden layers, deep learning models aim at computing hierarchical feature representations of the observational data. Meanwhile, due to its severe disadvantages in data consumption, computational resources, parameter tuning costs and the lack of result explainability, deep learning has also drawn much criticism. In this paper, we will introduce a new representation learning model, namely the “Sample-Ensemble Genetic Evolutionary Network” (SEGEN), which can serve as an alternative approach to deep learning models. Instead of building one single deep model, SEGEN adopts a genetic-evolutionary learning strategy to build a group of unit models generation by generation, based on a set of sampled sub-instances. The unit models incorporated in SEGEN can be either traditional machine learning models or recent deep learning models with a much “narrower” and “shallower” architecture. The learning results for each instance at the final generation will be effectively combined from each unit model via diffusive propagation and ensemble learning strategies. From the computational perspective, SEGEN requires far less data, fewer computational resources and less parameter tuning effort, yet has sound theoretic interpretability of the learning process and results. Extensive experiments have been done on several real-world benchmark datasets, and the experimental results obtained by SEGEN demonstrate its advantages over state-of-the-art representation learning models.
[ "Genetic Evolutionary Network", "Deep Learning", "Genetic Algorithm", "Ensemble Learning", "Representation Learning" ]
https://openreview.net/pdf?id=HJgVisRqtX
https://openreview.net/forum?id=HJgVisRqtX
ICLR.cc/2019/Conference
2019
{ "note_id": [ "S1xpnM8SxE", "rkgRexJdAX", "ryejOUP7RQ", "B1x636G56Q", "SJxPI_zcp7", "SkeQ8n-caQ", "HkgIko9h2m", "rkxb8KF2hQ", "SJlE9Qm32m" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1545065141153, 1543135222353, 1542841971317, 1542233524944, 1542232143106, 1542229067272, 1541348061891, 1541343561416, 1541317515570 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper613/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper613/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper613/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper613/Authors" ], [ "ICLR.cc/2019/Conference/Paper613/Authors" ], [ "ICLR.cc/2019/Conference/Paper613/Authors" ], [ "ICLR.cc/2019/Conference/Paper613/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper613/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper613/AnonReviewer1" ] ], "structured_content_str": [ "{\"metareview\": \"This paper endeavors to combine genetic evolutionary algorithms with subsampling techniques. As noted by reviewers, this is an interesting topic and the paper is intriguing, but more work is required to make it convincing (fairer baselines, more detailed / clearer presentation, ablation studies to justify the claims made int he paper). Authors are encouraged to strengthen the paper by following reviewers' suggestions.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Interesting topic but requires more work\"}", "{\"title\": \"response to rebuttal\", \"comment\": \"I appreciate the authors' effort in clarifying their contributions and the explanation.\\n\\n<1.1> The solution is subsampling a large graph and the ways of subsampling have been adopted by others. As mentioned in the comments, noveling is limited.\\n<1.2> Maybe the use of the word \\\"component\\\" confuses the authors. The three \\\"components\\\" refer to subsampling, GA and ensemble learning.\\n<3> After reading your response, I still think that the key contribution of the paper is combination of subsampling, GA and simple ensemble learning. \\n<4> Using GA for global optimization has long been proposed. The use of GA for evolving neural networks has also been considered and tested with different success. And the way being adopted by the authors is not particularly novel.\\n<5> As explained, combinations of subsampling + existing network embedding methods (LINE/DeepWalk/...) + ensembling may also give good results.\\n<6> As explained, the fact that GA was proposed for global optimization is a well known fact.\\n<7> I think this is an open-ended question leaving for the authors to think about.\\n<8> \\\"GA + Ensemble together not sound\\\" Again I think the authors misunderstand some of my comments and hard to provide further comments here.\\n\\nAfter reading the rebuttal, I think I still uphold my original rating.\"}", "{\"title\": \"Response to the authors' rebuttal\", \"comment\": [\"Thanks for the response. Some of the concerns are resolved through the rebuttal. Here are some important issues:\", \"Point <2> needs to be clarified in the manuscript.\", \"Point <3> is not raised or mentioned in the manuscript! Needs clear clarification, and then ablation studies to show how they help.\", \"With respect to point <4>, the limited space notations cannot be a good reason. 
The authors have the option of providing supplementary material, which does not have any space limitations or constraints, and the authors did not use that. Furthermore, referring to external material (other papers, or papers on arXiv) in the rebuttal to answer a direct concern of the reviewers does not seem correct. If something is required for the paper to be understood and to justify the correctness of the work, it should be provided in the paper or the supplementary material!\\n- For point <6>, the authors seem not to understand what the fitness function means here. Fitness in the GA (in your setting) would be defined as evaluating a single unit model. The evaluation consists of building a unit model, fine-tuning, and evaluating. All of these are functions of the data size (or portions of it). Then, running generation by generation (k \\\\times m) adds a constant factor on top of that.\\n- For point <7>, the authors again refer to an external publication, without enough discussion and justification here.\\n- If every question raised by this reviewer (and also other reviewers) can be answered by other papers that the authors keep referring to in the rebuttal, then what is the need for publishing this new paper!?\\n\\nAlthough this reviewer appreciates the response from the authors, I still think the paper is not mature and is not ready for publication.\"}", "{\"title\": \"Author Response to Reviewer 3\", \"comment\": \"Thank you for your comments. Please find our response below. We hope it resolves your concerns and questions; please let us know if you have any others.\\n\\n<1> Section 4.2.4 on Page 5, on the loss function. We clarify that for each node we can compute its latent feature representation z with the auto-encoder model. However, graph embedding is slightly different from other existing embedding problems, since the nodes are connected. Generally, in graph embedding, we may hope the learned representation features can capture the network structure: connected nodes will have closer representations. Therefore, given two nodes, v_i, v_j, if they are connected, i.e., s_ij = 1, then we may want to project them into close regions; if they are not connected, i.e., s_ij = 0, then we will not count the loss introduced by them, i.e., projecting them to any regions will not matter any more.\\n\\n<2> The auto-encoder model, as well as the z vectors, is for the network embedding task only (with the auto-encoder as the base model, we further consider the graph connections). We use it as an example to introduce the overall SEGEN framework settings. The task and the unit models used in it can be changed to any other models, where the detailed loss function and the descriptions will be different. When it comes to CNN+MNIST or MLP+OtherDatasets, we will learn CNN unit models and MLP unit models instead, which will not contain the z vectors or the loss function in Section 4.2.4. Instead, we will have some other loss functions on the CNN output, e.g., the cross-entropy on the CNN outputs compared against the true labels.\\n\\n<3> Are the new models fine-tuned? We clarify that the new models will be fine-tuned on the dataset in the next generation, since in each new generation the first step is to learn the unit models before involving them in the genetic evolutionary part. They will be trained on the training set. \\n\\n<4> We clarify that we have the proof ready, but due to the limited space we removed many important proofs. 
We demonstrate that via GA and ensembling, we can achieve better performance. We suggest the reviewer refer to Section 4 in the recent article (https://arxiv.org/abs/1805.07500) for more information. In particular, Equation 14 in that article indicates that the learning loss is non-increasing across generations. \\n\\n<5> We clarify that our time and space cost analysis is not for one unit model; it is for the whole SEGEN model with multiple generations. The time complexity provided before Section 4.4.3 contains K as the generation number and m as the population size.\\n\\n<6> We clarify that for the SEGEN model introduced in this part, the fitness function computation is actually not the most computationally intensive task, since the unit model learning with gradient descent in Section 4.2.3 is much more time-consuming. The time cost of learning the models may grow exponentially as the model size (i.e., the input data size) increases. That is the reason we instead sample the sub-graphs in this paper, to lower the time cost compared against existing deep models (this is also our main contribution and advantage). Fitness function computation is the most computationally intensive task in Pelikan and Lobo 1999 (mentioned in your comments), mainly compared against the mutation and crossover operations. Here, the learning setting changes to deep model learning plus evolution. Compared against model learning, the GA-based evolutionary time cost is no longer significant, not to mention the fitness function computation part.\\n\\n<7> As to the convergence part, we clarify that we have the proof ready, but due to the limited space we removed many important proofs. We also suggest the reviewer refer to Section 4 in the recent article (https://arxiv.org/abs/1805.07500) for more information. In particular, Equation 14 in that article indicates that the learning loss is non-increasing across generations, and that it will converge generation by generation.\"}", "{\"title\": \"Author Response to Reviewer 2\", \"comment\": \"Thank you for your comments. Please find our response below. We hope it resolves your concerns and questions.\\n\\n<1> First of all, we need to re-clarify the contributions of this article.\\n\\n<1.1> For learning settings with extremely large-sized but small-numbered data instances (i.e., each data instance is large, but the total number of available data instances is small), training large and deep neural networks is infeasible. In this paper, we propose a solution to such a problem. \\n\\n<1.2> This paper is not really like existing deep learning works that focus on stacking components together, and it is not a good idea to interpret our contribution as \\u201cputting three known components together\\u201d. According to <1.1>, to solve the lack-of-data-instances problem, we propose to divide the large graph into small-sized sub-graphs. To ensure the sub-graphs can capture the properties of the large graph, we use different sampling methods. Meanwhile, in the training process, to ensure learning effectiveness, we also introduce a new learning framework, combining the gradient descent based algorithm with the genetic algorithm. As to the ensemble part, it is not merely because we decompose the large graph into smaller graphs. 
The main reason is that we have a group of small models, each trained on sub-graphs obtained by a sampling method, and we need to integrate the outputs of these models together.\\n\\n<2> How does the gain in performance arise? We clarify that with a small number of large data instance inputs, we cannot train effective deep models due to the lack of data instances. By decomposing the large graphs into smaller ones, we are able to learn effective model variables.\\n\\n<3> Sampling + existing embedding model + ensemble should also be useful. The answer is yes: since our framework and our new learning algorithm are useful, replacing the auto-encoder based embedding algorithm with other shallow or deep embedding models should also work fine. We suggest the reviewer read the paper again to understand what we do, so as to understand our contributions, instead of treating it as a combination of sampling + ensembling.\\n\\n<4> Originality and significance are limited: This is the first paper to introduce the genetic evolutionary neural network! Different from existing deep learning model works, we propose a novel network model trainable with a small set of extremely large data instances. We introduce a new model learning algorithm with both gradient descent based algorithms and the genetic algorithm. We assume the reviewer cannot find another paper with these two novel contributions.\\n\\n<5> The baseline methods, LINE, DeepWalk, Node2Vec, and HPE, are state-of-the-art methods in network embedding introduced in recent years.\\n\\n<6> Crossover on models. The crossover operation actually helps learn much better unit models. Based on the gradient descent algorithms, we are able to learn good unit models. However, once a unit model reaches a local optimum, it cannot be improved further. The genetic algorithm (including crossover, mutation, etc.) allows the models to jump out of local optima and achieve better performance. The generated child models will be updated with the gradient descent algorithm again to reach local optima. Will this make the models worse? The answer is that it is possible. However, in the proposed architecture, we select the top m models among the parent models and the newly generated child models. If the child models are bad, they will not be selected for the next generation. In other words, we can ensure the crossover will not degrade the learning performance of the unit models for the next generation. Readers and reviewers may refer to the recent article (https://arxiv.org/abs/1805.07500) to understand the advantages of incorporating the genetic algorithm into the model optimization part.\\n\\n<7> The method is not end-to-end. Since the genetic algorithm involves crossover and mutation, this part introduces randomness into the model, and it is impossible to train the crossover and mutation operations with the existing error-backpropagation algorithm. In other words, training the method end-to-end is infeasible.\\n\\n<8> GA + ensemble together not sound. We clarify that we have the proof ready, but due to the limited space we removed many important proofs. We demonstrate that via GA and ensembling, we can achieve better performance. 
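\\n\\nSchematically (our shorthand here; this is a sketch rather than a quotation from the article): if M^(g) denotes the generation-g population of unit models and C^(g) the child models produced by crossover and mutation, then the next generation is M^(g+1) = top_m(M^(g) union C^(g)), selected by validation fitness. Because the best model of M^(g) always remains in the candidate pool, min_{model in M^(g+1)} loss <= min_{model in M^(g)} loss, i.e., the best loss is non-increasing generation by generation.\\n\\n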
We suggest the reviewer refer to Section 4 in the recent article (https://arxiv.org/abs/1805.07500) for more information. In particular, Equation 14 in that article indicates that the learning loss is non-increasing across generations.\"}", "{\"title\": \"Author Response to Reviewer 1\", \"comment\": \"Thank you for your comments. Please find our response below. We hope it resolves your concerns and questions; please let us know if you have any others.\\n\\n<1> We clarify that we introduce the model chromosomes as the variables of the models. You can refer to the last two sentences in Section 4.2.1 as well as Section 4.2.5. We also paste the sentences here.\\n4.2.1: Formally, the variables involved in each unit model, e.g., M_i^1, can be denoted as a vector \\u03b8_i^1, which covers the weight and bias terms in the model (and which will be treated as the model genes in the evolution to be introduced later).\\n4.2.5: For the k-th pair of parent unit models (M_i^1, M_j^1)_k \\u2208 P^1, we can denote their genes as their variables \\u03b8_i^1, \\u03b8_j^1 respectively (since the differences among the unit models mainly lie in their variables), which are actually their chromosomes for crossover and mutation.\\n\\n<2> We clarify that we provide the computational analysis in Section 4.4, including a performance analysis, a space and time cost analysis, and an analysis of the advantages.\"}", "{\"title\": \"Evolutionary part is not clear\", \"review\": \"The paper introduces the Sample-Ensemble Genetic Evolutionary Network, which adopts a genetic-evolutionary learning strategy to build a group of unit models. The explanation of the evolutionary network part is not sufficient. For example, there is no clear explanation of how chromosomes are defined. Also, a detailed analysis of the computational aspects is needed.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}", "{\"title\": \"Using Subsampling + Genetic Algorithm for Network Embedding\", \"review\": \"This paper proposes to subsample a large network into sub-networks, learn a network model (autoencoder) from each subgraph, perform crossover and mutation operations over the network parameters of different model pairs, and combine the latent representations following the ensemble idea.\\n\\nThe paper is clearly presented. Originality and significance are limited. Putting the three known components (subsampling, genetic algorithm, and ensembling) together seems to be the main contribution of this paper. However, the ways of doing the subsampling, performing the crossover and mutation operations, and doing the ensembling are relatively straightforward applications of them. That combining them obtains better results is not surprising. And according to the experimental results, it is not clear how the gain in performance arises and to what extent each of the three components contributes. For instance, I would guess that the combination of subsampling + existing network embedding methods (LINE/DeepWalk/...) + ensembling may also give good results. Currently, the performance comparison is done with the original forms of LINE and DeepWalk. That makes the empirical results not very convincing in explaining the key strengths of this work.\\n\\n+ve:\\n1. The paper is clearly presented.\\n2. The design is a reasonable one.\\n3. 
A number of benchmark datasets are used for the evaluation.\\n\\n-ve:\\n1. The originality and significance are limited.\\n2. The performance comparison should be done with reference to more competitive candidates, as explained above.\\n3. The nodes of different sub-networks are essentially projected to different embedding spaces. The validity and interpretation of performing the crossover operation on two different models (two different embedding spaces) will need more justification.\\n4. The proposed methodology is not end-to-end. The ensembling being evaluated is just simple addition.\\n5. The paper claims that \\\"The unit learning model, genetic algorithm and ensemble learning can all provide the theoretic foundation for SEGEN, which will lead to sound theoretic explanation of both the learning result and the SEGEN model itself\\\". Each component being individually sound does not imply that the way they are combined is sound. Currently, I cannot see the uniqueness of this particular combination.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Interesting topic but several issues with the paper\", \"review\": \"This manuscript introduces SEGEN, a model based on Evolutionary Computation for building deep models. Interestingly, the authors define deep models in a different way. Instead of stacking several hidden layers one after the other (as in traditional deep learning models), SEGEN uses the idea of generations in evolutionary models (Genetic Algorithms, or GA) and puts the unit models in the successive generations into layers, i.e., the \\u201cevolutionary layers\\u201d. Each layer then performs the validation, selection, crossover, and mutation operations, as in GA. Another interesting point of the proposed method is that the choice of unit models in SEGEN can be traditional machine learning or recent deep learning models.\\nThe paper touches on an interesting topic and proposes a sound method. However, there are several issues with the paper. There are several ungrounded and untested claims, as well as many unclear points in the method.\\n-\\tOn page 5, Section 4.2.4, the authors introduce the loss function used to define the fitness for the evolutionary model. It is not clear why they use the difference between the latent representations of the autoencoders (z) from pairwise nodes to define the loss. There is no motivation or discussion for this. Two different representations of two nodes may both be good (e.g., in terms of classification of the data), but they do not necessarily have to be identical. \\n-\\tGiven the loss defined in Section 4.2.4, it is not clear how the authors ran their model for MNIST and other datasets, for which they used CNN and MLP unit models. In CNNs and MLPs there is no latent representation z.\\n-\\tBased on the model descriptions in Section 4.2 (and its subsections), the proposed method transfers the learned models in previous generations to the next ones. But there is no explanation of whether the new models are again fine-tuned on the data. For instance, take the autoencoders: for two different unit models, the cross-over operator fuses the variables (weights and biases) from the two selected models to create an offspring. There is no guarantee that the new autoencoder model works properly on the same dataset. 
As a na\\u00efve example, if there are correlated and redundant features in the data, different autoencoders may separately focus on one or some of these features. Fusing the weights of the two autoencoders (built upon different aspects of the data) may well ruin the whole model. \\n-\\tThere are four claims in the paper on the advantages of the proposed model compared to other deep learning algorithms. None of these claims are discussed in depth or at least illustrated experimentally. \\n*** Less Data for Unit Model Learning. The authors could have reported the number of variables used in each model in the experiments. It is important to see how many more variables a traditional deep model needs in order to reach results comparable to SEGEN. \\n*** Less Computational Resources. The model operates over several generations, and in each generation many unit models are built. It is neither clear nor fair to claim that it can occupy less space or have lower time complexity than a regular GCNN or MLP.\\n*** Less Parameter Tuning. Again, experiments could clarify this issue.\\n*** Sound Theoretic Explanation. The authors only refer to (Rudolph 1994) for the performance bounds of their model and claim that since they are using GA they are better than other deep learning models. However, performance bounds for GA models are very loose. \\n-\\tTo calculate the computational complexity of the model, the authors analyzed the time for learning one unit model. However, in GA models, the complexity is calculated using bounds on the number of times the fitness function is called, since the fitness function is the most computationally intensive task (please see: Pelikan and Lobo 1999, \\u2018Parameterless Genetic Algorithm: A Worst-case Time and Space Complexity Analysis\\u2019). \\n-\\tOne of the main pitfalls of GAs and evolutionary algorithms is that they may lead to premature convergence. This is very common, especially in the presence of trap functions, such as the non-convex functions that real-world problems deal with (please see: Goldberg et al. 1991, \\u2018Massive Multimodality, Deception, and Genetic Algorithms\\u2019). There are no discussions/experiments on how SEGEN may overcome premature convergence, or even whether it converges at all.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
rJlEojAqFm
Relational Forward Models for Multi-Agent Learning
[ "Andrea Tacchetti", "H. Francis Song", "Pedro A. M. Mediano", "Vinicius Zambaldi", "János Kramár", "Neil C. Rabinowitz", "Thore Graepel", "Matthew Botvinick", "Peter W. Battaglia" ]
The behavioral dynamics of multi-agent systems have a rich and orderly structure, which can be leveraged to understand these systems, and to improve how artificial agents learn to operate in them. Here we introduce Relational Forward Models (RFM) for multi-agent learning, networks that can learn to make accurate predictions of agents' future behavior in multi-agent environments. Because these models operate on the discrete entities and relations present in the environment, they produce interpretable intermediate representations which offer insights into what drives agents' behavior, and what events mediate the intensity and valence of social interactions. Furthermore, we show that embedding RFM modules inside agents results in faster learning systems compared to non-augmented baselines. As more and more of the autonomous systems we develop and interact with become multi-agent in nature, developing richer analysis tools for characterizing how and why agents make decisions is increasingly necessary. Moreover, developing artificial agents that quickly and safely learn to coordinate with one another, and with humans in shared environments, is crucial.
[ "multi-agent reinforcement learning", "relational reasoning", "forward models" ]
https://openreview.net/pdf?id=rJlEojAqFm
https://openreview.net/forum?id=rJlEojAqFm
ICLR.cc/2019/Conference
2019
{ "note_id": [ "SklZdko5lV", "BylRYteJgE", "rke5oY6H0X", "ryglq4An67", "rJlz7QR26Q", "SygcOsjn67", "rJe9bjs36X", "ryeSysshT7", "ryg9NANi6Q", "BJxryqVsTQ", "H1gmKi-56m", "Syes3cZcp7", "Byxww9-qT7", "rygAtEgF6Q", "HJeL-oyF6Q", "SkeCbN1KpX", "rklQ8hTOT7", "S1l7Eh6d6m", "SyemfhaO6Q", "r1xuajaOT7", "H1lg2cadaX", "Byx8_qTuaX", "rJeJTtTd6X", "HkxhF8j8TX", "Hyx9Q__Q67", "HylpLcxbTm", "BJgBS8y0nX" ], "note_type": [ "comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review", "official_review" ], "note_created": [ 1545412456963, 1544649094064, 1542998434092, 1542411400003, 1542411033680, 1542400881796, 1542400770490, 1542400733277, 1542307378271, 1542306268663, 1542228858892, 1542228659413, 1542228575108, 1542157446322, 1542155005623, 1542153222068, 1542147146799, 1542147115382, 1542147083020, 1542147007745, 1542146728017, 1542146669827, 1542146487285, 1542006403913, 1541797922419, 1541634645311, 1541432892632 ], "note_signatures": [ [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper612/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper612/Authors" ], [ "ICLR.cc/2019/Conference/Paper612/AnonReviewer4" ], [ "ICLR.cc/2019/Conference/Paper612/AnonReviewer4" ], [ "ICLR.cc/2019/Conference/Paper612/Authors" ], [ "ICLR.cc/2019/Conference/Paper612/Authors" ], [ "ICLR.cc/2019/Conference/Paper612/Authors" ], [ "ICLR.cc/2019/Conference/Paper612/AnonReviewer4" ], [ "ICLR.cc/2019/Conference/Paper612/AnonReviewer4" ], [ "ICLR.cc/2019/Conference/Paper612/Authors" ], [ "ICLR.cc/2019/Conference/Paper612/Authors" ], [ "ICLR.cc/2019/Conference/Paper612/Authors" ], [ "ICLR.cc/2019/Conference/Paper612/AnonReviewer4" ], [ "ICLR.cc/2019/Conference/Paper612/AnonReviewer4" ], [ "ICLR.cc/2019/Conference/Paper612/AnonReviewer4" ], [ "ICLR.cc/2019/Conference/Paper612/Authors" ], [ "ICLR.cc/2019/Conference/Paper612/Authors" ], [ "ICLR.cc/2019/Conference/Paper612/Authors" ], [ "ICLR.cc/2019/Conference/Paper612/Authors" ], [ "ICLR.cc/2019/Conference/Paper612/Authors" ], [ "ICLR.cc/2019/Conference/Paper612/Authors" ], [ "ICLR.cc/2019/Conference/Paper612/Authors" ], [ "ICLR.cc/2019/Conference/Paper612/AnonReviewer4" ], [ "ICLR.cc/2019/Conference/Paper612/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper612/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper612/AnonReviewer1" ] ], "structured_content_str": [ "{\"comment\": \"Enjoyed reading this work and congrats on the acceptance! Also wanted to bring your attention to a pair of papers that use relational interactions between agents in cooperative and competitive environments to accelerate multi-agent learning that you might find relevant:\\n\\nEvaluating Generalization in Multiagent Systems using Agent-Interaction Graphs\\nAditya Grover, Maruan Al-Shedivat, Jayesh K. Gupta, Yura Burda, Harrison Edwards\\nAAMAS 2018 (short paper)\", \"link\": \"https://arxiv.org/abs/1806.06464\", \"title\": \"Great work! 
A couple of missing references\"}", "{\"metareview\": \"Pros:\\n- interesting application of graph networks for relational inference in MARL, allowing interpretability and, as the results show, increasing performance\\n- better learning curves in several games\\n- somewhat better forward prediction than baselines\\n\\nCons:\\n- perhaps some lingering confusion about the amount of improvement over the LSTM+MLP baseline\\n\\nMany of the reviewers' other issues have been addressed in revision and I recommend acceptance.\", \"confidence\": \"3: The area chair is somewhat confident\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Good paper showing the benefit of relational representations in a multi-agent setting\"}", "{\"title\": \"Response to AnonReviewer1 -- 2\", \"comment\": \"Hello,\\n\\nThank you again for taking the time to review for ICLR and for your insightful feedback.\\n\\nFollowing your suggestion, we have updated the text to include a more thorough discussion of Rabinowitz et al. 2018 in the Related Work section. We highlighted that Rabinowitz et al.\\u2019s ToM-net focuses on single-agent RL and on entire behavioral motifs, as opposed to an entity-relation interpretable model of each action and event. Similarly, we pointed to Fig. 7 more prominently in the text; this figure contains an additional experiment showing that onboard RFM modules accelerate agents\\u2019 learning to a larger extent than a non-relational MLP + LSTM based module. Finally, we included the reasoning behind our choice of model-performance metric in the main text and directed the reader to Fig. 10 where, for completeness, we report the next-action classification accuracy of each model.\\n\\nWe think these additions will help the reader put our work in the context of existing methods, appreciate the problems in which relational models might be a preferred choice, and obtain more complete performance assessment measures.\\n\\nWith only two days left in the rebuttal period, we wanted to make sure we have dispelled your concerns. Please do let us know if anything else needs to be further clarified.\\n\\nThank you\"}", "{\"title\": \"Re: Re: Response to AnonReviewer4 2/3 2/2\", \"comment\": \"My concern is that the paper states that \\\"static entities (i.e., apples, stags, coins, and tiles) were represented by vertices\\\" and that \\\"Edges connected all non-agent entities to all agents as well as agents to each other.\\\" So unless I misunderstood, there will still be edges from the stag to a2 and vice versa (not to mention from agent 2 to all the other static entities). Won't this make the pruned graph dependent on s_a2 in a non-trivial way?\"}", "{\"title\": \"Re: Re: Response to AnonReviewer4 2/3 1/2\", \"comment\": \"I see. So it seems that in this case, \\\"Y\\\" represents whether or not the stag is about to be consumed. I believe this clarifies the misunderstanding.\\n\\nThis seems like a very hand-picked event, but it makes sense to manually choose events like this to validate the hypothesis. It would be interesting to reverse-engineer this metric to develop a method: use these changes in activation to automatically find interesting events. This could be useful, for example, if you wanted to perform \\\"multi-agent policy distillation.\\\" One could use changes in magnitude for attention mechanisms, for supervising intermediate layers, or for prioritized experience replay.\\n\\nAfter this discussion, it is more clear to me how the ideas of this paper can be useful (e.g. 
for ideas like the ones listed in the previous paragraph). I have updated my score accordingly. I feel that this \\\"collaboration metric proposal\\\" is really the main contribution of the paper, and I would suggest emphasizing that more in the abstract and the last paragraph of the introduction. I suspect a number of first-time readers would still find the language vague and the contribution unclear. However, at this point, these are more suggestions than \\\"reviewer critiques.\\\"\"}", "{\"title\": \"Re: Response to AnonReviwer4 3/3\", \"comment\": \"8a) Thank you for your suggestion; we have changed the paragraph in our introduction that outlines our contribution as follows:\\n\\n[...] Perhaps more importantly, they produce intermediate representations that support the social analysis of multi-agent systems: we use our models to propose a new way to characterize what drives each agent's behavior, track when agents influence each other, and identify which factors in the environment mediate the presence and valence of social interactions. [...]\\n\\nSimilarly, we modified the motivation of our experiments in Sec. 2.2.2 as follows:\\n\\nWe propose the Euclidean norm of a message vector (i.e., $\\\\|e'_k\\\\|$) as a measure of the influence a sender entity, $v_{s_k}$, has on a receiver, $v_{r_k}$. We validate this suggestion in Fig.~\\\\ref{fig:edges} (top row), where [...]\\n\\n8b) Thank you, we are happy to hear you recognize the value of our work, and we are glad you agree that Figure 9 (sorry about the typo!) conveys the information that only relative changes, and not raw values, are meaningful.\\n\\n8c) Thank you for the thoughtful suggestion. We\\u2019ve given similar ideas a lot of thought and can share some insights.\\n\\nFirst, we penalize edge activations during training, precisely to encourage the edges to only convey information useful for prediction. As a result of this, if the state s_a2 is not predictive of the action a_a1, the model will learn to suppress the edge activation. In other words, if the derivative you propose is small, the edge norm should be small.\\n\\nIn the ideal case, we would have ground-truth data on the effect of s_a2 on a_a1 to validate this. While we do not have ground-truth data for agent-agent influence (see below), we do have them for object-agent influence (Fig. 3, top row). In this case, we find that objects (stags and apples) with large edge norms are informative about the direction that the agent subsequently travels, while objects with small edge norms are not informative in the same way. Thus the magnitudes of these edges are good proxies for measuring object-agent influence.\\n\\nBeing able to validate that the agent-agent edge norm correlates with this derivative turns out to be technically complex, even in these relatively small environments. For one, in the case of the apples and stags above, the ground-truth influence of objects on agents can be estimated through their attractive effect on the agent (i.e. one knows the action to measure correlation against), while the effect of one agent\\u2019s state on another agent\\u2019s action can be much more intricate (e.g. I\\u2019m going for this apple, so you\\u2019d better not). Another complexity is that the derivative measure you propose needs to be integrated over plausible alternative values for s_a2. Choosing a space of counterfactuals, or averaging over a proposal distribution q(s_a2), brings its own challenges. 
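\\n\\nTo sketch one purely hypothetical instantiation (our notation here; nothing below appears in the paper): one could score the influence of agent 2 on agent 1 at a given time step as influence(a2 -> a1) = E_{s'_a2 ~ q}[ D_KL( p(a_a1 | s_a1, s_a2) || p(a_a1 | s_a1, s'_a2) ) ], where p denotes the model\\u2019s predicted next-action distribution, and then test whether this score correlates with the corresponding edge norm. The choice of q is exactly where the difficulty above lies.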
We\\u2019re actually working on a similar idea in spirit at the moment (though it\\u2019s out of the scope of this work), and we look forward to sharing our results in a later paper when they are ready.\"}", "{\"title\": \"Re: Response to AnonReviewer4 2/3 1/2\", \"comment\": \"Hi, thank you for getting back to us so quickly and for working with us to make sure we put our work out there in a timely fashion.\\n\\nMaybe we finally understand where the confusion comes from: our choice of example with Mary and John and their presence had spurious consequences, and we sincerely apologize for this. The variable Y in that example corresponds to Mary\\u2019s presence/absence, whereas the variable Y in the paper itself corresponds to the *state* of the agent (e.g. its position, previous action, ...). The fact that you deduce from \\u201call agents are always present in this environment\\u201d the conclusion that \\u201cY is always set to 1\\u201d (which is not actually the case) might stem directly from this specific choice of example. For the sake of avoiding further misleading statements, let us ground the discussion back in the game we considered. We hope these observations will bring things back into focus:\\n\\n1. The \\u201cstate of the teammate\\u201d, s_a2, does not denote its presence or absence, but rather all of the agent node attributes (position and last action). When we remove s_a2 from the equation, we do not \\u201cremove the agent\\u201d, but simply make information about its position and last action unavailable to the model, M. In light of this, \\u201caveraging over Y\\u201d makes sense even when \\u201cthe agent is always there\\u201d, because Y contains the agent\\u2019s position and last action, rather than an indicator variable denoting its presence.\\n\\n2. During training, our model has access to examples from all situations (both when using the full graph and the pruned graph). However, when computing the average \\u201cEffect on Return\\u201d in Fig. 4, we restrict ourselves to situations when a stag was *consumed*. In this case the model with the full graph knows that both agents are on the stag (or near to it, in the time steps leading to the consumption event); the full-graph model correctly predicts that both agents will collect a reward of 10. On the other hand, the model with the pruned graph does not *know for sure* that the second agent is near or on the stag, so it predicts a lower reward. We measure the difference between these two estimates (one with observed s_a2 and one averaging over an implicit posterior on s_a2) and call it the \\u201cvalue of the actual social context\\u201d.\\n\\n3. You are absolutely correct that if we were to average over *all* situations, we would find that the difference between the two estimators is close to 0 (in practice, we find that this difference is less than 5% of the total reward collected by the agents). However, in the analysis of this section, we are not averaging over all situations. Instead, we average only over a specific set of situations: for Fig. 4, middle and right panels, we only consider times when a stag is about to be *consumed*.\"}", "{\"title\": \"Re: Response to AnonReviewer4 2/3 2/2\", \"comment\": \"Q) I also fail to see why removing some edges [...]\\n\\nThe RFM indeed receives both s_a1 and s_a2 as input, but it\\u2019s computing both R_a1 and R_a2 at the same time. 
The signal path for the pruned-graph estimator is such that s_a1 information is only routed to predict R_a1, and s_a2 information is only routed to predict R_a2. \\n\\nThis can be checked by looking at the Graph Net formulas in Eq. 1 in the paper. Suppose, for simplicity, that there are only 3 vertices:\\n\\nA1 (agent 1) with attributes (x, y, action, N/A)\\nA2 (agent 2) with attributes (x, y, action, N/A)\\nS1 (stag 1) with attributes (x, y, N/A, available/unavailable),\\n\\n2 edges:\\nE_1: receiver A1, sender A2, no attributes\\nE_2: receiver A1, sender S1, no attributes,\\n\\nand that globals are empty.\\n\\nWe want to use our graph net to predict the return of agent 1, denoted as A_1\\u2019 (to highlight that this will be an updated node attribute). From Eq. 1 in the paper:\\n\\nE_1\\u2019 = PHI_E(A_1, A_2)\\nE_2\\u2019 = PHI_E(A_1, S_1)\\nA_1\\u2019 = PHI_V[PHI_E(A_1, A_2) + PHI_E(A_1, S_1)]\\n\\n(In the last line we used the fact that RHO_E--->V is just a sum of the PHI_Es over the senders A_2 and S_1.)\\nIf we remove edge 1 (receiver A_1, sender A_2), then\\n\\nA_1\\u2019 = PHI_V[PHI_E(A_1, S_1)]\\n\\nand A_2 (i.e., s_a2) does not enter the calculation.\\n\\nWe have also fixed some confusing notation in Eq. 2 and 3 to highlight that M is a function approximator rather than a probability distribution.\"}", "{\"title\": \"Re: Response to AnonReviwer4 3/3\", \"comment\": \"8a) I see. I feel that this is a bit subjective, but I recognize that the community is still trying to decide how to best measure coordination. To the extent that this is a proposed way of doing that, I see the merit of the work. I think the paper would be much better received if it clearly presented the method as a *proposed* way of measuring coordination (and motivated the experiments as a way of demonstrating why this is a good proxy), rather than presenting it as a method that solves the nebulous \\\"coordination measurement\\\" problem.\\n\\n8b) While the relative magnitude between events may be interesting, relative magnitude between neurons would be more insightful, but there's an overarching concern that it is not even clear if hidden unit activation magnitude is meaningful. Figure 9 (9, not 8, right?) does convey this information to a satisfying degree. \\n\\nI'll end this point by making the following suggestion: what we seem to care about is d a_a1 / d s_a2, where a_a1 = action of agent 1 and s_a2 = state of agent 2. If the paper showed that the magnitude of the hidden unit was more correlated with d a_a1 / d s_a2 than the magnitude of any other hidden unit, then this would seem to give even stronger, more direct evidence for the claim that \\\"the magnitude of this neuron is a good proxy for measuring coordination.\\\"\\n\\n8c) Great, thank you for the change.\"}", "{\"title\": \"Re: Response to AnonReviewer4 2/3\", \"comment\": \"I'll try to be more direct.\\n\\\"it\\u2019s directly modelling p(X), where Mary could be 0 or 1 (but it never gets to know what the value truly is).\\\"\\nSimply because a random variable is unknown doesn't make it 0 or 1 with probability 0.5.\\n\\nPut another way: let \\\"Y\\\" be the random variable that's 1 if the other agent is there and 0 otherwise. As you said, \\\"all agents are always present in this environment.\\\" Therefore, Y is always set to 1. Simply because Y is unobserved doesn't make Y either 0 or 1 with random probability. If Prune Graph were actually modelling E[X], then you would need to train Prune Graph on data where Y = 0 or Y = 1. 
However, you always train Prune Graph on data where Y = 1 (because \"all agents are always present\"). Therefore, Prune Graph models E[X | Y = 1]. Ignoring inputs doesn't change the input distribution; it results in modeling the marginal distribution (with the ignored input being marginalized out). I'm not sure how else to say this, but maybe there is still something that I am missing...\n\nFrankly, given that most of my other concerns have been addressed, I'm inclined to raise my score if Section 2.2.3 is completely removed, or I realize that I am completely mistaken. As it stands, this section seems to be wrong: the method does not seem to be doing what it's claiming. Prune Graph models E[X | Y = 1], and not E[X], so one cannot compare E[X | Y = 1] with E[X].\n\nI also fail to see why removing some edges from a graph network (a modeling/architecture choice) is equivalent to ignoring the input. The RFM still receives s_a2 as input. (I do not think this concern has been addressed by the authors.)\n\nI cannot, in good conscience, change my rating to an \"accept\" (even marginally) for a paper with a section that I feel is technically incorrect.\"}", "{\"title\": \"Re: Response to AnonReviewer4 3/3\", \"comment\": \"Thank you for raising these concerns.\n\n8a) Learning coordinated behavior is a goal of multi-agent reinforcement learning. Apart from the coarse game-theoretic definition, coordinated behavior is abstractly defined as inter-dependent behavior towards a common objective. Yet this does not give clear direction on how to actually *measure* whether a set of policies acts in a coordinated manner or not. The way we structure the RFM model allows us to do this directly. This can be useful to any researcher who wants to design either tasks or algorithms that lead to coordinated behavior: we now have a way to measure whether they got it or not. In turn, this will assist the MARL community in building better algorithms, and better tasks. \n\nHere\u2019s a concrete example. Suppose we want to design the next generation of house-cleaning robots. In particular, we\u2019re designing a pair of robots, one of whom operates a dustpan, and one operates a brush. As engineers designing this system, we would want to ensure that the agents are actually learning a coordinated solution to the dusting problem. Our method could allow us to identify whether a particular pair of policies is actually achieving this by fitting a powerful non-linear predictive model that goes well beyond simple correlation and co-occurrence. Moreover, we could use it to find the situations for which the robots\u2019 behavior is most inter-dependent, and use this to design training environments or curricula to increase the learning pressure for such behavior.\n\nWe have added some clarifying description in the introduction of the paper:\n\n\u201cAlongside the challenges of learning coordinated behaviors, there are also the challenges of measuring them. In learning-based systems, the analysis tools currently available to researchers focus on the functioning of each single agent, and are ill-equipped to characterize systems of diverse agents as a whole. Moreover, there has been little development of tools for measuring the contextual inter-dependence of agents' behaviors in complex environments, which will be valuable for identifying the conditions under which agents are successfully coordinating.\u201d\n\n8b) You're correct that the absolute magnitudes (e.g. 
2.7 and 3.2) are not meaningful in and of themselves. What matters are the relative values. The change in magnitude is certainly statistically significant, as can be seen from the error bars. The edges from stags to agents change even more than this when the stags respawn, as shown in Figure 3b.\n\nWith respect to your suggestion that it's not surprising that when the input changes, the activation of hidden units changes too: this is not a given! In particular, we now show in Figure 8 that these results do not show up when stags are not relevant for coordination (i.e. their presence does not mediate coordinative behaviors). \n\nWe added a comment in Section 2.2.2 to the effect that the raw numbers are not intrinsically meaningful, and that one should instead consider comparisons between edge norm values or rank order of edge norm values.\n\n8c) We agree that \"explains\" and \"how\" claim too much ground. We have changed this sentence to \"Our models enable a characterization of what drives each agent's behavior, tracking when agents influence each other, and identifying factors in the environment which mediate the presence and valence of social interaction.\". With respect to your last point, the analyses in Figures 3 and 9 study the presence, and Figure 4 studies the valence.\"}", "{\"title\": \"Re: Response to AnonReviewer4 2/3\", \"comment\": \"Thank you for taking the time to clarify your question.\n\n4) This statement is incorrect: \"the prune graph is only trained on data collected when the other agent is present, correct?\". The pruned-graph estimator and the full-graph estimator are *both* trained on *all* data. The pruned graph just never gets to know the state of the other agent when making its predictions.\n\nThere\u2019s perhaps a subtlety here, in case you missed it before: all agents are always present in this environment; the difference between the full-graph estimator and the pruned-graph estimator is that one is given access to the state s_a2, and the other is not. In the John/Mary case, the state s_a2 corresponds to the presence/absence of Mary. So your statement that the prune graph \u201cseems to directly model E[X | Mary=1]\u201d is incorrect: it\u2019s directly modelling p(X), where Mary could be 0 or 1 (but it never gets to know what the value truly is). \n\nAs more detail, both the pruned-graph estimator and the full-graph estimator are produced by a single graph neural network. Thus M is the same model in the two equations. We only have one neural network, which is trained to predict agent 1's return both using the full graph (i.e. knowing the actual state of a2) and the pruned graph (i.e. not knowing the actual state of a2). During training we randomly drop out all edges between teammates. At test time, we can then compute the full-graph estimate by using all edges, and the pruned-graph estimator by dropping out edges between teammates. \n\nWe provide this information in the paragraph beginning \u201cWe ran this experiment\u2026\u201d. However, we realize from your responses that we did not communicate this clearly, so we have re-written this section by including the following:\n\nWe note that within this setup, both the pruned-graph estimator and the full-graph estimator are produced by a single graph neural network. This network is trained to predict agent 1's return both using the full graph (i.e.\\ knowing the actual state of $a_2$) and the pruned graph (i.e.\\ not knowing the actual state of $a_2$). 
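The single-network training scheme being described here — one model seeing both graph variants via random edge dropout — can be sketched in miniature. The linear "network", feature encoding, target function, and learning rate below are invented purely for illustration; they are not the paper's implementation.

```python
# Runnable toy version: train one model M on both full and pruned inputs
# by randomly masking the teammate "edge" (reduced here to a feature mask).
import numpy as np

rng = np.random.default_rng(0)
w = np.zeros(3)                          # toy linear stand-in for M

def M(s_a1, s_a2, keep_edge):
    x = np.array([s_a1, s_a2 * keep_edge, keep_edge])
    return w @ x

for _ in range(2000):                    # training with random edge dropout
    s_a1, s_a2 = rng.normal(size=2)
    r = 2.0 * s_a1 + 1.0 * s_a2          # toy ground-truth return
    keep = float(rng.random() < 0.5)
    x = np.array([s_a1, s_a2 * keep, keep])
    w = w + 0.05 * (r - w @ x) * x       # one SGD step on squared error

# At test time the same weights yield both estimators.
print(M(0.3, -0.8, 1.0), M(0.3, -0.8, 0.0))
```

Dropping the edge during training keeps both input variants in-distribution, so a single set of parameters serves as both the full-graph and pruned-graph estimator.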
During training we randomly drop out edges between teammates (to ensure that both full graph and pruned graph are in-distribution for $M$). At test time, we then compute the full-graph estimate by using all edges, and the pruned-graph estimator by dropping out edges between teammates.\n\nnew question) In the pruned graph, there are no indirect paths between the two agents through the graph. In other words, there are no ways of sending information from s_a2 (from the current input or in the past) through other entities to the node corresponding to a1. There does remain the possibility that the network could infer *something* about s_a2 from the environment state (which we denote z in the main text). For example, if an apple at a particular location was consumed 5 time steps ago, then the network could effectively determine that agent a2 was within a 5-step radius of that location. We make this comment in the footnote at the bottom of pg 8.\"}", "{\"title\": \"Re: Response to AnonReviewer4 1/3\", \"comment\": \"Thank you for your question!\n\nWe tried to match models for capacity (with the exception of NRI which has about 3x more parameters than other models because of its autoencoder connectivity estimator). The raw number of parameters (as reported by the TensorFlow checkpoint loader) were as follows for the RFM, FeedForward and MLP + LSTM:\n\n(RFM, CoopNav) ---> 61194\n(RFM, CoinGame) ---> 63134\n(RFM, StagHunt) ---> 63134\n\n(FeedForward, CoopNav) ---> 60240\n(FeedForward, CoinGame) ---> 65140\n(FeedForward, StagHunt) ---> 65140\n\n(MLP + LSTM, CoopNav) ---> 59307\n(MLP + LSTM, CoinGame) ---> 66718\n(MLP + LSTM, StagHunt) ---> 71803\n\nWe don\u2019t think a discrepancy in the number of parameters of less than 3% can account for the difference in performance we observe.\n\nWe have added a comment to the paper (\u201cWe matched all models for capacity\u2026\u201d).\"}", "{\"title\": \"Re: Response to AnonReviewer4 3/3\", \"comment\": \"5-7) Thank you for the clarification and update.\n8) Perhaps this is a bit subjective and still ill-defined for the multi-agent learning community, but it's still not clear to me what \"measuring the emergence of coordinated behavior\" means or how the analysis provided does this. What may help me (and other readers) understand the significance of the work is to include specific, concrete uses for these correlations.\n\nA more grounded concern: the numbers in Figure 3 are not compared to anything. For example, in Figure 3c, left: Is a change in magnitude from 2.7 to 3.2 large? How does this compare to the edge from teammate to apple? From stag to teammate? It's not surprising that when the input changes, the magnitude of some hidden layer of a graph network changes. What would be interesting is if the magnitude of the hidden units only changed when the stag appeared and changed to a much larger degree than any other hidden unit.\n\nRegardless, the language around the analysis seems a bit overstated. \"Our models explain what drives each agent\u2019s behavior, track how agents influence each other, and what factors in the environment mediate the presence and valence of social interactions.\" The use of the word \"explains\" may be why Reviewer #3 is asking about causality, since this model seems to only construct correlations. 
It's also not clear to me why the analysis shows \\\"how agents influence each other.\\\" Nor do I see an explanation for \\\"what factors in the environment mediate...valence of social interactions.\\\" It seems the only analysis for this looked at the presence of an edge in the RFM (i.e. prune vs full graph).\\n\\n9) Okay.\"}", "{\"title\": \"Re: Response to AnonReviewer4 2/3\", \"comment\": \"3) Thank you for including this Figure! The update addresses this concern.\\n4) I think I should clarify my concern, because I don't think I conveyed it clearly. Let's say X is a random variable representing John's heart rate. I understand that if E[X | Mary=1] > E[X], then this suggests that the presence of Mary tends to increase X. However, my point is that the prune graph does not model E[X]; it seems to directly model E[X | Mary=1]. To use the analogy: if you trained the \\\"simple model\\\" on data only when Mary was present, then you would expect the two models to estimate the same BMP when Mary is present.\\n\\nTo tie it back to the paper, \\\"Mary being present\\\" is equivalent to \\\"the other agent being present.\\\" Unless I'm mistaken, the prune graph is only trained on data collected when the other agent is present, correct? The paper states, \\\"In practice, this latter estimate\\ncan be obtained by removing the edge connecting the two agents from the input graph\\\" which to me suggests that the presence of this one edge is the only difference (and so they train on the same data, i.e. when the other agent/Mary is always present).\\n\\nIt seems like the only way to actually get a model of E[X] is to get data when Mary isn't there, i.e. collect data when the other agent is not there. It wouldn't be particularly surprising to find that a value function trained on this data would predict lower average values, but if the hypothesis is that another agent's presence seems helpful, it seems necessary to compare when the other agent is present with when the agent is not present.\\n\\nAnother (new) question: Why is removing the edge sufficient for removing knowledge about agent 2? Do you mean that all edges from s_a2 are removed? It seems like the RFM still has access to both s_a1 and s_a2 when making the predictions. I apologize for not including this question in the original review.\"}", "{\"title\": \"Re: Response to AnonReviewer4 1/3\", \"comment\": \"1) I did not realize that the policies were stateful and forgot that the coin game is partially observed. That would also explain why there is a larger difference in that environment. Thank you for this clarification.\\n2) Great point!\\n\\nThis was a smaller concern, but are there any comments regarding the fact that the recurrent model simply has more parameters than Feedforward?\"}", "{\"title\": \"Response to AnonReviewer3 1/2\", \"comment\": \"Thank you for your insightful questions and comments on the submission.\\n\\nWe hope we have addressed your questions below. In particular, we have addressed your major criticism (#4 below): RFM-augmented agents do not require pre-trained agents to be able to learn in the first place. Given that this is not a limitation of the method (and we show experiments demonstrating this), we hope you will revise your rating accordingly.\\n\\n1) Is this approach dependent on having semantic representations of the environment?\\n\\nYes, at the moment, the approach we describe is dependent on having such a semantic representation. 
Learning such representations purely from perceptual input is a field of active research. For example, there has been some success in relational reasoning from pixels (e.g. Santoro et al, 2017; Watters et al, 2017; Barrett et al, 2018; Zambaldi et al, 2018), though little attempt has been made to interrogate these systems to uncover the semantics of the intermediate representations (which is something we leverage for our analysis). We do not try to solve the problems of learning semantic representations from pixels here, but we anticipate that as progress is made in this domain, we will be able to transfer it over to build the next generation of models.\n\n2) Would this work in continuous settings?\n\nActually, the methods we use originated in continuous settings. For example, graph nets have been used to model the dynamics of interacting particles (Battaglia et al, 2016). In multi-agent settings, both VAIN (Hoshen, 2017) and NRI (Kipf et al, 2018) have been applied to model behavior in continuous domains (soccer and basketball, respectively). We did not explicitly test the RFM model in continuous domains in this submission, but we have no reason to believe that it would not work.\"}", "{\"title\": \"Response to AnonReviewer4 1/3\", \"comment\": \"Thank you for a very thorough and thoughtful review.\n\n0) The architecture is a rather straightforward extension of previous work, and using graph networks for predictive modeling in multi-agent settings has been examined in the past, making the technical contributions not particularly novel. Examining the correlation between edge activation magnitudes and certain events is intriguing and perhaps the most novel aspect of this paper, but it is not clear how or why this information would be useful. There are a few unsubstantiated claims that are concerning. There are also some odd experimental decisions and results that should be addressed.\n\nWe agree that components of the model are drawn from previous work, and the improvements to this part of the architecture are incremental. However, our two major novel contributions are elsewhere. The first, as the reviewer points out, is in using this model for analysis of multi-agent behavior. This is useful for teasing apart patterns of influence in complex situations; for instance, we can use this method to answer AnonReviewer3\u2019s questions about whether (and when) agents are coordinating with each other. Such analysis can also assist with the evaluation of different multi-agent algorithms to determine whether they are producing desirable policies, or to more finely dissect the cooperative or competitive behaviors that a task induces.\n\nOur second contribution is to integrate the RFM model into agents, which we show assists with their decision-making. This provides a measurable improvement over baselines in multi-agent tasks.\n\n\n1) Why would using a recurrent network help (i.e. RFM vs Feedforward)? Unless the policies are non-Markovian, the entire prediction problem should be Markovian. I suspect that most of the gains are coming from the fact that the RFM method simply has more parameters than the Feedforward method (e.g. it can amortize some of the computation into the recurrent part of the network). Suggestion: train a Feedforward model that has more parameters (with appropriate hyperparameter sweeps) to see if this is the cause. 
If not, provide some analysis for why \u201cmemories of the relations between entities\u201d would be any more beneficial than simply recomputing those relations.\n\nBy the nature of the problem, we do expect a priori that a stateful RFM should outperform a stateless one, for two reasons. First, in all cases, the agents that the RFMs were modelling were themselves stateful. If the agents are making any functional use of their memory (which we anticipate in the general case), then the RFM would benefit from taking advantage of previous relations between entities. Second, while CoopNav and StagHunt are fully observed, CoinGame is not, as the episode-specific reward for each agent\u2019s coins is known to the agents themselves, but not to their teammates (or the RFM). This latent variable has to be inferred from teammates\u2019 history of actions, since its value is often aliased within a single observation. From the results in Figure 2, we indeed find that the recurrent RFM outperforms the feedforward RFM the most when modelling behavior in CoinGame.\n\n\n2) The other potential reason that the recurrent method did better is that policy actions are highly correlated (e.g. because agents move in straight lines to locations). If so, then recurrent methods can outperform feedforward methods without having to learn anything about what actually causes policies to move in certain directions. Suggestion: measure the correlation between consecutive actions. If there is non-trivial correlation, then this suggests that RFM doing better than Feedforward (which is basically prior work of Battaglia et al.) is for the wrong reasons.\n\nWe don\u2019t think this alternative hypothesis explains the improved performance of the recurrent model, for two reasons. First, the feedforward RFM includes the most recent previous action in its inputs, allowing it to take advantage of correlations between consecutive actions. Second, any autocorrelation structure in an action sequence is either a consequence of autocorrelation in the MDP state (in which case a feedforward RFM should be able to reproduce it), or it is due to statefulness of the agent (in which case a recurrent RFM is necessary; see our argument above). Either way, the RFM model will indeed learn what actually causes policies to move in certain directions.\"}", "{\"title\": \"Response to AnonReviewer4 2/3\", \"comment\": \"3) If I understand the evaluation metric correctly, for each rollout, it counts how many steps from the beginning of the rollout match perfectly before the first error occurs. Then it averages this \u201cminimum time to failure\u201d across all evaluation rollouts. If this is correct, why was this evaluation metric chosen? A much more natural metric would be to just compute the average number of errors on a test data-set (and if this is what is actually reported, please update the description to disambiguate the two). The current metric could be very deceptive: Methods that do very well on states around the initial-state distribution but poorly near the end of trajectories (e.g. perfectly predicts the actions in the first 10 steps, but then resorts to random guessing for the last 99999 time steps) will outperform methods that have lower average error rate (e.g. a model that is correct 50% of the time). 
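The deceptiveness argument in this point can be made numerically concrete. The two "models" below are invented for illustration: A is perfect for 10 steps and then guesses among 5 actions, B is right half the time everywhere.

```python
# Illustrates how the two metrics can rank two hypothetical models oppositely.
import numpy as np

rng = np.random.default_rng(0)
true = rng.integers(0, 5, size=1000)

pred_a = true.copy()
pred_a[10:] = rng.integers(0, 5, size=990)              # random after step 10
pred_b = np.where(rng.random(1000) < 0.5, true, (true + 1) % 5)

def steps_to_first_error(pred):
    match = pred == true
    return int(np.argmax(~match)) if not match.all() else len(match)

for name, pred in [("A", pred_a), ("B", pred_b)]:
    print(name, steps_to_first_error(pred), np.mean(pred == true))
# A wins on steps-to-first-error (~10 vs ~0-1) but loses badly on average
# per-step accuracy (~0.21 vs ~0.5).
```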
Suggestion: change the metrics to average number of errors, or report both, or provide a convincing argument why this metric is meaningful.\n\nWe have now provided an alternative metric in Figure 10 (next-step action classification accuracy) which shows the same qualitative results.\n\nThe reason we use the particular metric in the main text is that it gives us a measure of how long the model remains useful. In particular, we are learning a simulator of the agents' dynamics; the metric gives an indication of how many steps one can simulate before making a mistake. There most likely isn\u2019t a perfect metric that covers all bases, and in particular alternative rollout metrics are hard to define after the model makes a mistake, since the ground-truth observations and predictions no longer match. Nonetheless, between this and the new Figure 10 we believe there\u2019s a strong case that the RFM-based model is better. \n\n\n4) Unless I misunderstood, the results in Section 2.2.3 seem spurious and the claims seem unsubstantiated. For one, if we look at Equations (1) and (2), when we average over s_a1 and s_a2, they should both give the same average for R_a1. Put another way: the prune graph should (in theory) marginalize out s_a2. On average, its expected output should be the same as the output of the full graph (after marginalizing out s_a1 and s_a2). 
\\n\\nOverall, this allows us to measure whether an interaction is overall \\u201cgood\\u201d or \\u201cbad\\u201d for the agents.\\n\\n(Some extra info if it helps: the left panel of Figure 4 shows the differences from a single episode, for illustration purposes. We compute the quantities in the middle and right panels over 10 randomly-chosen episodes from the test set).\"}", "{\"title\": \"Response to AnonReviwer4 3/3\", \"comment\": \"5) Even if concern #4 is addressed, the following sentence would still seem false: \\u201cThis figure shows that teammates\\u2019 influence on each other during this time is beneficial to their return.\\u201d The figure simply shows predictions of the RFM, and not of the ground truth. Moreover, it\\u2019s not clear what \\u201cteammates\\u2019 influence\\u201d actually means.\\n\\nGood catch. We cannot actually compute the ground truth. We have rephrased this to: \\u201cThus the model estimates that teammates' specific interactions during this time are beneficial to their return.\\u201d\\n\\n6) The comparison to NRI seems rather odd, since that method uses strictly less information than RFM.\\n\\nNRI has access to the same set of information as the RFM. In comparison however, NRI actively discards information: it infers a connectivity map from the past trajectory, and may choose to do inference using any number of edges. This might actually confer advantages if the ground-truth process is indeed sparse and relatively stationary. Overall, NRI is a good method and it has many advantages. Our results nevertheless demonstrate that these design decisions are less well-suited to the patterns of multi-agent interaction in these environments. \\n\\n7) For Section 3, is the RFM module pretrained and then fine-tuned with the new policy? If so, this gives the \\u201cRFM + A2C\\u201d agent extra information indirectly via the pretrained weights of the RFM module.\\n\\nThe on-board RFM is trained *from scratch* alongside the policy network. There is no pre-training.\\n\\n\\n8) I\\u2019m not sure what to make of the correlation analysis. It is not too surprising that there is some correlation (in fact, it\\u2019d be quite an interesting paper if the findings were that there wasn\\u2019t a correlation!), and it\\u2019s not clear to me how this could be used for debugging, visualizations, etc. If someone wanted to analyze the correlation between two entities and a policy\\u2019s action, it seems like they could directly model this correlation.\\n\\nMeasuring the emergence of coordinated behavior in multi-agent systems is an important open problem in this field (see also AnonReviewer3\\u2019s comment to this effect). Especially in the case of *learning* systems, assessing whether or not agents are able to coordinate, what drives their behavior, whether they learn to help or hinder one another and how they modify their behavior in response to changes in the environment are all crucial aspects of multi-agent analysis that we struggle to quantify. Here we show that learning relational models of multi-agent systems might be a good place to look.\\n\\nWith respect to the particular suggestion that one could directly model correlations between entities and a policy\\u2019s actions, we wish it were that simple! The influence of one agent\\u2019s state on another\\u2019s behavior can be highly contextual, so one would need to factor in the state to tease apart the appropriate effect. 
This amounts to fitting a parameterized model of the interaction, which is precisely what we\u2019re doing here. \n\n\n\n9) Some minor comments\na. In Figure 3C, right, why isn\u2019t the magnitude 0 at time=1? Based on the other plots in Figure 3c, it seems like it should be 0.\nAgents tend to rapidly move away from fruit or stags they just consumed. The probability of respawning is small (0.05) so there will be nothing there for some time (on average). This means that there is some information that a recently-consumed entity can provide to the agent that just consumed it: when the entity is adjacent to the agent, its chances of respawning are lower than otherwise. This predictive information drops sharply after the first step as agents will move towards available entities (which might be anywhere) rather than directly away from unavailable ones (see also Figure 3, top row, the two right panels).\n\nb. The month/year in many of the citations seems odd.\nWe have fixed this.\n\nc. The use of the word \u201cvalence\u201d seems unnecessarily flowery and distracting.\n\u201cValence\u201d is a standard technical term from psychology, denoting the attractiveness or aversiveness of a stimulus (e.g. Frijda, 1986).\n\nReferences\nFrijda, N. H. (1986). The emotions. Cambridge University Press.\"}", "{\"title\": \"Response to AnonReviewer2\", \"comment\": \"Thank you for your comments.\n\nWe take it as a good sign that you see the ideas as obvious a posteriori. To the best of our knowledge, though, no one has actually explored them. When others have modelled the dynamics of multi-agent systems (e.g. NRI, VAIN, ToMnet), they have not attempted to integrate these models into the agents themselves. Conversely, in papers that do put models of opponents in MARL, these models are not relational, and they model goals in agents identical to oneself (e.g. Raileanu et al. 2018), policy representations or Q-values (He et al. 2016), but not future actions. In a similar manner, no one has developed this method to interpret multi-agent behavior to the extent that we do here. The event-based analysis and value-based analyses are novel applications of this framework. \n\nThank you for pointing out that we were missing a citation to Scarselli; we included it in the revised manuscript.\"}", "{\"title\": \"Response to AnonReviewer 3 2/2\", \"comment\": \"3) Are the agents truly coordinated? Can we measure causal influence between agents?\n\nThis is a good question. There are many potential definitions of coordination. From a game theoretic perspective, the agents have definitely found a coordinative equilibrium. They coordinate to consume stags, which is reflected in their overall return. Another sense of coordination is whether agents\u2019 behaviors are mutually interdependent in service of a common goal. The RFM analysis in the bottom of Figure 3 demonstrates that it is statistically appropriate to describe the agents\u2019 behavior as mutually interdependent when stags are present. The common goal, of course, is stag consumption. A final sense might be whether there are ground-truth causal influences between agents. We note that the RFM approach we pursue here is not designed to answer causal questions, per se: it is a statistical fit to time-series data, and does not traffic directly with interventions or counterfactuals. We are currently exploring such possibilities in ongoing research. 
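As a sketch of the kind of statistic these analyses rely on, one can track the norm of the teammate-to-agent edge message over time. The activations below are random placeholders rather than outputs of the paper's model, and the array shape is an assumption made for illustration.

```python
# Edge-norm time series: one scalar per timestep from a (T, edge_dim) array
# of hypothetical intermediate edge activations.
import numpy as np

edge_messages = np.random.default_rng(0).normal(size=(50, 16))
edge_norms = np.linalg.norm(edge_messages, axis=1)
print(edge_norms[:5])
```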
If there\\u2019s an additional sense of coordination that you\\u2019re interested, please feel free to suggest a specific experiment to falsify the conjecture that we\\u2019re picking up on something other than coordination here.\\n\\nAs a further test of whether the agents are coordinated in these ways, we ran two additional experiments (1) where there is no scope for coordination between the agents and (2) when no interdependent behavior is required. For additional experiment (1) We trained deep RL agents on a modified version of Stag Hunt where stags yielded no reward at all (i.e. the only rewards are for consuming apples). We then trained the RFM on rollouts of these agents\\u2019 behavior. In contrast to the standard case, we found that the edge norms between agents was not modulated by the appearance of a stag (Figure 9a). In additional experiment (2) we obtained similar results in a version of the environment where stags could be consumed by single agents, without the need for coordination (Figure 9b).\\n\\n4) Does the RFM-augmented agent require pre-trained agents?\\n\\nNot at all. We only chose to include this experiment previously to isolate the benefit of including the RFM in the augmented agent and because this situation is relevant to artificial agnets learning to act in an environment shared with human experts. However, the RFM-augmented agent can be trained in just the same way alongside learning agents too, with similar benefits. We have included this in Figure 8 in a revised version of the manuscript.\\n\\nWe note that when the RFM-augmented agent is trained alongside teammates which are also learning agents, the relative benefit of the RFM on the agent\\u2019s return is smaller in magnitude than when this agent is trained alongside expert teammates. We suspect the reason for this is that the RFM model is initially modeling the behavior of untrained teammates, and there are fewer opportunities for rewarded coordination. Since the teammates are learning at roughly the same rate as the RFM-augmented agent, the RFM only has a chance to provide useful information later on during training.\", \"references\": \"Battaglia et al, (2016). Interaction networks for learning about objects, relations and physics. NIPS.\\nSantoro, et al. (2017). A simple neural network module for relational reasoning. NIPS.\\nWatters, et al. (2017). Visual interaction networks: Learning a physics simulator from video. NIPS.\\nBarrett, et al. (2018). Measuring abstract reasoning in neural networks. arXiv:1807.04225.\\nZambaldi, et al. (2018). Relational Deep Reinforcement Learning. arXiv:1806.01830.\"}", "{\"title\": \"Response to AnonReviewer1\", \"comment\": \"Thank you for your insightful questions and suggestions on the submission.\\n\\nWe hope we have addressed your questions below. In particular, we have addressed your two major weak points: the graphical structure of the embedded RFM is indeed helpful for speeding up learning; and the type and degree of interpretability differs substantially from previous work. We hope you will revise your rating accordingly.\\n\\n\\n1) Can we augment agents with models other than the RFM (e.g. MLP + LSTM)?\\n\\nYes! We have performed your suggested experiments, which we show in Figure 7. Indeed, the RFM-augmented agents outperform the MLP+LSTM-augmented agents (i.e. those with forward models that are non-relational).\\n\\n2) How does interpretability compare with ToMNet (Rabinowitz et al, 2018)?\\n\\nThis is a good question. 
To begin, interpretability is not a binary property of a model; there are a number of advantages that RFMs offer over the ToMNet construction in Rabinowitz et al, 2018. Most importantly, the ToMNet embeds sequences of behavior as points in an unstructured, high-dimensional Euclidean space, relying on the inherent structure of the data (and optionally an information bottleneck) to yield interpretable representations. In contrast, RFMs shape behavioral representations through the structure of entities and relations (via the graph net). These representations are very natural for humans to interface with, as they conform to representation schemas of human cognition (Spelke & Kinzler, 2007). Moreover, the representations of the RFM also allow us to easily ask directed questions about how different entities influence agent behavior (e.g. Figure 3), which is not something that the ToMnet enables. \n\nWe also note that the ToMNet was designed as a single-agent forward model, while RFMs naturally scale to the multi-agent setting. This allows us to augment agents with the RFM module, which is not something pursued in this previous work.\n\n3) Clarifications about Stag Hunt\n\n4 players - do all 4 need to step on the stag to capture it? No, only 2\nIs there a negative reward for hunting alone? No\nDoes including the RFM lead to more Stags being captured? We have only made qualitative observations, but anecdotally, the RFM-augmented agents go for stags like maniacs.\n\n4) Choice of metric in Figure 2.\n\nWe have now provided an alternative metric in Figure 10 (next-step action classification accuracy) which shows the same qualitative results.\n\nThe reason we use the particular metric in the main text is that it gives us a measure of how long the model remains useful. In particular, we are learning a simulator of the agents' dynamics; the metric gives an indication of how many steps one can simulate before making a mistake. Alternative rollout metrics are hard to define beyond the first mistake, when the true environment and the predictions have diverged and the ground-truth observations no longer match. There most likely isn't a perfect metric that covers all bases. Nonetheless, between this and the new Figure 10 we believe there's a strong case that the RFM-based model is better.\n\nReferences: Spelke & Kinzler. (2007). Core knowledge. Developmental science, 10(1), 89-96.\"}", "{\"title\": \"Review of Relational Forward Models for Multi-Agent Learning\", \"review\": \"This paper studies predicting multi-agent behavior using a proposed neural network architecture. The architecture, called a relational forward model (RFM), is the same graph network proposed by Battaglia et al., 2018, but adds a recurrent component. Two tasks are defined: predict the next action of each agent, and predict the sum of future rewards. The paper demonstrates that RFMs outperform two baselines and two ablations. The authors also show that edge activation magnitudes are correlated with certain phenomena (e.g. an agent walking towards an entity, or an entity being \u201con\u201d or \u201coff\u201d). The authors also show that appending the output of a pre-trained RFM to the state of a policy can help it learn faster.\n\nOverall, this paper presents some interesting ideas and is easy to follow, but the significance of the paper is not clear. 
The architecture is a rather straightforward extension of previous work, and using graph networks for predictive modeling in multi-agent settings has been examined in the past, making the technical contributions not particularly novel. Examining the correlation between edge activation magnitudes and certain events is intriguing and perhaps the most novel aspect of this paper, but it is not clear how or why this information would be useful. There are a few unsubstantiated claims that are concerning. There are also some odd experimental decisions and results that should be addressed.\n\nFor specific comments:\n1. Why would using a recurrent network help (i.e. RFM vs Feedforward)? Unless the policies are non-Markovian, the entire prediction problem should be Markovian. I suspect that most of the gains are coming from the fact that the RFM method simply has more parameters than the Feedforward method (e.g. it can amortize some of the computation into the recurrent part of the network). Suggestion: train a Feedforward model that has more parameters (with appropriate hyperparameter sweeps) to see if this is the cause. If not, provide some analysis for why \u201cmemories of the relations between entities\u201d would be any more beneficial than simply recomputing those relations.\n2. The other potential reason that the recurrent method did better is that policy actions are highly correlated (e.g. because agents move in straight lines to locations). If so, then recurrent methods can outperform feedforward methods without having to learn anything about what actually causes policies to move in certain directions. Suggestion: measure the correlation between consecutive actions. If there is non-trivial correlation, then this suggests that RFM doing better than Feedforward (which is basically prior work of Battaglia et al.) is for the wrong reasons.\n3. If I understand the evaluation metric correctly, for each rollout, it counts how many steps from the beginning of the rollout match perfectly before the first error occurs. Then it averages this \u201cminimum time to failure\u201d across all evaluation rollouts. If this is correct, why was this evaluation metric chosen? A much more natural metric would be to just compute the average number of errors on a test data-set (and if this is what is actually reported, please update the description to disambiguate the two). The current metric could be very deceptive: Methods that do very well on states around the initial-state distribution but poorly near the end of trajectories (e.g. perfectly predicts the actions in the first 10 steps, but then resorts to random guessing for the last 99999 time steps) will outperform methods that have lower average error rate (e.g. a model that is correct 50% of the time). Suggestion: change the metrics to average number of errors, or report both, or provide a convincing argument why this metric is meaningful.\n4. Unless I misunderstood, the results in Section 2.2.3 seem spurious and the claims seem unsubstantiated. For one, if we look at Equations (1) and (2), when we average over s_a1 and s_a2, they should both give the same average for R_a1. Put another way: the prune graph should (in theory) marginalize out s_a2. On average, its expected output should be the same as the output of the full graph (after marginalizing out s_a1 and s_a2). 
Obviously, it is possible to find specific rollouts where the full graph has higher value than the prune graph (and it seems Figure 4 does this), but it should equally be possible to find rollouts where the opposite is true. I\u2019m hoping I misunderstood this section, but otherwise this seems to invalidate all the claims made in this section.\n5. Even if concern #4 is addressed, the following sentence would still seem false: \u201cThis figure shows that teammates\u2019 influence on each other during this time is beneficial to their return.\u201d The figure simply shows predictions of the RFM, and not of the ground truth. Moreover, it\u2019s not clear what \u201cteammates\u2019 influence\u201d actually means.\n6. The comparison to NRI seems rather odd, since that method uses strictly less information than RFM.\n7. For Section 3, is the RFM module pretrained and then fine-tuned with the new policy? If so, this gives the \u201cRFM + A2C\u201d agent extra information indirectly via the pretrained weights of the RFM module.\n8. I\u2019m not sure what to make of the correlation analysis. It is not too surprising that there is some correlation (in fact, it\u2019d be quite an interesting paper if the findings were that there wasn\u2019t a correlation!), and it\u2019s not clear to me how this could be used for debugging, visualizations, etc. If someone wanted to analyze the correlation between two entities and a policy\u2019s action, it seems like they could directly model this correlation.\n\nSome minor comments:\n- In Figure 3C, right, why isn\u2019t the magnitude 0 at time=1? Based on the other plots in Figure 3c, it seems like it should be 0.\n- The month/year in many of the citations seems odd.\n- The use of the word \u201cvalence\u201d seems unnecessarily flowery and distracting.\n\nMy main concern with this paper is that it is not particularly novel and the contribution seems questionable. I have some concerns over the experimental metric and Section 2.2.3, but even if that is clarified, it is not clear how impactful this paper would be. The use of a recurrent network seems unnecessary, unjustified, and not analyzed. The analysis of correlations is interesting, but not particularly compelling or surprising. And lastly, the RFM-augmented results are not very strong.\n\n--\n\nEdit: After discussing with the authors, I have changed my rating. The authors have adjusted some of the language, which I previously thought overstated the contributions and was misleading. They have added a number of experiments which validate the claim that their method is proposing a reasonable way of measuring collaboration. I also realized that I misunderstood one of the sections, and I encourage the authors to improve the presentation to (1) present the significance of the experiments more clearly, (2) not overstate the results, and (3) emphasize the contribution more clearly.\n\nOverall, the paper presents convincing evidence that factors in a graph neural network do capture some notion of collaboration. I do not feel that the paper is particularly novel, but the experiments are thorough. Furthermore, their experiments show that adding an RFM module to an agent consistently helps (albeit not by much). Given that the multi-agent community is still trying to decide how to best quantify and use metrics for collaboration, I find it difficult to assess the long-term impact of this paper. 
However, given the thoroughness of the experiments and analysis, I suspect that this will be valuable for the community and deserves some visibility.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Relational Forward Models for Multi-Agent Learning provides a new tool for assessing coordination in MARL and can improve MARL training speeds.\", \"review\": \"This paper used graph neural networks to do relational reasoning of multi-agent systems to predict the actions and returns of MARL agents, which they call Relational Forward Modeling. They used RFM to analyze and assess the coordination between agents in three different multi-agent environments. They then constructed an RFM-augmented RL agent and showed improved training speeds over non relational reasoning baseline methods. \n\nI think the overall approach is interesting and a novel way to address the growing concern of how to assess coordination between agents in multi-agent systems. I also like how the authors immediately incorporated the relational reasoning approach to improve the training of the MARL agents. \n\nI wonder how dependent this approach is on the semantic representation of the environment. These semantic descriptions are similar to hand crafted features and thus will require some prior knowledge about the environment or task and will be harder to obtain on more difficult environments and tasks. \n\nWill this approach work on continuous tasks? For example, the continuous state and action space of the predator-prey tasks that use the multi-agent particle environment from OpenAI. \n\nI think one of the biggest selling points from this paper is using this method to assess the coordination/collaboration between agents (i.e. the social influence amongst agents). I would have liked to see more visualizations or analysis into these learned representations. 
The bottom row of Figure 3 shows that \"when stags become available, agents care about each other more than just before that happens\". While this is very interesting and an important result, I think that this allows one to see what features of the environment (including other agents) are important to a particular agent's decision making, but it doesn't really answer whether the agents are truly coordinated, i.e. whether there are any causal dependencies between agents. \n\nFor the RFM augmented agents, I like that you are able to train the policy as well as the RFM simultaneously from scratch; however, it seems that this requires you to only train a single agent in the multi-agent environment. If I understand correctly, for a given multi-agent environment, you first pre-trained A2C agents to play the three MARL games and then you paired one of the pre-trained (expert) agents with the RFM-augmented learning agents during training. This seems to limit the practicality and usability of this method as it requires you to have pre-trained agents that have already solved the task. I would like to know why the authors didn't try to train two (or four) RFM-augmented agents from scratch together. When you use one of the agents as a pre-trained agent, this might make the training of the RFM module a bit easier since you have at least one agent with a fixed policy to predict actions from. It could be challenging when trying to train both RFM modules on two learning agents as the behaviors of learning agents are changing over time and thus the learning might be unstable. \n\nOverall, I think this is an interesting approach, especially for probing what information drives agents' behaviors. However, I don't see the benefit that the RFM-augmented agent provides. It's clearly shown to learn faster than non RFM-augmented agents (which is good), however, unless I'm mistaken, the RFM-augmented agent requires a pre-trained agent to be able to learn in the first place. \n\n--edit:\nThe authors have sufficiently addressed my questions and concerns and have performed additional analysis. My biggest concern of whether or not the RFM-augmented agent was capable of learning without a pre-trained agent has been addressed with additional experiments and analysis (Figure 8). \n\nBased on this, I have adjusted my rating to a 7.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Review\", \"review\": \"RELATIONAL FORWARD MODELS FOR MULTI-AGENT LEARNING\n\nSummary: Model free learning is hard, especially in multi-agent systems. The authors consider a way of reducing variance which is to have an explicit model of actions that other agents will take. 
The model uses a graphical structure and the authors argue it is a) interpretable, b) predicts actions better and further forward than competing models, c) can increase learning speed.\n\nStrong points:\n- The main innovation here is that the model uses a graph conv net-like architecture which also allows for interpretable outputs of \u201cwhat is going on\u201d in a game.\n- The authors show that the RFM increases learning speed in several games.\n- The authors show that the RFM does somewhat better at forward action prediction than a na\u00efve LSTM+MLP setup and other competing models.\n\nWeak points:\n- The RFM is compared to other models in predicting forward actions but is not compared to other models in Figure 5, so it is not clear that the graphical structure is actually required to speed up learning. I would like to see these experiments added before we can say that the RFM is adding to performance.\n- Related: The authors argue that an advantage of the RFM is that it is interpretable, but I thought a main argument of Rabinowitz et al. was that simple forward models similar to the LSTM+MLP here were also interpretable? If the RFM does not improve learning above and beyond the LSTM+MLP, then the argument comes down to more accurate action prediction (ok) and more interpretability (maybe), which is less compelling.\n\nClarifying questions:\n- How does the 4 player Stag Hunt work? Do all 4 agents have to step on the Stag together or just 2 of them? How are rewards distributed? Is there a negative payoff for Hunting the stag alone as in the Peysakhovich & Lerer paper?\n- Related: In the Stag Hunt there are multiple equilibria: either agents learn to get plants (which is safe but low payoff) or they learn to Hunt (which is risky but high payoff). Is the RFM leading to more convergence to the Hunting state or is it simply leading agents to learn the safe but low payoff strategies faster?\n- The choice of metric in Figure 2 (# exactly correct predictions) is non-standard (not saying it is wrong). I think it would be good to also see a plot of a more standard metric such as the log-likelihood of the model's predictions for each of X possible steps ahead. It would help to clarify where the RFM is doing better (is it better at any horizon, or is it just able to look further forward more accurately than the competitors?)\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
SJVmjjR9FX
Variational Bayesian Phylogenetic Inference
[ "Cheng Zhang", "Frederick A. Matsen IV" ]
Bayesian phylogenetic inference is currently done via Markov chain Monte Carlo with simple mechanisms for proposing new states, which hinders exploration efficiency and often requires long runs to deliver accurate posterior estimates. In this paper we present an alternative approach: a variational framework for Bayesian phylogenetic analysis. We approximate the true posterior using an expressive graphical model for tree distributions, called a subsplit Bayesian network, together with appropriate branch length distributions. We train the variational approximation via stochastic gradient ascent and adopt multi-sample based gradient estimators for different latent variables separately to handle the composite latent space of phylogenetic models. We show that our structured variational approximations are flexible enough to provide comparable posterior estimation to MCMC, while requiring less computation due to a more efficient tree exploration mechanism enabled by variational inference. Moreover, the variational approximations can be readily used for further statistical analysis such as marginal likelihood estimation for model comparison via importance sampling. Experiments on both synthetic data and real data Bayesian phylogenetic inference problems demonstrate the effectiveness and efficiency of our methods.
[ "Bayesian phylogenetic inference", "Variational inference", "Subsplit Bayesian networks" ]
https://openreview.net/pdf?id=SJVmjjR9FX
https://openreview.net/forum?id=SJVmjjR9FX
ICLR.cc/2019/Conference
2019
{ "note_id": [ "HJeaaM3ge4", "r1x5JtLm0X", "rylqTcJF67", "Bkl2HqJYTm", "SkgNStowpX", "rkeE3OswTX", "HklnmwjDaX", "BkgUn-Vwpm", "rke2eXKkTm", "SJeInxZA3m" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544762053362, 1542838497884, 1542154945611, 1542154820305, 1542072635822, 1542072491679, 1542072100307, 1542042030337, 1541538548406, 1541439662023 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper611/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper611/Authors" ], [ "ICLR.cc/2019/Conference/Paper611/Authors" ], [ "ICLR.cc/2019/Conference/Paper611/Authors" ], [ "ICLR.cc/2019/Conference/Paper611/Authors" ], [ "ICLR.cc/2019/Conference/Paper611/Authors" ], [ "ICLR.cc/2019/Conference/Paper611/Authors" ], [ "ICLR.cc/2019/Conference/Paper611/AnonReviewer4" ], [ "ICLR.cc/2019/Conference/Paper611/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper611/AnonReviewer3" ] ], "structured_content_str": [ "{\"metareview\": \"The reviewers lean to accept, and the authors clearly put a significant amount of time into their response. I will also lean to accept. However, the comments of reviewer 2 should be taken seriously, and addressed if possible, including an attempt to cut the paper length down.\", \"confidence\": \"3: The area chair is somewhat confident\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Meta-Review for Phylogenetic Inference paper\"}", "{\"title\": \"Revision summary\", \"comment\": [\"We thank all reviewers for the constructive feedback. We have revised the paper, and have incorporated their suggestions with the following major changes:\", \"We reorganized the SBN section and added more detailed discussion on SBN implementations and parameter sharing to better explain the subsplit Bayesian network framework.\", \"We cut down the paper a bit, in particular in the stochastic gradient estimator section, and now the main text (not including the discussion) is within 9 pages.\", \"We added more discussion on support estimation feasibility to the discussion section.\", \"We clarified a bit the KL divergence used in our experiments, and explained why a low KL divergence is a really strong statement about the quality of the tree posterior approximation. We hope that this addresses the concerns of reviewer 3 about the approximation performance on tree topologies.\", \"We added an additional qualitative summary (a consensus tree) of VBPI and MCMC to our experiments (Figure 5 in Appendix D). This tree shows that the MCMC-inferred and the VB-inferred consensus trees are identical. We hope that this addresses the concerns of reviewers 2 and 3 who were interested in more qualitative and scientifically-relevant posterior summaries.\", \"We clarified a bit the marginal likelihood estimates presented in Table 1, more clearly describing why VBPI provides a substantial advantage for this task.\", \"We hope our revision has adequately addressed the reviewers' questions and concerns, and look forward to reading any other additional comments.\"]}", "{\"title\": \"Clarification on technical issues (Part 2/2)\", \"comment\": \"4) \\\"In table 1, is the point that all methods are basically the same with different variance? This is not clear from the text. What about the variational bounds?\\\"\\n\\nYes, you are right. 
All methods provide estimates for the same marginal likelihood, and better approximation would lead to smaller variance. The phylogenetic model is well defined with fixed structure, in contrast to generative models (e.g., VAE) where the generative network is trainable. In the paper we do not report the variational bounds since we want to compare to the stepping-stone (SS) algorithm on marginal likelihood estimation. These lower bounds are definitely different and improve as more particles and more flexible approximations are adopted, which we list below (averaged over 1000 runs).\\n\\nIn our revision we look forward to clarifying this point.\\n\\n Variational Lower Bounds\\n+---------+------------------+------------------+--------------------------+--------------------------+\\n|\\t | VIMCO(10) | VIMCO(20) | VIMCO(10)+PSP | VIMCO(20)+PSP |\\n+---------+------------------+------------------+--------------------------+--------------------------+\\n| DS1 | -7108.91 | -7108.77 | -7108.73 | -7108.61 |\\n+---------+------------------+------------------+--------------------------+--------------------------+\\n| DS2 | -26367.91 | -26367.82 | -26367.89 | -26367.83 |\\n+---------+------------------+------------------+--------------------------+--------------------------+\\n| DS3 | -33735.36 | -33735.26 | -33735.29 | -33735.24 |\\n+---------+------------------+------------------+--------------------------+--------------------------+\\n| DS4 | -13330.49 | -13330.32 | -13330.37 | -13330.22 |\\n+---------+------------------+------------------+--------------------------+--------------------------+\\n| DS5 | -8215.85 | -8215.56 | -8215.64 | -8215.36 |\\n+---------+------------------+------------------+--------------------------+--------------------------+\\n| DS6 | -6725.69 | -6725.42 | -6725.48 | -6725.19 |\\n+---------+------------------+------------------+--------------------------+--------------------------+\\n| DS7 | -37332.91 | -37332.65 | -37332.72 | -37332.49 |\\n+---------+------------------+------------------+--------------------------+--------------------------+\\n| DS8 | -8655.02 | -8652.55 | -8651.76 | -8651.53 |\\n+---------+------------------+------------------+--------------------------+--------------------------+\"}", "{\"title\": \"Clarification on technical issues (Part 1/2)\", \"comment\": \"Thank you for your thoughtful review and valuable feedback. Below are the answers to your comments:\\n\\n1) \\\"The main advantage would seem to be a large speedup over MCMC-based methods (Figure 4), which could be of significant value to the phylogenetics community. This point would benefit from more discussion. How do the number of iterations (reported in Figures 3&4, which was done carefully) correspond to wallclock time? Can this new method scale to numbers of sites and sequences that were previously unfeasible?\\\"\\n\\nWe are glad that this reviewer appreciates the care with which we crafted the comparison in terms of number of likelihood evaluations. Our motivation in doing a comparison in terms of likelihood evaluations is because our current implementation is in Python, whereas while MrBayes is in C that has been optimized for many years. This is the first paper introducing the ideas and initial implementation of variational Bayes phylogenetic inference, and we think that this level of comparison is appropriate. 
We will soon begin developing a highly optimized implementation, for which we are planning a more applications-driven paper which will include a wallclock comparison.\n\nRegarding large data sets, given that the learned SBNs can provide guided exploration in tree space and variational approaches naturally incorporate stochastic gradients, we believe it is much easier for VBPI to scale to datasets with large numbers of sequences and sites. However, we have not tried out our initial Python implementation on especially big data sets.\n\n2) \\\"The main technical contribution is the use of SBNs as variational approximations over tree-space, but it is difficult to follow their implementation and parameter sharing without the explanation of the original paper.\\\"\n\nAs explained in point 2 to reviewer 3, this is mainly due to the page limit of the conference. We will definitely add more detailed explanation in our revision if there is room after trimming proposed by Reviewer 2.\n\n3) \\\"Additionally, the issue of estimating the support of the subsplit CPTs needs more discussion. As the authors acknowledge, complete parameterizations of these models scale in a combinatorial way with \\u201call possible parent-child subsplit pairs\\u201d, and they deal with this by shrinking the support up front with various heuristics. It seems that these support estimation approaches would be feasible when the data are strong but would become challenging to scale when the data are weak. Since VB is often concerned with the limited-data regime, more discussion of when support estimation is feasible and when it is difficult would clarify how widely applicable the method is.\\\"\n\nThis is indeed an important point. We agree that when the data are weak, the posterior on subsplit pairs could have a large support.\n\nHowever, the SBN approach actually has a strong natural advantage in the weak-data regime. When data is weak, the support of the posterior distribution on complete trees, as evaluated by classical MCMC approaches, is enormous. For example, if there is uncertainty in multiple different parts of the tree, the support on complete trees scales as the product of these local uncertainties.\n\nThe SBN parameterization alleviates this issue by factorizing the uncertainty into local structures. Thus, if the support of parent-child pairs is too large, then one should certainly not be trying to assign posterior support to each tree individually as in classical MCMC.\n\nRegarding heuristics for support estimation, we show in section 4.2 that bootstrap-based support estimation is effective even for diffuse posteriors across four data sets (DS5, DS6, DS7, DS8). 
See below for the numbers of unique trees in the standard MCMC run samples for all data sets (which is an indicator of the diffusivity of the posteriors).\n\n-------------------------------------------------------------------------------------------------------------------------\ndatasets | DS1 DS2 DS3 DS4 DS5 DS6 DS7 DS8\n-------------------------------------------------------------------------------------------------------------------------\n# sample trees | 1228 7 43 828 33752 35407 1125 3067\n-------------------------------------------------------------------------------------------------------------------------\n\nWe agree that a further discussion of the weak-data regime is important and we look forward to adding to the discussion in a revision.\"}", "{\"title\": \"A note to all reviewers\", \"comment\": \"We thank the reviewers for their reviews, the time spent on the manuscript, and their encouragement on the novelty of our work. We want to emphasize that, in addition to using subsplit Bayesian networks for approximating phylogenetic tree posteriors, our structured parameterization of the branch length distributions is also novel, which allows us to jointly learn the branch length distributions across tree topologies. In contrast, classical MCMC typically uses simple random perturbations, which contributes to the low acceptance rate for large topological modifications.\"}", "{\"title\": \"Thanks for the suggestions and we need some clarifications on your part\", \"comment\": \"We thank the reviewer for the review and time. We would like to incorporate the suggestions into our revision and think we would benefit from some clarifications on your part.\n\n1) \\\"The paper is 10 pages long, and I'm not convinced it needs to be. The reviewer guidelines (https://iclr.cc/Conferences/2019/Reviewer_Guidelines) say that \\\"the overall time to read a paper should be comparable to that of a typical 8-page conference paper. Reviewers may apply a higher reviewing standard to papers that substantially exceed this length.\\\" So I recommend trying to cut it down a bit during the revision phase.\\\"\n\nThe main reason we took 10 pages for the paper is that phylogenetic inference is probably not well known to the machine learning community, and much space is devoted to putting the phylogenetic models and experiments in context. We have tried to balance between being short and being a little bit long (but more self-contained) and thought the latter would eventually save the reviewers' time. However, we would like to cut down our paper as suggested, and would appreciate it very much if the reviewer could point us to the parts of the paper that you find redundant and that can be made more brief.\n\n2) \\\"The empirical comparisons are all likelihood/ELBO-based. These metrics are important, but it would be nice to see some kind of qualitative summary of the inferences made by different methods\\u2014two methods can produce similar log-likelihoods or KL divergences but suggest different scientific conclusions.\\\"\n\nFirst, we would like to make sure the reviewer is aware of how the KL results show the SBN-based approximations to be very close in distribution on the discrete space of phylogenetic tree structures. We have emphasized this in point 3 to reviewer 3, and realize that we should have been clearer on this point.\n\nHowever, we are happy to incorporate any qualitative summaries the reviewer would like to suggest. 
We could certainly add, for example, tree shape summaries, but such a comparison would be significantly weaker than the current comparison on tree structures.\n\n3) \\\"One final comment: it's not clear to me that ICLR is the most relevant venue for this work, which is purely about Bayesian inference rather than deep learning. This isn't a huge deal\\u2014certainly there's plenty of variational inference at ICLR these days\\u2014but I suspect many ICLR attendees may tune out when they realize there aren't any neural nets in the paper.\\\"\n\nWe think ICLR is an excellent venue for this work because: (i) Representation learning on discrete/structured objects has received increasing attention from the machine learning community, and our work represents an important advance in variational inference on complex structured models. (ii) Our variational framework admits many extensions that can incorporate the approximating power of neural networks (e.g., using normalizing flows and deep networks for more flexible within-tree and between-tree approximations, as mentioned in the discussion section of our paper).\"}", "{\"title\": \"Computational expense and approximation performance compared to MrBayes were reported.\", \"comment\": \"Thank you for your review and feedback. We address your specific questions and comments below:\n\n1) \\\"the computational expense is not given, or I've missed it, for the variational approach - presumably it is relatively small compared to MCMC?\\\"\n\nWe present the computational expense for the variational approach in terms of the number of likelihood evaluations, and compare to MCMC. We direct the reviewer to Figure 4 in section 4.2, where we show the KL divergence to the ground truth as a function of the number of iterations of different methods (including MCMC via MrBayes). For a fair comparison, the number of iterations for MCMC is mapped to the number of iterations of variational methods that take the same number of likelihood evaluations.\n\n2) \\\"My main criticism is that I found the details of subsplit Bayesian networks difficult to follow. Googling them suggests they are a relatively new model, which has not been well studied or used (there are no citations of the paper that introduces them for example!).\\\"\n\nSBNs are indeed a new model. The relatively short discussion of subsplit Bayesian networks (SBNs) is mainly due to the page limit of the conference, but we would like to present a more detailed discussion of SBNs if there is room in our revision. For a more detailed discussion, we refer the reviewer to the original paper [1] that introduced SBNs, which has been accepted to NIPS this year.\n\n3) \\\"The paper would be stronger if it discussed these in more detail - how close can they come to approximating the models usually used in phylogenetic analyses? Often the inferred phylogeny is itself of interest - how similar are the trees inferred here to those found from MrBayes?\\\"\n\nFirst, we would like to ensure that our means of evaluating the SBN approximation is clear. We compute KL divergence over the discrete collection of phylogenetic tree structures, from the SBN distribution to the ground truth distribution on phylogenetic tree models obtained from extremely long MCMC runs using MrBayes. 
In order to get a low KL divergence to this ground truth, it is not enough to have similar trees: one must find practically the same set of trees as MrBayes, with nearly identical probability weights.\\n\\nBased on the low KL divergence reported in [1] and our experiments, SBNs can indeed provide accurate approximations to the phylogenetic posteriors inferred from real data (see Table 1 in [1] and section 4.2 in our paper.). Therefore, we believe SBN-based phylogenetic inference represents an important advance in this field, especially on structural learning of phylogenies.\\n\\n\\nReference\\n[1] C. Zhang and FA. Matsen. Generalizing tree probability estimation via Bayesian networks. arXiv preprint arXiv:1805.07834, 2018\"}", "{\"title\": \"New approximate inference approaches for phylogenetic trees\", \"review\": \"This paper explores an approximate inference solution to the challenging problem of Bayesian inference of phylogenetic trees. Its leverages recently proposed subsplit Bayesian networks (SBNs) as a variational approximation over tree space and combines this with modern gradient estimators for VI. It is thorough in its evaluation of both methodological considerations and different datasets.\\n\\nThe main advantage would seem to be a large speedup over MCMC-based methods (Figure 4), which could be of significant value to the phylogenetics community. This point would benefit from more discussion. How do the number of iterations (reported in Figures 3&4, which was done carefully) correspond to wallclock time? Can this new method scale to numbers of sites and sequences that were previously unfeasible?\\n\\nThe main technical contribution is the use of SBNs as variational approximations over tree-space, but it is difficult to follow their implementation and parameter sharing without the explanation of the original paper. Additionally, the issue of estimating the support of the subsplit CPTs needs more discussion. As the authors acknowledge, complete parameterizations of these models scale in a combinatorial way with \\u201call possible parent-child subsplit pairs\\u201d, and they deal with this by shrinking the support up front with various heuristics. It seems that these support estimation approaches would be feasible when the data are strong but would become challenging to scale when the data are weak. Since VB is often concerned with the limited-data regime, more discussion of when support estimation is feasible and when it is difficult would clarify how widely applicable the method is.\\n\\nOverall, this work is an interesting extension of variational Bayes to a tree-structured inference problem and is thorough in its evaluation. While it is a bit focused on classical inference for ICLR, it could be interesting both for the VI community and as a significant application advancement.\", \"other_notes\": \"In table 1, is the point that all methods are basically the same with different variance? This is not clear from the text. What about the variational bounds?\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"A nice approach to inferring phylogenetic trees\", \"review\": \"This paper proposes a variational approach to Bayesian posterior inference in phylogenetic trees. 
The novel part of the approach (using subsplit Bayesian networks as a variational distribution) is intelligently combined with recent ideas from the approximate-inference literature (reweighted wake-sleep, VIMCO, reparameterization gradients, and multiple-sample ELBO estimators) to yield what seems to be an effective approach to a very hard inference problem.\n\nMy score would be higher were it not for two issues:\n\n- The paper is 10 pages long, and I'm not convinced it needs to be. The reviewer guidelines (https://iclr.cc/Conferences/2019/Reviewer_Guidelines) say that \\\"the overall time to read a paper should be comparable to that of a typical 8-page conference paper. Reviewers may apply a higher reviewing standard to papers that substantially exceed this length.\\\" So I recommend trying to cut it down a bit during the revision phase.\n\n- The empirical comparisons are all likelihood/ELBO-based. These metrics are important, but it would be nice to see some kind of qualitative summary of the inferences made by different methods\\u2014two methods can produce similar log-likelihoods or KL divergences but suggest different scientific conclusions.\n\nOne final comment: it's not clear to me that ICLR is the most relevant venue for this work, which is purely about Bayesian inference rather than deep learning. This isn't a huge deal\\u2014certainly there's plenty of variational inference at ICLR these days\\u2014but I suspect many ICLR attendees may tune out when they realize there aren't any neural nets in the paper.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"A novel well executed paper\", \"review\": \"This paper is well written, appears to be well executed, and the results look good. I am not particularly well informed about the area, but the work appears to be novel. MCMC for phylogenetic inference is hugely expensive, and anything we can do to reduce that cost would be beneficial (the computational expense is not given, or I've missed it, for the variational approach - presumably it is relatively small compared to MCMC?).\n\nMy main criticism is that I found the details of subsplit Bayesian networks difficult to follow. Googling them suggests they are a relatively new model, which has not been well studied or used (there are no citations of the paper that introduces them for example!). The paper would be stronger if it discussed these in more detail - how close can they come to approximating the models usually used in phylogenetic analyses? Often the inferred phylogeny is itself of interest - how similar are the trees inferred here to those found from MrBayes?\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"1: The reviewer's evaluation is an educated guess\"}" ] }
rk4Qso0cKm
Adv-BNN: Improved Adversarial Defense through Robust Bayesian Neural Network
[ "Xuanqing Liu", "Yao Li", "Chongruo Wu", "Cho-Jui Hsieh" ]
We present a new algorithm to train a robust neural network against adversarial attacks. Our algorithm is motivated by the following two ideas. First, although recent work has demonstrated that fusing randomness can improve the robustness of neural networks (Liu 2017), we noticed that adding noise blindly to all the layers is not the optimal way to incorporate randomness. Instead, we model randomness under the framework of Bayesian Neural Network (BNN) to formally learn the posterior distribution of models in a scalable way. Second, we formulate the mini-max problem in BNN to learn the best model distribution under adversarial attacks, leading to an adversarial-trained Bayesian neural net. Experiment results demonstrate that the proposed algorithm achieves state-of-the-art performance under strong attacks. On CIFAR-10 with VGG network, our model leads to 14% accuracy improvement compared with adversarial training (Madry 2017) and random self-ensemble (Liu, 2017) under PGD attack with 0.035 distortion, and the gap becomes even larger on a subset of ImageNet.
[ "randomness", "adversarial defense", "adversarial attacks", "algorithm", "liu", "bnn", "new algorithm", "robust neural network", "following" ]
https://openreview.net/pdf?id=rk4Qso0cKm
https://openreview.net/forum?id=rk4Qso0cKm
ICLR.cc/2019/Conference
2019
{ "note_id": [ "r1xsyihZe4", "SJeDntQ3k4", "HkgXvWptJN", "H1xh1O74AQ", "r1l_tTGfCm", "Bye2N8dbAX", "Hylsz71KpQ", "SklNhuR_67", "SyggL18wpm", "HJxoBdEI6X", "HyxExoXr6X", "S1x4D9aETX", "SJeXtbTEpQ", "r1xcGYhNTX", "SyeHI_C72X", "Hye80M2JnX", "Byxndvbkh7" ], "note_type": [ "meta_review", "official_comment", "comment", "official_comment", "official_comment", "official_comment", "official_comment", "comment", "official_comment", "comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544829666664, 1544464815198, 1544307034979, 1542891491581, 1542757760364, 1542714932360, 1542152978626, 1542150315580, 1542049608114, 1541978179418, 1541909228186, 1541884508044, 1541882235380, 1541880082058, 1540773965240, 1540502221998, 1540458356139 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper610/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper610/Authors" ], [ "~Zhanxing_Zhu1" ], [ "ICLR.cc/2019/Conference/Paper610/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper610/Authors" ], [ "ICLR.cc/2019/Conference/Paper610/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper610/Authors" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper610/Authors" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper610/Authors" ], [ "ICLR.cc/2019/Conference/Paper610/Authors" ], [ "ICLR.cc/2019/Conference/Paper610/Authors" ], [ "ICLR.cc/2019/Conference/Paper610/Authors" ], [ "ICLR.cc/2019/Conference/Paper610/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper610/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper610/AnonReviewer1" ] ], "structured_content_str": [ "{\"metareview\": \"Reviewers are in a consensus and recommended to accept after engaging with the authors. Please take reviewers' comments into consideration to improve your submission for the camera ready.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Paper decision\"}", "{\"title\": \"Thanks for introducing your work!\", \"comment\": [\"Thank you very much for introducing your recent paper on this topic! Since the paper is available after the ICLR submission deadline, we were not aware of this work. We will include some discussions and comparisons in our paper:\", \"Based on our understanding, although both papers use Bayesian method to defense, the algorithms are quite different: your algorithm contains two separate SGLD sampling procedures to sample both adversarial samples and model weights, while we do not sample the adversarial samples. Instead, we integrate the adversarial training process into a single min-max optimization problem.\", \"Our method (in the current form) is using variational Bayes and it makes adversarial training process much more efficient. In fact, our algorithm has time complexity similar to the original adversarial training. This can also be observed from experimental results: we are able to scale to complex datasets like CIFAR or even ImageNet-143. We are curious about how your algorithm perform under such situation and will conduct some comparisons.\", \"Lastly, it seems that your paper/code are publicly available after October 26 and our submission is on September 27, so we couldn\\u2019t include the comparison/discussion in our submission. 
But we will definitely add discussions/comparisons into our final version.\"]}", "{\"comment\": \"Our NeurIPS2018 work \\\"Bayesian Adversarial Learning\\\" also approaches adversarial training from a Bayesian perspective, where MCMC is employed for sampling both adversarial examples and the parameters of the classifier network. Bayesian Adversarial Learning is a general framework for improving the robustness of neural networks to adversarial examples. One special case of our framework is a Bayesian Neural Network combined with adversarial training, when the \\\"point\\\" estimate of adversarial examples is used. Therefore, I think it might be interesting to have a discussion about our work.\n\nN. Ye and Z. Zhu. \\\"Bayesian Adversarial Learning\\\" NeurIPS2018.\", \"title\": \"It might be interesting to have a discussion about our NeurIPS2018 work \\\"Bayesian Adversarial Learning\\\"\"}", "{\"title\": \"Yes this method sounds more aligned to Bayesian decision theory :)\", \"comment\": \"...although you might need some careful derivation to figure out which data to condition on, how many datapoints count as observations (so that the uncertainty is well calibrated), etc.\n\nI would encourage you to work on this direction in the future, in order to have a principled method to adversarially train BNNs. The following references might be helpful for reading:\n\nhttp://proceedings.mlr.press/v15/lacoste_julien11a/lacoste_julien11a.pdf\nhttps://arxiv.org/pdf/1805.03901.pdf\"}", "{\"title\": \"Reply\", \"comment\": \"Thanks for introducing this question! We haven't tried any other inference methods during the implementation stage of this paper, but we think it is possible to extend our method. Please see our responses below:\n\n1. The adversarial dataset D_adv not only depends on the training data D, but also on the posterior p(w | D). So our method should be iterative in its nature (find adversarial examples --> inference on D_adv --> find new adversarial examples ...).\n\n2. For general inference methods where the posterior is not trained (e.g. by an optimization method), we may still find an iterative algorithm, as shown below:\n\nSuppose we have a \\\"black-box\\\" algorithm that can do inference on data D, and the posterior is p(w|D); we may return the sample distribution p*(w) by the following iterative algorithm:\n\nInput: original training dataset D\n1. Initialize posterior p0(w) := p(w | D), set loop variable i = 0\n2. Perturb dataset D to get D_adv := {x+eps^* | eps^* = argmin_{ ||eps||<delta } \\int_w p(y | x+eps, w) * p_i(w) dw, forall x\\in D}. We can simulate the integration by sampling from p_i(w)\n3. Run the \\\"black-box\\\" inference algorithm on D_adv to get the new posterior p_{i+1}(w) := p(w | D_adv)\n4. Set i = i + 1\n5. GoTo step 2 until p_i(w) converges (to p*(w)).\n6. Return p*(w).\n\nThe above algorithm is a natural extension of Algorithm 1 in the paper, except that here we allow the use of a more general inference method and assume the attack is on the whole dataset instead of a subset.\n\nQ: Does this method still encourage nice properties of BNNs?\nA: We think p*(w) should do the job. After all, both algorithms involve the adversarial game between the inference method and the attacker.\"}", "{\"title\": \"Is your method a principled way to train BNNs with adversaries?\", \"comment\": \"I appreciate your efforts on responding to my review and updating your paper. 
Now the extra experiments look much more relevant to the paper which is good. Still, I would like to discuss with you, on whether your method is a principled way to train BNNs with adversaries.\\n\\nLet us set aside hyper-parameter optimizations for now and assume we have selected a good prior for the weights w. In your method you only use adversarial inputs as the observation, therefore, the exact posterior is p(w |D_adv), with D_adv containing adversarial inputs crafted on all x \\\\in D.\\n\\nNote that if we can draw samples from the exact posterior p(w | D_adv), then in principle BNN requires **no training**, and in prediction time the BNN should be robust to adversarial examples that are crafted in a similar way as D_adv. So in this idealized setting, the adversarial game cannot be played between the adversaries and the exact posterior, because the exact posterior is not obtained by optimization. \\n\\nApparently in practice we cannot sample from the exact posterior, and VI does introduce optimization methods to approximate p(w | D_adv). I have no problem for optimizing a lower-bound, however, I doubt whether the underlying idea of your approach is principled. In other words, does your idea generalize to other BNN inference methods, e.g. message passing and SG-MCMC? Does your method still encourage nice properties of BNNs, e.g. calibrated uncertainty?\\n\\nI would like to see a discussion on this topic. Either you need to be more specific and say your method applies to VI-BNN only, or you need to justify why your approach is principled.\"}", "{\"title\": \"Reply\", \"comment\": \"Yes, strictly speaking the inequality may not hold for PGD attack, because it is not guaranteed to find the optimal adversarial perturbation. Thanks for your reminding!\\n\\nAlthough in our experiments on five models (no-defense, BNN, Adv. Training, RSE, Adv-BNN), we did not observe any violation of this inequality (as you can see in Fig. 3, all correlations are within range [0, 1]).\\n\\nIn the latest revision, we fixed this problem by changing \\\"correlation\\\" measure to \\\"affinity\\\" measure, and corrected the imprecise sentences.\\n\\nAgain, thanks for catching this mistake!\"}", "{\"comment\": \"Re: 2, my point was not that you should not assume you know the weights. Rather, PGD is only an approximation to the best attack. If you actually had the worst-case attack then I agree that Acc[B|A] >= Acc[B | B] but given that you are making an approximation this need not hold.\\n\\nBest,\\nSame commenter as above\", \"title\": \"response\"}", "{\"title\": \"Thanks for your comments!\", \"comment\": \"I am very glad to see your comments, both positive and negative ones.\\n\\nWe have uploaded the model checkpoints to the GitHub page, sorry we cannot disclose the github link but you may find it easily.\\n\\nAs to your other comments, here are my thoughts on that:\\n\\n1. Our deduction is an extremely simplified version to the robust optimization objective. As you can see, we disagree on whether regularizing just at the training points is a good approximation to Lipschitz regularization at the neighborhoods.\\n\\nTo me, the differences between the two regularizations are just up to higher order terms. After all, recall the PGD adversarial training sets L_inf maximum distortion to ~0.03, which is very small compared to the distance between two different images and the Taylor expansion to low order terms precisely track the original objective. 
So yes this simplified regularizer leads to \\\"gradient masking\\\", but in (Liu & Hsieh, 2018) we see even the neighborhood regularizer makes small curvature very locally, it cannot guarantee a lot to the test set.\\n\\nSo the question is ---\\\"when we only have limited number of training samples, how to guarantee a small curvature on the whole data generating distribution? \\\"\\n\\nOur claim (not this paper's claim) is that in order to guarantee the robustness on the whole distribution, both point-wise regularization and regularize across neighborhoods may not be sufficient, but of course we can argue that the latter regularization method is a better choice.\\n\\n\\n2. Sorry about the confusion, in fact, the Acc[B | B] denotes the accuracy of attacking model B with model B, both models have the same architecture and *weights*. So technically it belongs to the white-box attack. We see even if the model B performs \\\"gradient masking\\\", if we know everything inside the model, we can still easily attack it. That's why Acc[B|A] >= Acc[B | B] is always true, because white-box attack is the strongest.\\n\\nWe are aware that traditionally, Acc[B | B] should assume we only know the architecture but not weights, the reason we made such adaption is that we want to guarantee the relation above, and therefore a valid correlation measure (0<= \\\\rho <= 1).\"}", "{\"comment\": \"I had a couple comments, some positive and some negative.\\n\\nOn the positive side, I appreciate that the authors carefully test for convergence of PGD (as in Figure 4) and also perform investigations on the number of models needed in the ensemble. I found both of these results helpful to me and it raised my overall impression of the paper substantially. I also found it admirable that hyperparameter settings and detailed explanations of the attack, defense, etc. were included, which aids in reproducibility. I would be even happier if the authors made their model weights publicly available so that others can test the robustness claims.\\n\\nIn the other direction, I wanted to raise a few concerns with claims made in the paper, although I don't see these as serious issues (rather a case of uncareful writing).\", \"at_the_end_of_section_3_the_paper_claims\": \"\\\"In other words, the adversarial training can be simplified to Lipschitz regularization, and if the\\nmodel generalizes, the local Lipschitz value will also be small on the test set. Yet, as (Liu & Hsieh,\\n2018) indicates, for complex dataset like CIFAR-10, the local Lipschitz is still very large on test set,\\neven though it is controlled on training set.\\\"\\n\\nI'm not sure this is a correct take-away. The issue with Lipschitz regularization is not necessarily that it does not generalize to the test set, but that regularizing the Lipschitz constant *only at individual points* is not sufficient; rather, we want the Lipschitz constant to be small across an entire neighborhood of each of the train/test points. Regularizing the pointwise Lipschitz constant tends not to do this and instead tends to lead to \\\"gradient masking\\\" where the gradient at a given data point is uninformative due to high curvature near that point. For a stylized illustration of this, see Figure 1 of the following paper from last year's ICLR: https://arxiv.org/abs/1801.09344.\\n\\n\\\"Obviously, it is always easier to find adversarial examples through the target model itself, so we have Acc[B|A] \\u2265 Acc[B|B] \\\". 
This is not true, in fact if a model performs gradient masking often the best way to attack it is via transferring from a similar model that does not have such masking. It certainly does not hold mathematically that Acc[B | A] >= Acc[B | B].\", \"title\": \"some comments\"}", "{\"title\": \"Change list\", \"comment\": \"Below are the major differences in the revised paper:\\n\\n1. We removed section 3.3, because we agree with Reviewer 3 that this part is less relevant to the topic. Meanwhile, two new experiments are added to show the effectiveness of our method\\n\\n2. Shorten introduction part, remove unnecessary background information.\\n\\n3. Added more details when deriving the ELBO, as well as our main objective function (Eq. 7,8,9)\\n\\n4. We cited some relevant papers to support some claims, according to the useful suggestions of Reviewer 2. We also improved the organization of Section 1.1\\n\\n5. We added more details why our adversarial attack algorithm is sound, given the randomness of BNN. This is discussed in Appendix A.\\n\\n6. We replaced the python code in Algorithm 1 with pseudo code.\\n\\n7. Motivations for the transfer attack experiment\\n\\n8. Fixed the imprecise description in Section 4.2, this was noticed by an anonymous reader\\n\\n9. Fixed many typos.\"}", "{\"title\": \"Thanks for your helpful suggestions!\", \"comment\": \"We thank the reviewer for valuing our paper and giving informative suggestions, below we address your comments in detail.\", \"major_weaknesses\": \"1. Thanks for pointing out this mistake, this issue is also noticed by AnonReviewer 2 and we have already fixed it in the revised paper. And perhaps it will be clearer to think our objective function as an expectation on the original data, rather than on x_adv. Because the new objective function is a lower bound of the original ELBO.\\n\\n2. The evidence is not calculated on x_adv, but on the original data $x$. So it does not interfere with Jensen's ineq. when deriving the ELBO. I think it will be clearer to see the revised paper, where we give more details regarding the objective function.\\n\\n3. We have renamed to \\\"Bayes by Backprop\\\" in the revised version.\\n\\nWe agree with you that the local reparamterization trick has much smaller variance during the training time, replacing Bayes-by-Backprop by local reparameterization trick will definitely have a faster convergence. The reason is that we didn\\u2019t think very carefully at the implementation stage and somehow \\\"forgot\\\" it. Nevertheless, both algorithms should yield similar results and we will definitely try this idea and replace the code base.\\n\\nWe want to address that our main goal is to combine Bayesian NN with adversarial training, and there are many ways we could do the approximate inference efficiently. Here we only choose a naive approach considering its simplicity and effectiveness.\\n\\nIn the revised paper, we give readers a reminder that local reparametrization trick should perform better.\", \"minor_issues\": \"1. The original introduction includes intro, background and related work. We have split it into 2 sections and shortened each of them. We have also added more details in the proposed method section. \\n\\n2. 
Thanks for your suggestion, we rewrite the algorithm box to pseudo code in order to make it looks more formal.\"}", "{\"title\": \"Thanks for your helpful suggestions!\", \"comment\": \"Please see the revised paper as well as the change list for details, we believe that the revised paper has already addressed the issues. Below we give more details on that,\\n\\n1. The references you mentioned are indeed very relevant to our topic, we discussed some of them in the introduction section. However, we still think it does not diminish the main contributions of our paper due to the following reasons:\\n\\ni) [1] includes one small scale experiment on MNIST dataset, the goal is to show that although the Bayesian NN is still easily \\u201ccheated\\u201d by adversarial images, the uncertainty of predictions also increases. Meaning the Bayesian NN is aware of the epistemic uncertainty. And the authors explored this nice property in adversarial detection. \\n\\nSimilar to [1], the experiments in [3] are still small scale (MNIST/CIFAR10), although the paper shows that the Bayesian NN has stronger adversarial robustness than a plain NN, the authors also admit that \\u201cadversarial examples are harder to escape and be uncertain about in CIFAR10, due to higher dimension\\u201d. In contrast, our proposed AdvBNN has made a huge progress in adversarial robustness: the accuracy under strong adversarial attack algorithm on even more complex, high dimensional datasets is much higher than baselines (including the Bayesian NN).\\n\\nii) [2] and [4] are both on adversarial detection, while our focus is the adversarial defense, these are similar topics but different scenarios.\\n\\nYes, perhaps it is not very suitable to call \\\"BNN with factorized gaussian as approximated posterior\\\" simply as BNN, because it does not include the previous works on BNN + adversarial attacks. But it is very straightforward to extend our work to include other inference methods.\\n\\nI think the major contribution of our work is that we show Bayesian neural networks empower the robustness of adversarially trained neural networks. Moreover, we demonstrate that even the most simple approximate inference method can benefit a lot to model robustness, and our method scales easily to large datasets (not just MNIST).\\n\\n\\n2. In fact we already assumed the attacker knows the structure of BNN in our setting (using the same approach in Carlini and Wagner (2017a) and Athalye et al). We briefly mentioned this in Section 3.1 in the initial version, and we have added more details in the revised version (see Appendix). Therefore, as you can see in our Figure 2 that BNN has a very low accuracy under attack in all datasets, which does not contradict to Athalye et al. We also use the same attack (assume the adversary knows every details of model) to test the robustness for the proposed AdvBNN model. Our conclusion is that BNN itself does not help much, but using the proposed framework, one can combine the idea of BNN with adversarial training to achieve much better robustness. \\n\\nAthalye et al. does not negate the effectiveness of adversarial training, for detailed information, please refer to their Github page: https://github.com/anishathalye/obfuscated-gradients, there is a table comparing the performance of different methods, among them, the adversarial training (Madry et al) has a pretty good accuracy.\\n\\n\\n3. 
We are not quite sure if we understand your point, do you mean the actual objective function should be\\n \\\\min_{||\\\\delta_x||} \\u2026. \\\\max_{q}.... \\nwhile our objective function is \\n \\\\max_{q} \\u2026\\u2026 \\\\min_{||\\\\delta_x||} \\u2026\\u2026\\nand so you think Eq 7 is an lower bound of your equation?\\n\\nOur objective function Eq 7 is indeed a lower bound of your proposed equation, this is because we are maximizing the \\u201cworst case\\u201d evidence lower bound. So the \\\\max_{q} should be moved to the leftmost position. \\n\\nIn summary, in training the model, we need to do\\n \\\\max_q \\\\min_{ ||\\\\delta_x|| < \\\\gamma } log p(D_adv),\\n where log p(D_adv) = \\\\log \\\\int \\\\prod_{(x, y) \\\\sim D_tr} p(y|x + \\\\delta_x, w) p(w) dw\\nThis can be further simplified to our objective function. \\n\\nWe have added more details in the revised paper to make it clearer.\\n\\n\\n4. Sorry about the confusion, we also think section 3.3 is diverged from the main topic, in the revised paper, we replaced this experiment with other controlled experiments. We hope these experiments can strengthen our findings.\"}", "{\"title\": \"Thanks for your helpful suggestions!\", \"comment\": \"Please see the revised paper as well as the change list for details, below we address your comments. We find your comments very informative and we absorbed most of them in the new version.\\n\\n\\n1. We revised Section 1.1 following your suggestions. Specifically, we merged the PGD attack into the Attack part, and we also modified the defense part in the same way.\\n\\n2. Thanks for pointing out this mistake, we agree that we left HMC behind when writing the initial draft, we modified this sentence as suggested.\\n\\n3. (a) We are indeed meant to it, we changed a lot to eq. (7) in response to the suggestions of all reviewers, I hope the revised version is clearer.\\n (b) We added more details why the new objective function is still an ELBO in the updated version, briefly speaking, we made a lower bound of the original ELBO, and the lower bound of ELBO is still an evidence lower bound. \\n (c) Good point, we modified the expression in the revision, thanks for pointing out.\\n (d) It is a very good suggestion, adding an error term makes our model more general to both regression and classification problems, thanks!\\n\\n4. We added a brief introduction to Bayes by Backprop. The space is really limited so forgive us if you find this part hands-waving.\\n\\n5. We added more citations to support our claims\\n\\n6. We gave the motivation of this experiment in section 4.2. \\n The goal of this experiment is to test the robustness under black-box attack, specifically we answer the question: \\u201cHow does the Adv-BNN perform under transfer attack from other models?\\u201d and the key finding is our AdvBNN model is also very robust to blackbox attack, no matter which the source model is. Blackbox defense is also a very important task because in reality, attackers may not have access to the target model.\\n\\n7. We agree with Reviewer 2 that Section 3.3 is not necessary and not quite relevant to the main point of this paper, so we removed this subsection. Instead, we added two other experiments aiming at showing the sample efficiency as well as the robustness of our model.\\n\\n8 ~ 12. Thanks for pointing out our mistakes, we fixed all the typos and unclear parts as suggested.\\n\\n\\nAbout your minor points\\n---------------------------------\\n1, 2, 3, 4, 6: Thanks for pointing out our typos! 
We fixed all of them in the revised paper.\", \"5\": \"I think Eq. (12) should be the plus sign, because we are doing the Taylor expansion: f(x+\\\\delta, w) ~ f(x)+\\\\delta^T \\\\nabla f(x) + ...\"}", "{\"title\": \"Interesting contribution\", \"review\": \"After feedback: I would like to thank the authors for careful revision of the paper and answering and addressing most of my concerns. From the initial submission my main concern was clarity and now the paper looks much more clearer.\\n\\nI believe this is a strong paper and it represents an interesting contribution for the community.\", \"still_things_to_fix\": \"a) a dataset used in 4.2 is not stated\\nb) missing articles, for example, p.5 \\\".In practice, however, we need a weaker regularization for A small dataset or A large model\\\"\\nc) upper case at the beginning of a sentence after question: p.8 \\\"Is our Adv-BNN model susceptible to transfer attack? we answer\\\" - \\\"we\\\" -> \\\"We\\\"\\n====================================================================================\\n\\nThe paper proposes a Bayesian neural network with adversarial training as an approach for defence against adversarial attacks.\", \"main_pro\": \"It is an interesting and reasonable idea for defence against adversarial attacks to combine adversarial training and randomness in a NN (bringing randomness into a new level in the form of a BNN), which is shown to outperform both adversarial training and random NN alone.\", \"main_con\": \"Clarity. The paper does not crucially lack clarity but some claims, general organisation of the paper and style of quite a few sentences can be largely improved.\\n\\nIn general, the paper is sound, the main idea appears to be novel and the paper addresses the very important and relevant problem in deep learning such as defence against adversarial attacks. Writing and general presentation can be improved especially regarding Bayesian neural networks, where some clarity issues almost become quality issues. Style of some sentences can be tuned to more formal.\", \"in_details\": \"1. The organisation of Section 1.1 can be improved: a general concept \\\"Attack\\\" and specific example \\\"PGD Attack\\\" are on the same level of representation, while it seems more logical that \\\"PGD Attack\\\" should be a subsection of \\\"Attack\\\". And while there is a paragraph \\\"Attack\\\" there is no paragraph \\\"Defence\\\" but rather only specific examples\\n2. The claim \\u201cwe can either sample w \\u223c p(w|x, y) efficiently without knowing the closed-form formula through the method known as Stochastic Gradient Langevin Dynamics (SGLD) (Welling & Teh, 2011)\\u201d sounds like SGLD is the only sampling method for BNN, which is not true, see, e.g., Hamiltonian Monte Carlo (Neal\\u2019s PhD thesis 1994). It is better to be formulated as \\\"through, for example, the method ...\\\"\\n3. Issues regarding eq. (7):\\n a) Why there is an expectation over (x, y)? There should be the joint probability of all (x, y) in the evidence.\\n b) Could the authors add more details about why it is the ELBO given that it is unconventional with adversarial examples added?\\n c) It seems that it should be log p(y | x^{adv}, \\\\omega) rather than p(x^{adv}, y | \\\\omega). \\n d) If the authors assume noise component, i.e., y = f(x; \\\\omega) + \\\\epsilon, then they do not need to have a compulsory Softmax layer in their network, which is important, for example, for regression models. 
Then the claim \\u201cour Adv-BNN method trains an arbitrary Bayesian neural network\\u201d would be more justified\\n4. It would make the paper more self-contained if the Bayes by Backprop algorithm would be described in more details (space can be taken from the BNN introduction). And it seems to be a typo that it is Bayes by Backprop rather than Bayes by Prop\\n5. There are missing citations in the text:\\n a) no models from NIPS 2017 Adversarial Attack and Defence competition (Kurakin et al. 2018) are mentioned\\n b) citation to justify the claim \\u201cC&W attack and PGD attack (mentioned below) have been recognized as\\ntwo state-of-the-art white-box attacks for image classification task\\u201d\\n c) \\u201cwe can approximate the true posterior p(w|x, y) by a parametric distribution q_\\u03b8(w), where the unknown parameter \\u03b8 is estimated by minimizing KL(q_\\u03b8(w) || p(w|x, y)) over \\u03b8\\u201d - there are a variety of works in approximate inference in BNN, it would be better to cite some of them here\\n d) citation to justify the claim \\\"although in these cases the KL-divergence of prior and posterior is hard to compute and practically we replace it with the Monte-Carlo estimator, which has higher variance, resulting in slower convergence rate.\\u201d\\n6. The goal and result interpretation of the correlation experiment is not very clear\\n7. From the presentation of Figure 4 it is unclear that this is a distribution of standard deviations of approximated posterior.\\n8. \\u201cTo sum up, our Adv-BNN method trains an arbitrary Bayesian neural network with the adversarial examples of the same model\\u201d \\u2013 unclear which same model is meant\\n9. \\\"Among them, there are two lines of work showing effective results on medium-sized convolutional networks (e.g., CIFAR-10)\\\" - from this sentence it looks like CIFAR-10 is a network rather than a dataset\\n10. In \\\"Notations\\\" y introduction is missing\\n11. It is better to use other symbol for perturbation rather than \\\\boldsymbol\\\\delta since \\\\delta is already used for the Dirac delta function\\n12. \\u201cvia tuning the coefficient c in the composite loss function\\u201d \\u2013 the coefficient c is never introduced\", \"minor\": \"1. There are a few missing articles, for example, in Notations, \\u201cIn this paper, we focus on the attack under THE norm constraint\\u2026\\u201d\\n2. Kurakin et al. (2017) is described in the past tense whereas Carlini & Wagner (2017a) is described in the present tense\\n3. Inner brackets in eq. (2) are bigger than outer brackets\\n4. In eq. (11) $\\\\delta$ is not bold\\n5. In eq. (12) it seems that the second and third terms should have \\u201c-\\u201d rather than \\u201c+\\u201d\\n6. Footnote in page 6 seems to be incorrectly labelled as 1 instead of 2\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"An approach that can work well in practice, but not principled\", \"review\": \"I have read the feedback and discussed with the authors on my concerns for a few rounds.\\n\\nThe revision makes much more sense now, especially by removing section 3.3 and replacing it with more related experiments.\\n\\nI have a doubt on whether the proposed method is principled (see below discussions). The authors responded honestly and came up with some other solution. 
A principled approach of adversarially training BNNs is still unknown, but I'm glad that the authors are happy to think about this problem. \\n\\nI have raised the score to 6. I wouldn't mind seeing this paper accepted, and I believe this method as a practical solution will work well for VI-based BNNs. But again, this score \\\"6\\\" reflects my opinion that the approach is not principled.\\n\\n=========================================================\\n\\nThank you for an interesting read.\\n\\nThe paper proposes training a Bayesian neural network (BNN) with adversarial training. To the best of my knowledge the idea is new (although from my perspective is quite straight-forward, but see some discussions below). The paper is well written and easy to understand. Experimental results are promising, but I don't understand how the last experiment relates to the main idea, see comments below.\", \"there_are_a_few_issues_to_be_addressed_in_revision\": \"1. The paper seems to have ignored many papers in BNN literature on defending adversarial attacks. See e.g. [1][2][3][4] and papers citing them. In fact robustness to adversarial attacks is becoming a standard test case for developing approximate inference on Bayesian neural networks. This means Figure 2 is misleading as in the paper \\\"BNN\\\" actually refers to BNN with mean-field variational Gaussian approximations.\\n\\n2. Carlini and Wagner (2017a) has discussed a CW-based attack that can increase the success rate of attack on (dropout) BNNs, which can be easily transferred to a corresponding PGD version. Essentially the PGD attack tested in the paper does not assume the knowledge of BNN, let alone the adversarial training. This seems to contradict to the pledge in Athalye et al. that the defence method should be tested against an attack that is aware of the defence.\\n\\n3. I am not exactly sure if equation 7 is the most appropriate way to do adversarial training for BNNs. From a modelling perspective, if we can do Bayesian inference exactly, then after marginalisation of w, the model does NOT assume independence between datapoints. This means if we want to attack the model, then we need to do \\n\\\\min_{||\\\\delta_x|| < \\\\gamma} log p(D_adv), \\nD_adv = {(x + \\\\delta_x, y) | (x, y) \\\\sim \\\\sim D_tr},\\nlog p(D_adv) = \\\\log \\\\int \\\\prod_{(x, y) \\\\sim D_tr} p(y|x + \\\\delta_x, w) p(w) dw.\\nNow the model evidence log p(D_adv) is intractable and you resort to variational lower-bound. But from the above equation we can see the lower bound writes as\\n\\\\min_{||\\\\delta_x|| < \\\\gamma} \\\\max_{q} E_{q} [\\\\sum_{(x, y) \\\\sim D_tr} \\\\log p(y|x + \\\\delta_x, w) ] - KL[q||p],\\nwhich is different from your equation 7. In fact equation 7 is a lower-bound of the above, which means the adversaries are somehow \\\"weakened\\\".\\n\\n4. I am not exactly sure the purpose of section 3.3. True, that variational inference has been used for compressing neural networks, and the experiment in section 3.3 also support this. However, how does network pruning relate to adversarial robustness? I didn't see any discussion on this point. Therefore section 3.3 seems to be irrelevant to the paper.\\n\\nSome papers on BNN's adversarial robustness:\\n[1] Li and Gal. Dropout Inference in Bayesian Neural Networks with Alpha-divergences. ICML 2017\\n[2] Feinman et al. Detecting Adversarial Samples from Artifacts. arXiv:1703.00410\\n[3] Louizos and Welling. Multiplicative Normalizing Flows for Variational Bayesian Neural Networks. 
ICML 2017 \\n[4] Smith and Gal. Understanding Measures of Uncertainty for Adversarial Example Detection. UAI 2018\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"A nice paper that bridges adversarial training and Bayesian neural nets\", \"review\": \"The paper extends the PGD adversarial training method (Madry et al., 2017) to Bayesian Neural Nets (BNNs).\\nThe proposed method defines a generative process that ties the prediction output and the adversarial input \\npattern via a set of shared neural net weights. These weights are then assinged a prior and \\nthe resultant posterior is approximated by variational inference.\", \"strength\": [\"The proposed approach is incremental, but anyway novel.\", \"The results are groundbreaking.\", \"There are some technical flaws in the way the method has been presented,\", \"but the rest of the paper is very well-written.\"], \"major_weaknesses\": [\"Equation 7 does not seem to be precise. First, the notation p(x_adv, y | w) is severely misleading. If x_adv is also an input, no matter if stochastic or deterministic, the likelihood should read p(y | w, x_adv). Furthermore, if the resultant method is a BNN with an additional expectation on x_adv, the distribution employed on x_adv resulting from the attack generation process should also be written in the form of the related probability distribution (e.g. N(x_adv|x,\\\\sigma)).\", \"Second, the constraint that x_adv should lie within the \\\\gamma-ball of x has some implications on the validity of\", \"the Jensen's inequality, which relates Equation 7 to proper posterior inference.\", \"Blundell et al.'s algorithm should be renamed to \\\"Bayes-by-BACKprop\\\". This is also an outdated inference technique for quite many scenarios including the one presented in this paper. Why did not the authors benefit from the local reparametrization trick that enjoy much lower estimator variance? There even emerge sampling-free techniques that nullify this variance altogether and provide much more stable training experience.\"], \"and_some_minor_issues\": [\"The introduction part of paper is unnecessarily long and the method part is in turn too thin. As a reader, I would prefer getting deeper into the proposed method instead of reading side material which I can also find in the cited articles.\", \"I do symphathize and agree that Python is a dominant language in the ML community. Yet, it is better scientific writing practice to provide language-independent algorithmic findings as pseudo-code instead of native Python.\", \"Overall, this is a solid work with a novel method and very strong experimental findings. Having my grade discounted due to the technical issues I listed above and the limitedness of the algorithmic novelty, I still view it as an accept case.\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}" ] }
ryf7ioRqFX
h-detach: Modifying the LSTM Gradient Towards Better Optimization
[ "Bhargav Kanuparthi", "Devansh Arpit", "Giancarlo Kerg", "Nan Rosemary Ke", "Ioannis Mitliagkas", "Yoshua Bengio" ]
Recurrent neural networks are known for their notorious exploding and vanishing gradient problem (EVGP). This problem becomes more evident in tasks where the information needed to correctly solve them exists over long time scales, because EVGP prevents important gradient components from being back-propagated adequately over a large number of steps. We introduce a simple stochastic algorithm (\textit{h}-detach) that is specific to LSTM optimization and targeted towards addressing this problem. Specifically, we show that when the LSTM weights are large, the gradient components through the linear path (cell state) in the LSTM computational graph get suppressed. Based on the hypothesis that these components carry information about long term dependencies (which we show empirically), their suppression can prevent LSTMs from capturing them. Our algorithm\footnote{Our code is available at https://github.com/bhargav104/h-detach.} prevents gradients flowing through this path from getting suppressed, thus allowing the LSTM to capture such dependencies better. We show significant improvements over vanilla LSTM gradient-based training in terms of convergence speed, robustness to seed and learning rate, and generalization using our modification of the LSTM gradient on various benchmark datasets.
[ "LSTM", "Optimization", "Long term dependencies", "Back-propagation through time" ]
https://openreview.net/pdf?id=ryf7ioRqFX
https://openreview.net/forum?id=ryf7ioRqFX
ICLR.cc/2019/Conference
2019
{ "note_id": [ "BkgprCHZEV", "BygPNUrZ4V", "rJlMyDSbGV", "B1xmq2H5bV", "SylZld4tZ4", "HylFpbzsy4", "rkeMoIKYC7", "rkexWojBpX", "B1xTkqsS6Q", "rkxqBUoHpQ", "BygbhKulp7", "H1g48OS5h7", "Syliw--DnX" ], "note_type": [ "official_comment", "comment", "comment", "official_comment", "comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1548996164688, 1548994095129, 1546897114158, 1546439818936, 1546369000773, 1544393152634, 1543243418014, 1541942008181, 1541941732879, 1541940801736, 1541601705086, 1541195852370, 1540981091255 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper609/Authors" ], [ "~Shuai_Li5" ], [ "~Aniket_Rajiv_Didolkar1" ], [ "ICLR.cc/2019/Conference/Paper609/Authors" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper609/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper609/Authors" ], [ "ICLR.cc/2019/Conference/Paper609/Authors" ], [ "ICLR.cc/2019/Conference/Paper609/Authors" ], [ "ICLR.cc/2019/Conference/Paper609/Authors" ], [ "ICLR.cc/2019/Conference/Paper609/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper609/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper609/AnonReviewer1" ] ], "structured_content_str": [ "{\"title\": \"Citation is already present\", \"comment\": \"Citation to this paper was added in our final submission.\"}", "{\"comment\": \"In addition to the uRNN series of works, the recent IndRNN (Independently recurrent neural network) also addresses the gradient exploding and vanishing problems. Experiments have also shown its effective performance in solving problems concerning long-term dependency (even up to 5000 timesteps). Also it shows a great advantage in constructing deep RNN networks (easily over 20 layers).\\n[1] S. Li, W. Li, C. Cook, C. Zhu, Y. Gao, \\u201cIndependently Recurrent Neural Network (IndRNN): Building A Longer and Deeper RNN,\\u201d IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, Jun. 18-22, 2018.\\n\\n\\nThanks.\", \"title\": \"related work on solving gradient exploding and vanishing problems\"}", "{\"comment\": \"As a part of the ICLR 2019 reproducibility challenge, we worked to reproduce the results of this paper (h-detach). The code for reproducing the results was provided by the authors. We perform some of the experiments mentioned in the paper. A link to the full report and the codebase can be found at the end of this message.\\n\\nThe authors tackle the exploding and vanishing gradient problem(EVGP) in the LSTM. The authors say that when the weights of the LSTM are large, due to their repeated multiplication, the gradients through the cell state path get suppressed. The authors empirically prove that this path carries information about long-term dependencies. The authors have also provided theoretical proof of their claim. \\n\\nThe authors have proposed a stochastic algorithm to mitigate the above mentioned problems. The main motive of the authors is to prove that training an LSTM with h-detach results in faster convergence and more stability during training. We performed experiments for the copying task, sequential mnist task and our results were able to confirm the claims made by the author. We also performed the experiments mentioned in the ablation study and were able to get similar results as the authors.\\n\\nThe authors have given information on how they chose the detach probability for the sequential mnist task. 
They have also mentioned, in a reply to AnonReviewer3 below, that they tried different values of the probability and found that values between 0.25 and 0.50 work best. They have sufficiently experimented with their algorithm on different learning rates and seeds. It would also be interesting to see how the performance of h-detach is affected by a change in batch size.\\n\\nIn conclusion, we have validated the claims of the authors through our experiments that using h-detach stabilizes training, leads to faster convergence and is robust to different seeds and learning rates.\", \"codebase\": \"https://github.com/dido1998/h-detach\", \"report\": \"https://drive.google.com/drive/folders/1EtEYBS0LRRGsCwhxiXwfDY2ndsgCJPpc?usp=sharing\", \"title\": \"Reproducibility study of h-detach: Modifying the LSTM Gradient Towards Better Optimization\"}", "{\"title\": \"Thank you!\", \"comment\": \"Thank you for the references. We will review and add them in our final version.\"}", "{\"comment\": \"The simplicity of the method and its effectiveness are very impressive, and the evaluation on multiple datasets seems to validate the claims.\\n\\nThe only concern I have is about the related work on the stability of RNN training. Since Arjovsky et al. (ICML 2016), there have been many papers along these lines that have improved the techniques even further, the latest being Zhang et al. (ICML 2018). The other works on Unitary RNNs include Wisdom et al. (NeurIPS 2016), Mhammedi et al. (ICML 2017), Jing et al. (ICML 2017), Vorontsov et al. (ICML 2017), and Jose et al. (ICML 2018). Another slightly orthogonal work (moving away from unitary methods) on the stability of RNNs seems to be Kusupati et al. (NeurIPS 2018). \\n\\nAdding the relevant literature (from above) about stabilizing gradients for RNN training for better and faster optimization will make the related work much more comprehensive, given it falls into a similar class as the current paper.\\n\\nThanks.\", \"title\": \"Interesting approach and nice experimentation. Could have more comprehensive related work.\"}", "{\"metareview\": \"This paper presents a method for preventing exploding and vanishing gradients in LSTMs by stochastically blocking some paths of the information flow (but not others). Experiments show improved training speed and robustness to hyperparameter settings.\\n\\nI'm concerned about the quality of R2, since (as the authors point out) some of the text is copied verbatim from the paper. The other two reviewers are generally positive about the paper, with scores of 6 and 7, and R1 in particular points out that this work has already had noticeable impact in the field. While the reviewers pointed out some minor concerns with the experiments, there don't seem to be any major flaws. I think the paper is above the bar for acceptance.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"a simple but well-motivated trick for stabilizing LSTM optimization\"}", "{\"title\": \"Thank you\", \"comment\": \"Thank you for going through the revised version and re-evaluating our paper. We have also added the new citation you provided in the discussion section. We are grateful for your constructive comments.\"}", "{\"title\": \"Rebuttal\", \"comment\": \"Thank you for your comments.\\n\\nGiven the superficial similarity, we agree that it warrants a discussion of the relation between dropout and our proposed method. The two methods are fundamentally different. 
Dropout randomly masks the hidden units of a network during the forward pass. Therefore, a common view of dropout is that it trains an ensemble of networks. On the other hand, our method does not mask the hidden units during the forward pass. It instead randomly blocks the gradient component through the h-states (and not the cell state, so we block a specific component instead of randomly choosing any component) of the LSTM only during the backward pass; our method does not change the output of the network during the forward pass. Our theoretical analysis shows the precise behavior of our method: the effect of this operation is that it changes the update direction used for descent, which prevents the gradient components through the cell state path from being suppressed (which we show are important for tasks involving longer term dependencies). We have added this discussion in the revised version in section 5.\\n\\nThe transfer copy task is a commonly used benchmark task for evaluating how good a recurrent model is at retaining information over large time scales. Therefore we report numbers on this task purely for this reason. Our goal and the proposed method otherwise have nothing to do with transfer learning.\\n\\nWe would also like to point out that the main benefits of our algorithm, which modifies the LSTM update direction (and for which we provide theoretical analysis), are that it leads to improvements in convergence speed, robustness to seed and learning rate, and generalization compared to the usual LSTM updates.\\n\\nWe have revised our manuscript. We hope to have addressed your concerns.\"}", "{\"title\": \"Rebuttal\", \"comment\": \"We thank you for your insightful comments.\\n\\nWe provided a theoretical analysis showing that the gradient through the cell state (A_t) gets suppressed when the gradient through the h-states (B_t) is larger in magnitude (Theorems 1 and 2 and the discussions around them).\\nWe have indeed provided empirical support for this claim. In the ablation studies, we show that blocking the gradients through the cell states results in extremely poor performance of LSTMs on both pixel MNIST and the copying task. On the other hand, the use of our method on these tasks, which stochastically blocks the gradients through the h-states of the LSTM, results in faster convergence. In the former case, the theory guarantees that B_t overwhelms A_t, while in the latter case A_t becomes comparable to B_t.\\n\\nYour insights are perfectly correct. In order to damp the gradient components of B_t, we can indeed multiply B_t by a constant factor during backpropagation or regularize the weights of the h-state path to be small. We have added these remarks as future work in the revised paper in section 5.\\n\\nFor the MNIST task, when training a model with a very small p=0.001, the convergence was quite slow and the final model was worse than the baseline. Further, in our internal experiments, we tried detach probability p with values 0.1, 0.25, 0.4, 0.5, 0.75 and 0.9. We found that values between 0.25 and 0.5 usually had the best performance, and so we used values in this interval for our hyper-parameter search.\\n\\nThe peephole LSTM makes all the gates depend on the previous cell state in addition to the h-state and the current input. The computational graph of the peephole LSTM will have an edge pointing from C(t-1) to h(t) in Fig. 1 of our paper. 
Hence, at least intuitively, we believe it will not be able to prevent the gradient component through the cell state path from being suppressed, because the gradient component through the other paths will still grow polynomially as the magnitude of the recurrent weights grows.\\n\\nRegarding the improvement in SOTA, we believe that the main benefit of our method is improvement in training stability, convergence and robustness to the seed for tasks where the cell states carry important information about the task. For instance, it has been shown that recurrent networks are sensitive to the randomness in initialization (\\\"seed\\\" in coding terminology) [1]. In our paper, we reported experiments on various seeds and learning rates showing these aforementioned benefits (Fig. 2, 3, 4, 6, 8 in the revised version). Additionally, our goal was not to compete with existing SOTA algorithms, which we believe may also benefit from our method when used in conjunction. Our goal was rather to investigate and alleviate the source of the problem that makes the training of LSTMs unstable and sensitive to the seed when training on tasks where the cell states carry important information (such as tasks involving long-term dependencies).\\n\\n[1] On the State of the Art of Evaluation in Neural Language Models\"}", "{\"title\": \"Rebuttal\", \"comment\": \"We highly appreciate your constructive comments and the missing citations you provided.\\n\\nThank you for bringing up the point that image captioning does not fit the profile of a task involving long-term dependencies. We believe the reason why our method leads to improvements for the image captioning task is that the gradient components from the cell state path are important for this task. As our theoretical analysis of h-detach shows, it prevents these components from getting suppressed compared with the gradient components from the h-state paths. Since the obvious target for our method was tasks involving long-term dependencies, we used it as our main pitch. We have revised the paper with these comments.\\nAlso, we did try our method on language modeling tasks but we did not find any benefit in these cases. We have added this detail in the discussion section 5 of the revised version.\\n\\nRecurrent batch normalization is indeed beneficial for training LSTMs. However, as the reviewer pointed out, it adds computation overhead and its implementation is quite involved (and also adds dependence on mini-batch statistics). Our method, on the other hand, reduces the computation needed for the vanilla LSTM and is very simple to implement, while improving the convergence speed and robustness over traditional LSTM updates.\\n\\nFor a discussion of the difference between dropout and h-detach, please see our reply to AnonReviewer 2. We understand that the version of dropout referred to by the reviewer is different from the original dropout technique. But the difference we have stated applies to this version of dropout as well.\\n\\nWe thank the reviewer for pointing out the earlier manuscript that noticed the vanishing gradient problem. We, the main authors of the paper, were not aware of this, especially given that the manuscript is not in English. We have cited the thesis at all places in the paper when referring to vanishing gradients in the revised version (introduction and related work sections).\\n\\nWe have changed the sentence saying GRU is a variant of LSTM with forget gates. We have also pointed out that LSTMs are more powerful compared with GRUs, along with the citation mentioned by the reviewer. 
These changes have been added in the introduction section of the revised version.\\n\\nFinally, we would like to point out that the main benefits of our simple algorithm for modifying the LSTM update direction (for which we provide theoretical analysis) are that it leads to improvements in convergence speed, robustness to seed and learning rate, and generalization, as shown in Fig. 2, 3 and 6.\\n\\nWe hope we have addressed your concerns.\"}", "{\"title\": \"Interesting but there are some technical details missing\", \"review\": \"The authors introduce a simple stochastic algorithm (h-detach) that is specific to LSTM optimization and targeted towards addressing this problem. Specifically, the authors show that when the LSTM weights are large, the gradient components through the linear path (cell state) in the LSTM computational graph get suppressed. Based on the hypothesis that these components carry information about long term dependencies (which we show empirically), their suppression can prevent the LSTM from capturing them. Our algorithm prevents the gradients flowing through this path from getting suppressed, thus allowing the LSTM to capture such dependencies better. The experimental results show that the proposed algorithm appears to be effective. Some detailed comments are listed as follows.\\n\\n1 The h-detach algorithm seems to be the dropout technique. However, the authors did not discuss the relation or difference between the proposed h-detach algorithm and the dropout technique. \\n\\n2 The proposed method can transfer the positive knowledge. However, for transfer learning, one important concern is that some negative knowledge can also be transferred. So how can negative transfer be avoided? Some necessary discussion of this should be given in the manuscript.\\n\\n3 There are many grammar errors in the current manuscript. The authors are suggested to improve the English writing.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Review\", \"review\": \"In this paper, the authors propose a simple modification to the training process of the LSTM. The goal is to facilitate gradient propagation along cell states, or the \\\"linear temporal path\\\". It blocks the gradient propagation of hidden states with a probability of $1-p$ independently. The proposed method is evaluated on the copying task, sequential MNIST task, and image captioning tasks. The performance is slightly boosted on those tasks.\\n\\nThe paper is well-written. The h-detach method is very simple and easy to implement. It seems novel in dealing with the trainability issue of recurrent networks. Since the LSTM is very commonly used, if the method is proved to be effective on other tasks, it will potentially benefit a large portion of the community. However, the reviewer thinks the paper is not sufficiently motivated, and the quality of the paper could be further improved by conducting a more thorough analysis of the proposed method and discussing the connection with other existing methods.\\n\\nAs the motivation of the work, the authors seem to claim that if the magnitude of $B_t$ is much bigger than that of $A_t$, then the backpropagation will be problematic. Is there any theoretical or empirical support for this claim?\\n\\nIn order to damp the gradient component of $B_t$, it does not have to be stochastic. 
Can we simply multiply the matrix $B_t$ by a constant factor $p$ during backpropagation? Or regularize the weights $W_{*h}$ to be small so that $\\phi_t$ and $\\tilde\\phi_t$ are small?\\n\\nIt would be interesting to study the effect of the probability $p$ and to suggest an \\\"optimal\\\" choice of $p$, either theoretically or empirically. Is it still possible to train the model with a very small $p$?\\n\\nThe h-detach method seems to have a flavor of dropout, but the \\\"dropout\\\" only happens during backpropagation. The design goal also resembles the peephole LSTM, that is, to disentangle the cell state and the hidden state. Are there any possible connections between the proposed method, dropout, and the peephole LSTM?\\n\\nThe reviewer understands that a one percent difference in the accuracy on MNIST is probably not very meaningful, but it seems that the SOTA performance on pMNIST is at least 94.1% [1].\\n\\n[1] Scott Wisdom, Thomas Powers, John Hershey, Jonathan Le Roux, and Les Atlas. Full-capacity unitary recurrent neural networks. NIPS, 2016.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Intriguing results. But don't similar methods achieve similar things with similar mechanisms?\", \"review\": \"The results are intriguing. However, similar methods like BN-LSTM [3] and Variational RNNs [4] achieve arguably the same with very similar mechanisms. We do not think they can be considered as orthogonal. This should be addressed by the authors. Also, hard long-term experiments like sequentially predicting pixels (e.g. through MDLSTM-based PixelRNN) or language modelling should be favoured over short-sentence image captions.\\n\\nIt is possible that we will improve our ratings once our concerns are addressed.\", \"paper_summary\": \"The authors claim that the gradient along the computational path that goes through the cell state (the linear temporal path or A gradient) of an LSTM carries information about long-term dependencies. Those gradients can be corrupted by the gradients of all other computational paths (i.e. the B gradient). They claim that this makes it hard to learn long-term dependencies and has, therefore, significant negative effects on the convergence speed, training stability, and generalisation performance. They propose a method called h-detach and run experiments on the copying task, sequential MNIST task, permuted sequential MNIST (pMNIST), and caption generation on the MS COCO dataset. All show either somewhat improved performance or much more stable learning curves. At every step, h-detach randomly drops all gradients that flow through the h of the standard LSTM, the B gradients, and only keeps the ones from the linear temporal path, the A gradients. Experiments also suggest that the A gradients carry more long-term information than the B gradients and that LSTMs with h-detach do not need gradient clipping for successful training.\", \"positive\": \"The paper is written clearly. It is well structured and well motivated. H-detach is simple, effective, and somewhat novel (see below). 
Experiments indicate that its main benefit is training stability as well as minor performance improvements.\", \"negative\": \"\", \"we_are_not_sure_how_significant_these_results_are_for_the_following_reasons\": [\"MS COCO image caption generation is the only more challenging dataset, but it seems a bit misplaced as it has very short sentences, while the authors motivate their work through a focus on long-term dependencies. Why not apply h-detach to a language model such as [1] with official online implementations, e.g., [2]?\", \"The purpose of h-detach is to scale down the B gradients. However, methods which apply e.g. BatchNorm to the hidden state learn a scale parameter which could be learned by the network explicitly. For the backward pass, this has the effect of scaling down the B gradient. Consider e.g. [3], which also achieves similar training stability on sequential MNIST and pMNIST with little overhead.\", \"Another very related method is [4], which properly applies a random dropout mask over the recurrent inputs that is shared across timesteps of an RNN. We think that h-detach is essentially achieving the same in a similar way.\"], \"problems_with_introduction_and_related_work_section\": \"- The vanishing gradient problem was first described by Hochreiter in 1991 [5] (not by Bengio in 1994). \\n\\n- Intro mentions GRU as if it were separate from LSTM. Clarify that GRU is essentially a variant of vanilla LSTM with forget gates [8]. Since one gate is missing, GRU is less powerful than the original LSTM [9]. \\n\\n[1] Zaremba et al. \\\"Recurrent neural network regularization.\\\" arXiv:1409.2329 (2014).\\n[2] https://www.tensorflow.org/tutorials/sequences/recurrent\\n[3] Cooijmans et al. \\\"Recurrent batch normalization.\\\" arXiv:1603.09025 (2016).\\n[4] Gal et al. \\\"A theoretically grounded application of dropout in recurrent neural networks.\\\" NIPS 2016.\\n[5] Hochreiter, Sepp. \\\"Untersuchungen zu dynamischen neuronalen Netzen.\\\" Diploma thesis, TUM (1991).\\n[6] Oord et al. \\\"Pixel recurrent neural networks.\\\" arXiv:1601.06759 (2016).\\n[7] Graves et al. \\\"Multi-Dimensional Recurrent Neural Networks.\\\" arXiv:0705.2011 (2007).\\n[8] Gers et al. \\\"Learning to Forget: Continual Prediction with LSTM.\\\" Neural Computation, 12(10):2451-2471, 2000. \\n[9] Weiss et al. \\\"On the Practical Computational Power of Finite Precision RNNs for Language Recognition.\\\" arXiv:1805.04908.\", \"comments_after_rebuttal\": \"The paper has clearly improved. \\n\\nIt leaves a few questions open though. For example, it is surprising that h-detach doesn't work on language modelling, since Dropout-LSTM and BN-LSTM clearly improve over vanilla LSTM in this case (if not every case). In the new version, the authors only reference it in one or two sentences but don't discuss this in detail. \\n\\nWhen dropout is mentioned, one should also mention that dropout is a variant of the old stochastic delta rule:\\n\\nHanson, S. J. (1990). A Stochastic Version of the Delta Rule. Physica D, 42, 265-272. See also arXiv:1808.03578.\\n\\nNevertheless, we now think that this is a very interesting LSTM regularization paper that people who study this field should probably know. 
We are increasing the score by 2 points!\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}" ] }
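To make the h-detach recurrence discussed throughout the record above concrete, here is a minimal sketch. It is a reimplementation written for this collection, not the authors' released code (which is at the GitHub URL in the abstract), and the default p_detach=0.25 is an assumed value taken from the 0.25-0.5 range the authors report working best.

import torch
import torch.nn as nn

class HDetachLSTM(nn.Module):
    # Vanilla LSTM whose backward pass is modified once per timestep: with
    # probability p_detach the gradient path through h_{t-1} is blocked for
    # that step, while the cell-state path c_{t-1} -> c_t is never touched.
    # Since detach() only affects backpropagation, the forward computation
    # is identical to a standard LSTM.
    def __init__(self, d_in, d_hid, p_detach=0.25):
        super().__init__()
        self.cell = nn.LSTMCell(d_in, d_hid)
        self.p_detach = p_detach

    def forward(self, x):  # x: (seq_len, batch, d_in)
        batch = x.size(1)
        h = x.new_zeros(batch, self.cell.hidden_size)
        c = x.new_zeros(batch, self.cell.hidden_size)
        outputs = []
        for x_t in x:
            h_in = h
            if self.training and torch.rand(()).item() < self.p_detach:
                h_in = h.detach()  # block the h-path gradient for this timestep
            h, c = self.cell(x_t, (h_in, c))  # the cell state is never detached
            outputs.append(h)
        return torch.stack(outputs), (h, c)

In evaluation mode (self.training is False) this is exactly a standard LSTM, consistent with the claim above that h-detach changes only the gradient used for optimization, not the network's outputs.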
H1M7soActX
The Anisotropic Noise in Stochastic Gradient Descent: Its Behavior of Escaping from Minima and Regularization Effects
[ "Zhanxing Zhu", "Jingfeng Wu", "Bing Yu", "Lei Wu", "Jinwen Ma" ]
Understanding the behavior of stochastic gradient descent (SGD) in the context of deep neural networks has attracted a lot of attention recently. Along this line, we theoretically study a general form of gradient-based optimization dynamics with unbiased noise, which unifies SGD and standard Langevin dynamics. Through investigating this general optimization dynamics, we analyze the behavior of SGD on escaping from minima and its regularization effects. A novel indicator is derived to characterize the efficiency of escaping from minima through measuring the alignment of the noise covariance and the curvature of the loss function. Based on this indicator, two conditions are established to show which type of noise structure is superior to isotropic noise in terms of escaping efficiency. We further show that the anisotropic noise in SGD satisfies the two conditions, and thus helps to escape from sharp and poor minima effectively, towards more stable and flat minima that typically generalize well. We verify our understanding through comparing this anisotropic diffusion with full gradient descent plus isotropic diffusion (i.e. Langevin dynamics) and other types of position-dependent noise.
[ "Stochastic gradient descent", "anisotropic noise", "regularization" ]
https://openreview.net/pdf?id=H1M7soActX
https://openreview.net/forum?id=H1M7soActX
ICLR.cc/2019/Conference
2019
{ "note_id": [ "S1lw6o7Jx4", "BJgWpYA53Q", "ByxZzQI9hX", "H1lJasFPnX" ], "note_type": [ "meta_review", "official_review", "official_review", "official_review" ], "note_created": [ 1544661951048, 1541233081228, 1541198601152, 1541016502698 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper608/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper608/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper608/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper608/AnonReviewer1" ] ], "structured_content_str": [ "{\"metareview\": \"The reviewers point our concerns regarding paper's novelty, theoretical soundness, and empirical strength. The authors provided to clarifications to the reviewers.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Metareview\"}", "{\"title\": \"needs more work\", \"review\": \"This paper studies the effort of anisotropic noise in stochastic optimization algorithms. The goal is to show that SGD escapes from sharp minima due to such noise. The paper provides preliminary empirical results using different kinds of noise to suggest that anisotropic noise is effective for generalization of deep networks.\", \"detailed_comments\": \"1. I have concerns about the novelty of the paper: It builds heavily upon previous work on modeling SGD as a stochastic differential equation to understand its noise characteristics. The theoretical development of this manuscript is straightforward until simplistic assumptions such as the Ornstein-Uhlenbeck process (which amounts to a local analysis of SGD near a critical point) and a neural network with one hidden layer. Similar results have also been in the the literature before in a number of places, e.g., https://arxiv.org/abs/1704.04289 and references therein.\\n\\n2. Proposition 4 looks incorrect. If the neural network is non-convex, how can the positive semi-definite Fisher information matrix F sandwich the Hessian which may have strictly negative eigenvalues at places?\\n\\n3. Section 5 contains toy experiments on a 2D problem, a one layer neural network and a 1000-image subset of the FashionMNIST dataset. It is hard to validate the claims of the paper using these experiments, they need to be more thorough. The Appendix contains highly preliminary experiments on CIFAR-10 using VGG-11.\\n\\n4. A rigorous theoretical understanding of SGD with isotropic noise or convergence properties of Lagevin dynamics has been developed in the literature previously, it\\u2019d be beneficial to analyze SGD with anisotropic noise in a similar vein.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"A paper analyzing effect of anisotropic noise on SGD dynamics\", \"review\": \"The authors studied the effect of the anisotropic noise of SGD on the algorithm\\u2019s ability to escape from local optima. To this end, the authors depart from the established approximation of SGD in the vicinity of an optimum as a continuous-time Ornstein-Uhlenbeck process. Furthermore, the authors argue that in certain deep learning models, the anisotropic noise indeed leads to a good escaping from local optima.\\n\\nProposition 3 (2) seems to assume that the eigenvectors of the noise-covariance of SGD are aligned with the eigenvectors of the Hessian. Did I understand this correctly and is this sufficient? 
Maybe this is actually not even necessary, since the stationary distribution for the multivariate Ornstein-Uhlenbeck process can always be calculated (Gardiner; Mandt, Hoffman, and Blei 2015\\u20132017).\\n\\nI think this is a decent contribution.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Interesting but lacks clarity\", \"review\": \"The paper studies the benefit of an anisotropic gradient covariance matrix in SGD optimization for training deep networks in terms of escaping sharp minima (which has been argued to correlate with poor generalization in recent literature).\\n\\nIn order to do so, SGD is studied as a discrete approximation of a stochastic differential equation (SDE). In order to analyze the benefits of the anisotropic nature and remove the confounding effect of the scale of noise, the scale of noise in the SDE is considered fixed during the analysis. The authors identify the expected loss around a minimum as the efficiency of escaping the minimum and show its relation with the Hessian and the gradient covariance at the minimum. It is then shown that when all the positive eigenvalue mass of the covariance matrix concentrates along its top eigenvector and this eigenvector is aligned with the top eigenvector of the Hessian of the loss w.r.t. the parameters, SGD is most efficient at escaping sharp minima. These characteristics are analytically shown to hold true for a one-hidden-layer network, and experiments are conducted on toy and real datasets to verify the theoretical predictions.\\n\\nComments: I find the main claim of the paper intuitive -- at any particular minimum, if the noise in SGD is more aligned with the direction along which the loss surface has a large curvature (thus the minimum is sharp along this direction), SGD will escape this minimum more efficiently. On the other hand, isotropic noise will be wasteful, because a sample from an isotropic noise distribution may point along flat directions of the loss even though there may exist other directions along which the loss curvature is large. However, I have several concerns, which I find difficult to point out because *many equations are not numbered*. \\n\\n1. In proposition 2, it is assumed under the argument of no loss of generality that both the loss at the minimum L_0 = 0 and the corresponding theta_0 = 0. Can the authors clarify how both can be simultaneously true without any loss of generality?\\n2. A number of steps in proposition 2 are missing, which makes it difficult to verify. When applying Ito's lemma and taking the integral from 0 to t, it is not mentioned that both sides are also multiplied with the inverse of exp(Ht).\\n3. In proposition 2, when computing E[L(theta_t)] on page 12, it is not clear how the equalities after line 3 are derived. Please clarify or update the proof with sufficient details.\\n4. It is mentioned below proposition 2 that the maximum of Tr(H.Sigma) under constraint (6) is achieved when Sigma* = Tr(Sigma).lambda_1.u1.u1^T, where lambda_1 is the top eigenvalue of H. How is lambda_1 a factor in Sigma*? I think Sigma* should be Tr(Sigma).u1.u1^T, because this way the sum of eigenvalues of Sigma remains unchanged, which is what constraint (6) states.\\n5. The proof of proposition 5 is highly unclear. Where did the inequality ||g_0(theta)||^2 <= delta.u^T F u + o(|delta|) come from? 
Also, the inequality right below it involves the assumption that u^T g_0 g_0^T u <= ||g_0||^2, and no justification has been provided for this assumption.\\n\\nRegarding experiments, the toy experiment in section 5.1 is interesting, but it is not mentioned what network architecture is used in this experiment. I found the experiments in section 5.3, and specifically Fig 4 and Fig 7, insightful. I do have a concern regarding this experiment though. In the experiment on FashionMNIST in Fig 4, it can be seen that both SGD and GLD 1st eigvec escape the sharp minimum, and this is coherent with the theory. However, for the experiment on CIFAR-10 in Fig 7, the experiment with GLD 1st eigvec is missing. Can the authors show the result for GLD 1st eigvec on CIFAR-10? I think it is an important verification of the theory, and CIFAR-10 is a more realistic dataset compared with FashionMNIST.\", \"a_few_minor_points\": \"1. In the last paragraph of page 3, it is mentioned that the probability of escaping can be controlled by the expected loss around the minimum due to Markov's inequality. This statement is inaccurate. A large expected loss upper bounds the escaping probability; it does not control it.\\n2. Section 4 is titled \\\"The anisotropic noise of SGD in deep networks\\\", but the section analyses a one-hidden-layer network. This seems inappropriate.\\n3. In the conclusion section, it is mentioned that the theory in the paper unifies various existing optimization methods. Please clarify.\\n\\nOverall, I found the argument of the paper somewhat interesting, but I am not fully convinced because of the concerns mentioned above.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
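As a numerical companion to the Tr(H.Sigma) discussion in the reviews above, the following toy simulation contrasts isotropic noise with anisotropic noise concentrated on the Hessian's top eigenvector at equal Tr(Sigma). The values of H, the step size, and the noise scale are invented for illustration only.

import numpy as np

rng = np.random.default_rng(0)
H = np.diag([20.0, 0.01])   # one sharp and one flat direction around the minimum
eta, T, n_runs = 0.05, 100, 2000
# Two noise covariances with the same trace: isotropic vs. aligned with H's top eigenvector.
sigma_sqrt_iso = np.sqrt(0.5) * np.eye(2)   # Sigma = 0.5 * I,    Tr(Sigma) = 1
sigma_sqrt_aniso = np.diag([1.0, 0.0])      # Sigma = diag(1, 0), Tr(Sigma) = 1

def expected_final_loss(sigma_sqrt):
    # Simulate theta_{t+1} = theta_t - eta*H*theta_t + eta*noise near the minimum
    # and return the average quadratic loss 0.5 * theta^T H theta after T steps.
    total = 0.0
    for _ in range(n_runs):
        theta = np.zeros(2)
        for _ in range(T):
            theta = theta - eta * (H @ theta) + eta * (sigma_sqrt @ rng.standard_normal(2))
        total += 0.5 * theta @ H @ theta
    return total / n_runs

for name, s in [("isotropic", sigma_sqrt_iso), ("anisotropic", sigma_sqrt_aniso)]:
    sigma = s @ s.T
    print(name, "Tr(H Sigma) =", np.trace(H @ sigma), " E[L_T] =", expected_final_loss(s))

With these toy values, the anisotropic walker attains roughly twice the expected loss around the sharp minimum, tracking the Tr(H.Sigma)-style escaping-efficiency indicator that the reviews debate.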
HJgXsjA5tQ
On the loss landscape of a class of deep neural networks with no bad local valleys
[ "Quynh Nguyen", "Mahesh Chandra Mukkamala", "Matthias Hein" ]
We identify a class of over-parameterized deep neural networks with standard activation functions and cross-entropy loss which provably have no bad local valley, in the sense that from any point in parameter space there exists a continuous path on which the cross-entropy loss is non-increasing and gets arbitrarily close to zero. This implies that these networks have no sub-optimal strict local minima.
[ "loss landscape", "local minima", "deep neural networks" ]
https://openreview.net/pdf?id=HJgXsjA5tQ
https://openreview.net/forum?id=HJgXsjA5tQ
ICLR.cc/2019/Conference
2019
{ "note_id": [ "H1gTF2byeV", "rke_QE95RQ", "HylgTfXc0X", "SkgOtfWD0m", "SJentufURQ", "r1erFqJMAQ", "rylHw4Tx0X", "ByeYv_JTaX", "rkeg9UyaTX", "HkgeJDjn6Q", "BkeL8Gi26Q", "Hkg3N5wTnQ", "Ske7SH5q37", "BJx5sWN5hX" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544653956794, 1543312415715, 1543283384031, 1543078527599, 1543018627727, 1542744700650, 1542669404778, 1542416481149, 1542416007930, 1542399703734, 1542398541667, 1541401139750, 1541215547455, 1541190049747 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper607/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper607/Authors" ], [ "ICLR.cc/2019/Conference/Paper607/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper607/Authors" ], [ "ICLR.cc/2019/Conference/Paper607/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper607/Authors" ], [ "ICLR.cc/2019/Conference/Paper607/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper607/Authors" ], [ "ICLR.cc/2019/Conference/Paper607/Authors" ], [ "ICLR.cc/2019/Conference/Paper607/Authors" ], [ "ICLR.cc/2019/Conference/Paper607/Authors" ], [ "ICLR.cc/2019/Conference/Paper607/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper607/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper607/AnonReviewer1" ] ], "structured_content_str": [ "{\"metareview\": \"This paper introduces a class of deep neural nets that provably have no bad local valleys. By constructing a new class of network this paper avoids having to rely on unrealistic assumptions and manages to provide a relatively concise proof that the network family has no strict local minima. Furthermore, it is demonstrated that this type of network yields reasonable experimental results on some benchmarks. The reviewers identified issues such as missing measurements of the training loss, which is the actual quantity studied in the theoretical results, as well as some issues with the presentation of the results. After revisions the reviewers are satisfied that their comments have been addressed. This paper continues an interesting line of theoretical research and brings it closer to practice and so it should be of interest to the ICLR community.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Meta-review\"}", "{\"title\": \"Thank you\", \"comment\": \"Thank you very much for your positive feedback and all the helpful comments so far.\"}", "{\"title\": \"Thanks for your efforts\", \"comment\": \"I appreciate the authors for their efforts in revising the paper. Many of my concerns are addressed throughout the revision/feedback process, and I think the paper is now in a better shape.\\n\\nI'll edit the rating accordingly.\"}", "{\"title\": \"Thank you for your feedback\", \"comment\": \"We thank the reviewer for the response and further comments on the presentation issue of our experimental results.\\n\\nWe have updated the paper accordingly by taking into account both comments of the reviewer together. Regarding comment 2), we removed the column of data-augmentation in Table 2, and moved them to the appendix for interested readers. We then used this space to show the training accuracy of all models, which is recommended by the reviewer in comment 1). 
We hope that this becomes clearer now.\\n\\nWe thank the reviewer again and we welcome further comments on our paper.\"}", "{\"title\": \"some concerns on experiments remain\", \"comment\": \"Overall, I think this paper is quite nontrivial since a rigorous mathematical proof is indeed the interesting part and often quite difficult, and the idea of having flexible skip connections is interesting. But perhaps it is less than a breakthrough due to prior related work on CNNs.\\n\\nI'd like to thank the authors for the effort in improving the paper. My concerns are partially but not fully addressed, as explained below.\\n\\n1) As I said, \\\"This paper has no theory on generalization, thus if a whole section is just about test error, then the connection to theoretical parts is weak.\\\" It is good that the authors added the training error table (Table 4), but Table 4 appears in the appendix. I had to compare Table 2 and Table 4 a few times when I re-read the paper. Isn't it better to put Table 4 in the main body? That may be a hard choice, as some parts need to be moved into the appendix. But having Table 2 and 4 separately is strange. In fact, from a theoretician's perspective, having solely Table 2 in the main body while having Table 4 in the appendix is fine (though some practitioners don't think so). Anyway, having both may be better. \\n In addition, \\\"The training error is zero in all cases, except when the original VGG models are used with sigmoid activation function\\\" is inconsistent with Table 4, which shows for SoftPlus the training accuracy is also 10%. After comparing with Table 2, I noticed it is probably due to a typo. All SoftPlus results in Table 4 should be 100. These typos probably wouldn't appear if Table 4 were near Table 2.\\n\\n2) I am not satisfied with this explanation on data augmentation.\\n First, there are two types of augmentation: \\\"at each training iteration the network uses additional examples\\\" refers to online augmentation; increasing the dataset size and using the new examples in all iterations is off-line augmentation. Clearly, for off-line augmentation, N is increased.\\n Second, note that for SGD, some statisticians often refer to one pass over all data, while many optimizers often refer to multiple passes. In other words, for online augmentation, these statisticians would count additional data (even if used just once) into N. \\n It is not clear why the authors need to include the experiments with data augmentation in Table 2. For the purpose of illustrating their point, experiments with data augmentation are not necessary -- this is a theory paper after all. From a theory perspective, it may break the assumption. The easiest way to fix this is just to remove the columns on data augmentation. If not, it requires further explanation such as \\\"yes, it does not satisfy the assumption, but we just want the experiments to be more comprehensive\\\", or \\\"simple data changes do not affect the training much, so it is close to theory\\\". Anyhow, none of these is very satisfying to me.\"}", "{\"title\": \"Thank you for your feedback\", \"comment\": \"Thank you for the quick response.\\n\\n\\\"First of all, I still believe it is weird that the assumptions are never used *explicitly* anywhere in the main text. The paper makes some assumptions and never uses them directly in the main text. I would suggest the authors to at least add a \\u201cproof sketch\\u201d paragraph below Lemma 3.2, and briefly outline the proof while mentioning how the assumptions come into play.\\\"\\n\\nWe agree. 
We have added a proof sketch for Lemma 3.2 and briefly discussed how the assumptions are used now.\\n\\n\\\"As for the proof technique part, by \\u201cas expected\\u201d I meant I would have been more surprised if the set of U with rank-deficient \\\\Psi(U) had measure greater than zero. This was because in general, rank deficient matrices lie in a set of measure zero, and I\\u2019ve seen many results such as \\u201cif a hidden layer is wider than N and activation functions have good properties, then some matrix has full rank almost everywhere.\\u201d\\\"\\n\\nSure. But the problem becomes highly non-trivial when the matrix has very special and sophisticated structure, such as the one analyzed in this paper. Despite of all intuitions, it's still a mathematical problem that needs to be rigorously proved.\\n\\n\\\"Unfortunately, however, I can hardly agree that the proof is \\u201celegant\\u201d at the moment, especially for Lemma 3.2. There are many steps that makes the proof unnecessarily longer. For example, the very first equation in step 1 is not necessary; you can just start with eq (4). Similarly, I believe that steps 2-5 can be made much more concise. In defining eq (9), why don\\u2019t you just start by \\u201cfor all nodes j in layer l, define all \\\\alpha_j to be:\\u201d? I also don\\u2019t understand a few lines above eq (10). Given that the network is not fully connected but a DAG, how can you guarantee that u_j and u_{j\\u2019} are of the same size and make them identical? For the softplus case, the choice of \\\\beta is missing. Without this, how can you make sure that some of the data points fall into the negative side of softplus?\\\"\\n\\nFollowing reviewer's suggestion, we have revised/shortened the proof of Lemma 3.2. Please check our revision. Regarding u_j and u_{j'}, we already added further explanation in the proof. Basically they need not have the same size because according to our network description in Section 2, only those neurons with the same number of incoming units can have shared weights. For instance, it's fine to have on the same layer two neurons with weights (1,0,0) and other two neurons (0,0,1,0). The bias for softplus is mentioned now. The \\\\beta variable is defined in the beginning, so basically we use the same value of \\\\beta as in the first case.\"}", "{\"title\": \"Reply to the response\", \"comment\": \"First of all, I would like to appreciate the authors for their extensive efforts in revising and improving the paper.\\n\\nI think most of my concerns were more or less addressed, except for the \\u201cassumptions\\u201d and \\u201cproof technique\\u201d parts.\\n\\nFirst of all, I still believe it is weird that the assumptions are never used *explicitly* anywhere in the main text. The paper makes some assumptions and never uses them directly in the main text. I would suggest the authors to at least add a \\u201cproof sketch\\u201d paragraph below Lemma 3.2, and briefly outline the proof while mentioning how the assumptions come into play.\\n\\nAs for the proof technique part, by \\u201cas expected\\u201d I meant I would have been more surprised if the set of U with rank-deficient \\\\Psi(U) had measure greater than zero. 
This was because in general, rank deficient matrices lie in a set of measure zero, and I\\u2019ve seen many results such as \\u201cif a hidden layer is wider than N and activation functions have good properties, then some matrix has full rank almost everywhere.\\u201d\\n\\nUnfortunately, however, I can hardly agree that the proof is \\u201celegant\\u201d at the moment, especially for Lemma 3.2. There are many steps that makes the proof unnecessarily longer. For example, the very first equation in step 1 is not necessary; you can just start with eq (4). Similarly, I believe that steps 2-5 can be made much more concise. In defining eq (9), why don\\u2019t you just start by \\u201cfor all nodes j in layer l, define all \\\\alpha_j to be:\\u201d? I also don\\u2019t understand a few lines above eq (10). Given that the network is not fully connected but a DAG, how can you guarantee that u_j and u_{j\\u2019} are of the same size and make them identical? For the softplus case, the choice of \\\\beta is missing. Without this, how can you make sure that some of the data points fall into the negative side of softplus?\\n\\nI agree that there are some interesting techniques used in constructing the parameter U. However, the main theoretical contribution (proof of Lemma 3.2) is hidden in the appendix, which many readers will end up skipping. My current score is based on the main text, and at least in my opinion, the main text itself doesn\\u2019t reveal anything particularly interesting.\"}", "{\"title\": \"Response to AnonReviewer1. Part 2\", \"comment\": \"Answers to specific comments/questions:\\n* \\\"Assumption 3.1.2 doesn\\u2019t make sense. Assumption 3.1.2 says \\u201cthere exists N neurons satisfying\\u2026\\u201d and then the first bullet point says \\u201cfor all j = 1, \\u2026, M\\u201d. Also, the statement \\u201cone of the following conditions\\u201d is unclear. Does it mean that we must have either \\u201cN satisfying the first bullet\\u201d or \\u201cN satisfying the second bullet\\u201d, or does it mean we can have N/2 satisfying the first and N/2 satisfying the second?\\\"\\n\\nWe apologize for the typo and confusion. Please check our revision now where we have rephrased this a bit. It is possible to have mixed skip-connections as the reviewer mentioned, but for simplicity at the moment we just require that all the neurons with skip-connections have the same activation functions which satisfy one of our conditions.\\n\\n* \\\"The paper does not describe where the assumptions are used...but if you can sketch/mention how the assumptions come into play in the proofs, that will be more helpful in understanding the meaning of the assumptions.\\\"\\n\\nAs the reviewer noted, these assumptions are used in the proof of Lemma 3.2, and hence in our main result Theorem 3.3 (though not directly used here). Basically in proving Lemma 3.2, we used our conditions on activation functions to prove that there exists a set of parameters so that the matrix Psi has full rank. Then we use the analytic property of the activation functions together with Lemma A.1 to establish the result on the measure-zero set property. The condition on the training data is used to guarantee that the value of each hidden unit can be chosen to be non-identical for different training samples.\\n\\n* \\\"Are there any specific reasons for considering cross-entropy loss only? Lemma 3.2 looks general, so this result seems to be applicable to other losses...\\\"\\n\\nThe reviewer is right. 
Indeed our result holds for other convex loss functions. Please check our extension to this setting in Section C in the appendix. The reason why we presented our main result with cross-entropy loss in the beginning is because we wanted to keep everything simple, and also because this is the loss actually used in practice.\\n\\n* \\\"...Figure 2 is a bit misleading because there are hidden nodes with skip connections to only one of the output nodes.\\\"\\n\\nYes, they are connected to all the hidden units. We apologize for the confusion in Figure 2 as we thought it might look a bit too dense. Please check our revision now where we have updated the figure. \\n\\n* \\\"For the experiments, how did you deal with pooling layers in the VGG and DenseNet architectures? Does max-pooling satisfy the assumptions? Or the experimental setting doesn\\u2019t necessarily satisfy the assumptions?\\\"\\n\\nIt depends. In general, max-pooling can be used above all the neurons with skip-connections in the network. However as the main goal of the experiments is to find out the generalization performance of skip-networks, we did not want to include this part in the paper. Nevertheless, we have added Section G in the appendix to treat this question separately. \\n\\n* Can you show the \\u201cimprovement\\u201d of loss surface by adding skip connections? Maybe coming up with a toy dataset and network WITH bad local valleys will be sufficient, because after adding N skip connections the network will be free of bad local valleys.\\n\\nYes. Please check our Section E in the appendix now, where we provide a visual example of the loss landscape of a small network, before and after adding skip-connections. One can easily see that skip-connections to the output help to smooth the loss landscape and get rid of bad local valleys. \\n\\n* \\\"In the Assumption 3.1.3, the $N$ in $r \\\\neq s \\\\in N$ means $[N]$?\\\"\\nYes. We fixed the typo. Thanks!\\n\\n* \\\"In the introduction, there is a sentence \\u201cpotentially has many local minima, even for simple models like deep linear networks (Kawaguchi, 2016),\\u201d which is not true....\\\"\\n\\nThe reviewer is right. It's actually an english issue as we meant non-convexity which previously appears before this term. We removed it now in our revision. \\n\\n* \\\"Assumption 3.1.3 looked a bit confusing to me at first glance. You might want to add some clarification such as \\u201cfor example, in the fully connected network case, this means that all data points are distinct.\\u201d\\\"\\n\\nThanks for another helpful comment. We have updated/improved the statement of this condition a bit. In particular, we require now only the distinctness between the input patches at the same location across different training samples. This is just a subtle change and the current proof of Lemma 3.2 is not affected by this modification. We follow your suggestion by adding the following sentence right below Equation (3): \\n\\\"The third condition is always satisfied for fully connected networks if the training samples are distinct. For CNNs, this condition means that the corresponding input patches across different training samples are distinct.\\\"\"}", "{\"title\": \"Response to AnonReviewer1. Part 1\", \"comment\": \"Thank you very much for the detailed feedbacks. 
Below are answers to your comments/questions in the order that they appear.\\n\\n* \\\"In the first place, figuring out \\u201cwhy existing models work\\u201d would be more meaningful than suggesting a new architecture which is on par with existing ones, unless one can show a significant performance improvement over the other ones.\\\"\\n\\nWe absolutely agree that understanding why existing models work is what one desires to achieve in the end. But to reach that point, one has to start somewhere, and make progress continually. This is the reason for the existence of a bunch of recent work on this topic:\\n\\nA. Choromanska, M. Hena, M. Mathieu, G. B. Arous, and Y. LeCun. The loss surfaces of multilayer networks. 2015.\\nI. Safran and O. Shamir. On the quality of the initial basin in overspecified networks. 2016.\\nB. D. Haeffele and R. Vidal. Global optimality in neural network training. 2017.\\nH. Lu, K. Kawaguchi. Depth creates no bad local minima. 2017.\\nM. Hardt and T. Ma. Identity matters in deep learning. 2017.\\nC. Yun, S. Sra, and A. Jadbabaie. Global optimality conditions for deep neural networks. 2017.\\nD. Soudry and E. Hoffer. Exponentially vanishing sub-optimal local minima in multilayer neural networks. 2017.\\nM. Nouiehed and M. Razaviyayn. Learning Deep Models: Critical Points and Local Openness. 2018.\\nT. Laurent and J. H. von Brecht. The Multilinear Structure of ReLU Networks. 2018.\\nS. Liang, R. Sun, J. D. Lee, and R. Srikant. Adding one neuron can eliminate all bad local minima. 2018.\\n\\nAt the moment, we are not aware of any previous work which can prove directly strong theoretical results on the loss landscape of \\\"existing models\\\" which actually work in practice. Moreover in this paper, we show that the presented class of networks enjoy both strong theoretical properties and good empirical performance. We do not make great claim about the result, but we believe that this is a significant contribution to the literature, especially w.r.t. the recent great effort of the community in trying to make progress on theoretical understanding of deep learning models.\\n\\n* \\\"The proof of the main theorem (Thm 3.3) is not very interesting, nor develops novel proof techniques. It heavily relies on Lemma 3.2, which I think is the main technical contribution of this paper. Apart from its technicality in the proof, the statement of Lemma 3.2 is just as expected and gives me little surprise, because having more than N hidden nodes connected directly to the output looks morally \\u201cequivalent\\u201d to having a layer as wide as N, and it is known that in such settings (e.g. Nguyen & Hein 17\\u2019) it is easy to attain global minima.\\\"\\n\\nThe proof of our main result is simple and elegant, as also noted by AnonReviewer2. Simple proofs are often generalizable better to complex models. Thus we think that it is actually an advantage of this work. \\nCan the reviewer elaborate on why the statement of Lemma 3.2 is just as expected? Given that said, does the reviewer have in mind an easier proof for this lemma? - which we would be very happy to know We would like to note that the class of networks analyzed in this Lemma is quite general and hence the mathematical proof is non-trivial. We agree that one can view the N skip-connections as an implicit wide layer, but this is just an intuition and very weak argument to conclude that the statements are just as expected. 
There are things that might look \\\"intuitive\\\" and \\\"as expected\\\" but are completely wrong; for instance, a deep linear network with N skip-connections to the output does not satisfy our conditions and results if the training data has very low rank.\\n\\n* \\\"I also think that having more than N skip connections can be problematic if N is very large, for example N>10^6. Then the network requires at least 1M nodes to fall in this class of networks without bad local valleys. If it is possible to remove this N-hidden-node requirement, it will be much more impressive.\\\"\\n\\nWe agree that the current condition on the number of skip-connections is quite strong. But on the other hand, it is not necessarily as restrictive as the reviewer suggests. We would like to refer to Table 1 in [1] for some information on the number of neurons in the first layer of several existing networks. For instance, the first hidden layer of the original VGG-Nets already has more than 3M nodes, and so if one sums up this number over all the hidden layers, the total will be much larger than that. Moreover, in the literature it is common to find theoretical work which requires a far larger number of neurons than the number of training samples; see e.g. https://openreview.net/forum?id=S1eK3i09YQ which requires N^6 neurons for gradient descent to find a zero-training-error solution for one-hidden-layer networks. Nevertheless, we agree with the reviewer that it would be interesting to relax this condition in future work.\\n\\n[1] Nguyen & Hein. Optimization landscape and expressivity of deep cnns. 2017.\"}", "{\"title\": \"Response to AnonReviewer2\", \"comment\": \"Thank you very much for the support. Below are our answers to your comments/questions in the order that they appear.\\n\\nRegarding the failure of the original VGG with sigmoid activation, we have added a discussion on this issue under Section F in the appendix (please see also our response to AnonReviewer3 on the 10% accuracy matter).\\nBasically, we have observed that the network in this case converges to a constant zero classifier, despite our efforts in tuning the learning rate. This behavior is actually not restricted to the specific architecture of VGG, but has been shown before as an issue of sigmoid activation when training plain networks with depth > 5, see e.g. [1].\", \"answers_to_minor_issues\": \"Actually, the definition of bad local valleys has previously appeared just above Theorem 3.3 in the text. However, we follow the reviewer's suggestion by putting this in a formal Definition 3.3 now.\\n\\n\\\"In proof number 4 (of Theorem 3.3), the statement should be \\u201cany *principal* submatrices of negative semi-definite matrices are also NSD\\u201d, and it\\u2019s not true otherwise. But this typo doesn\\u2019t influence the proof.\\\"\\nYes, the reviewer is completely right. We fixed this typo. Thanks!\\n\\n\\\"Also, it seems the proof of 3 is somewhat redundant, since local minimum is a special case of your \\u201cbad local valley\\u201d.\\\"\\nWe agree. We keep it there as we wanted to make all our statements and results as clear and rigorous as possible.\\n\\n\\\"It seems the analysis could not possibly be extended to the ReLU activation, since it will break the analytical property of the function. Just out of curiosity, do the authors have some further thoughts on non-differentiable activations?\\\"\\nThank you for an interesting question.
At the moment, we do not really have a clear clue how to extend the result to general non-differentiable activations, so this could be an interesting question for future research. \\nFor ReLU, we think that it might be possible to exploit the fact that softplus can approximate ReLU arbitrarily well, and so perhaps a limiting argument on their corresponding loss functions can be helpful.\\n\\n[1] Understanding the difficulty of training deep feedforward neural networks. Xavier Glorot, Yoshua Bengio. ICML 2010.\"}", "{\"title\": \"Response to AnonReviewer3\", \"comment\": \"Thank you for the feedback. Below are answers to your comments/questions by their numbering.\\n\\n1) We agree with the reviewer about the training error matter. Thus we have added Section F in the appendix to discuss training error in detail. As expected, the training error is zero except in the case where sigmoid activation is used with the original VGGs from Table 2 or the original CNN13 from Table 1.\\nMoreover, we show in this section that adding skip-connections to the output is also helpful for training extremely deep (narrow) networks with softplus activation. Together, these show that skip-connections are helpful for training deep networks with both sigmoid and softplus activation. In Section E in the appendix, we provide a visual example of the loss landscape of a small network, before and after adding skip-connections, where one can see that adding skip-connections to the output layer helps to smooth the loss surface and get rid of bad local valleys, which is helpful for local search algorithms like SGD to succeed.\\n\\n2) As described in our experiments, the number of skip-connections is fixed to M=N in both cases (with and without data-augmentation), where N is the size of the original data set. We quote the following sentence from our experimental section for the convenience of the reviewer:\\n\\\"...we aggregate all neurons of all the hidden layers in a pool and randomly choose from there a subset of N neurons to be connected to the output layer...\\\".\\nIn the setting of data-augmentation, at each training iteration the network uses additional examples (randomly) generated from the original dataset, and thus it is not clear in this case how the number of training samples should be defined. That's why we fixed the number of skip-connections in both cases to be the size of the original data set.\\n\\n3) We agree that this might be overlooked by non-optimizers. Nevertheless, we want to keep our abstract short and precise. Thus we have added the following sentence in the introduction to make this further clear: \\n\\\"We note that this implies the loss landscape has no strict local minima, but theoretically non-strict local minima can still exist.\\\"\\n\\n4) We have included the references suggested by the reviewer, and can add more detailed comparisons if the reviewer thinks it's necessary.\\n\\nRegarding the 10% test accuracy, we added a discussion on this issue under Section F in the appendix. Briefly, the reason, as observed in our experiments, is that the network converges quickly to a constant zero classifier (i.e. the output of the last hidden layer converges quickly to zero), and thus the training/test accuracy converges to 10% and the cross-entropy loss in Equation (2) converges to \\u2212 log(1/10).
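For concreteness, this figure is easy to verify in a couple of lines (an illustrative numpy check, not code from the paper):

```python
# A constant-zero classifier produces all-zero logits, so the softmax is
# uniform over the 10 classes: accuracy ~10% and cross-entropy = -log(1/10).
import numpy as np

logits = np.zeros(10)                          # constant-zero classifier
probs = np.exp(logits) / np.exp(logits).sum()  # softmax -> uniform 0.1
loss = -np.log(probs[0])                       # same for any true label

print(probs.round(3))    # [0.1 0.1 ... 0.1]
print(loss, np.log(10))  # both ~= 2.302585
```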
We realized later that this is actually a known issue of sigmoid activation when training plain networks with depth > 5, as pointed out earlier by Glorot & Bengio [1].\\n\\n[1] Understanding the difficulty of training deep feedforward neural networks. Xavier Glorot, Yoshua Bengio. ICML 2010.\"}", "{\"title\": \"good progress; but simulation requires some work\", \"review\": \"This paper shows that a class of deep neural networks has no spurious local valleys---implying no strict local minima. The family of neural networks studied includes a wide variety of network structures such as (a variant of) DenseNet. Overall, this paper makes some progress, improving previous results on over-parametrized networks.\", \"pros\": \"The flexibility of the network structure is an interesting point.\", \"cons\": \"CNN was covered in previous related works (so weight sharing is not a new contribution); DenseNet is not explicitly covered in this work (I mean the current DenseNet does not have N skip-connections to the output; correct me if wrong).\\n The simulation part is not that clear, and I have a few questions that I hope the authors can answer. \\n\\nSome comments/suggestions:\\n1) Training error needs to be discussed.\\n Page 8 says \\u201cThis effect can be directly related to our result of Theorem 3.3 that the loss landscape of skip-networks has no bad local valley and thus it is not difficult to reach a solution with zero training error\\u201d. This relation is not justified. The implication of Thm 3.3 is that getting zero training error is easier, but the tables are only for test error. Showing training error is the only way to connect to Thm 3.3. I expect to see a high training error for C-10, original VGG and sigmoid activation functions, and zero training error for both skip-SGD (rand) and skip-SGD (SGD). \\n This paper has no theory on generalization; thus, if a whole section is just about \\u201cinvestigating generalization error\\u201d, then the connection to the theoretical parts is weak --btw, one connection is the comparison of two algorithms, which fits the context well and is thus interesting (though the comparison result itself is probably not surprising). \\n\\n2) Data augmentation.\\n \\u201cNote that the rand algorithm cannot be used with data augmentation in a straightforward way and thus we skip it for this part.\\u201d Why? \\n With data augmentation, is M still larger than N? If yes, then the number of added skip connections is different for C-10 and C-10-plus, which is not mentioned in the description of Table 2. \\n\\n3) It may be better to mention explicitly that \\\"it is possible to have bad local min\\\" \\u2013perhaps in the abstract and/or introduction. \\n --Although \\u201cno sub-optimal strict local minima\\u201d is mentioned, readers, especially non-optimizers, might not notice \\\"strict\\\".\\n --In fact, on a first read, I did not get a strong impression of \\\"strict\\\"; I only realized it later. Mentioning this can be helpful. \\n\\n4) Some references I suggest to include:\\n [R1] Yu, X. and Chen, G. On the local minima free condition of backpropagation learning. 1995. --related work. \\n [R2] Lu, H., Kawaguchi, K. Depth creates no bad local minima. 2017. --also deep nets.\\n [R3] Liang, S., Sun, R., Li, Y., & Srikant, R. \\\"Understanding the loss surface of neural networks for binary classification.\\\" 2018. --Also studies SoftPlus neurons.\\n [R4] Nouiehed, M., & Razaviyayn, M. Learning Deep Models: Critical Points and Local Openness. 2018.
--also deep nets.\", \"minor_questions\": \"--Exact 10% test accuracy for a few cases. Why exactly 10%?\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"a breakthrough paper on the loss landscape of neural networks\", \"review\": \"The paper analyzes the loss landscape of a class of deep neural networks with skip connections added to the output layer. It proves that with the proposed structure of DNN, there are uncountably many solutions with zero training error, and the landscape has no bad local valley or local extrema.\\n\\nOverall, I really enjoyed reading the paper. \\nThe assumptions made to aid the proof are very natural and much milder than in the existing literature. As far as I\\u2019m concerned, the setting is very close to real deep neural networks and the paper is a breakthrough in the area. The experiments also confirm that the theoretical settings are natural and useful, namely, with enough skip connections and specially chosen activation functions. \\nThe presentation of the paper is intuitive and easy to follow. I\\u2019ve also checked all the proofs and think they are brilliantly and elegantly written. \\n\\nMy only complaint is about the experiments. Both VGG and the sigmoid activation are commonly used DL tools, so why do they fail to generalize when used together? Does the network fail to converge or is it overfitting? The authors should try tuning the parameters and present a proper result. With that said, since the paper is more about theoretical findings, this issue doesn\\u2019t influence my recommendation to accept the paper.\", \"minor_issues\": \"I think it\\u2019s better to formally define \\u201cbad local valley\\u201d somewhere in the paper. From what I read, the definition of \\u201cbad local valley\\u201d is implied by the abstract and in the proof of Theorem 3.3(2), but I did not find a formal definition anywhere else. \\nIn proof number 4 (of Theorem 3.3), the statement should be \\u201cany *principal* submatrices of negative semi-definite matrices are also NSD\\u201d, and it\\u2019s not true otherwise. But this typo doesn\\u2019t influence the proof. \\nAlso, it seems the proof of 3 is somewhat redundant, since a local minimum is a special case of your \\u201cbad local valley\\u201d. \\nIt seems the analysis could not possibly be extended to the ReLU activation, since it will break the analytical property of the function. Just out of curiosity, do the authors have some further thoughts on non-differentiable activations?\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Interesting experimental results, but less significant theoretical contribution\", \"review\": \"This paper presents a class of neural networks that does not have bad local valleys. The \\u201cno bad local valleys\\u201d property implies that for any point on the loss surface there exists a continuous path starting from it, on which the loss doesn\\u2019t increase and gets arbitrarily close to zero.
The key idea is to add direct skip connections from hidden nodes (from any hidden layer) to the output.\", \"The good property of loss surface for networks with skip connections is impressive and the authors present interesting experimental results pointing out that\", \"adding skip connections doesn\\u2019t harm the generalization.\", \"adding skip connections sometimes enables training for networks with sigmoid activation functions, while the networks without skip connections fail to achieve reasonable performance.\", \"comparison of the generalization performance for the random sampling algorithm vs SGD and its connection to implicit bias is interesting.\", \"However, from a theoretical point of view, I would say the contribution of this work doesn\\u2019t seem to be very significant, for the following reasons:\", \"In the first place, figuring out \\u201cwhy existing models work\\u201d would be more meaningful than suggesting a new architecture which is on par with existing ones, unless one can show a significant performance improvement over the other ones.\", \"The proof of the main theorem (Thm 3.3) is not very interesting, nor develops novel proof techniques. It heavily relies on Lemma 3.2, which I think is the main technical contribution of this paper. Apart from its technicality in the proof, the statement of Lemma 3.2 is just as expected and gives me little surprise, because having more than N hidden nodes connected directly to the output looks morally \\u201cequivalent\\u201d to having a layer as wide as N, and it is known that in such settings (e.g. Nguyen & Hein 17\\u2019) it is easy to attain global minima.\", \"I also think that having more than N skip connections can be problematic if N is very large, for example N>10^6. Then the network requires at least 1M nodes to fall in this class of networks without bad local valleys. If it is possible to remove this N-hidden-node requirement, it will be much more impressive.\", \"Below, I\\u2019ll list specific comments/questions about the paper.\", \"Assumption 3.1.2 doesn\\u2019t make sense. Assumption 3.1.2 says \\u201cthere exists N neurons satisfying\\u2026\\u201d and then the first bullet point says \\u201cfor all j = 1, \\u2026, M\\u201d. Also, the statement \\u201cone of the following conditions\\u201d is unclear. Does it mean that we must have either \\u201cN satisfying the first bullet\\u201d or \\u201cN satisfying the second bullet\\u201d, or does it mean we can have N/2 satisfying the first and N/2 satisfying the second?\", \"The paper does not describe where the assumptions are used. They are never used in the proof of Theorem 3.3, are they? I believe that they are used in the proof of Lemma 3.2 in the appendix, but if you can sketch/mention how the assumptions come into play in the proofs, that will be more helpful in understanding the meaning of the assumptions.\", \"Are there any specific reasons for considering cross-entropy loss only? Lemma 3.2 looks general, so this result seems to be applicable to other losses. I wonder if there is any difficulty with different losses.\", \"Are hidden nodes with skip connections connected to ALL m output nodes or just some of the output nodes? I think it\\u2019s implicitly assumed in the proof that they are connected to all output nodes, but in this case Figure 2 is a bit misleading because there are hidden nodes with skip connections to only one of the output nodes.\", \"For the experiments, how did you deal with pooling layers in the VGG and DenseNet architectures? 
Does max-pooling satisfy the assumptions? Or the experimental setting doesn\\u2019t necessarily satisfy the assumptions?\", \"Can you show the \\u201cimprovement\\u201d of loss surface by adding skip connections? Maybe coming up with a toy dataset and network WITH bad local valleys will be sufficient, because after adding N skip connections the network will be free of bad local valleys.\", \"Minor points\", \"In the Assumption 3.1.3, the $N$ in $r \\\\neq s \\\\in N$ means $[N]$?\", \"In the introduction, there is a sentence \\u201cpotentially has many local minima, even for simple models like deep linear networks (Kawaguchi, 2016),\\u201d which is not true. Deep linear networks have only global minima and saddle points, even for general differentiable convex losses (Laurent & von Brecht 18\\u2019 and Yun et al. 18\\u2019).\", \"Assumption 3.1.3 looked a bit confusing to me at first glance. You might want to add some clarification such as \\u201cfor example, in the fully connected network case, this means that all data points are distinct.\\u201d\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}" ] }
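The construction at the center of this record is simple to state: pool the hidden units of all layers, pick N of them at random, and wire them directly to the output on top of the usual last-layer connections. Below is a minimal numpy sketch of such a forward pass, assuming illustrative layer sizes and a softplus activation; this is not the authors' released code.

```python
# Sketch of "skip connections to the output": N hidden units, drawn from a
# pool over all hidden layers, feed the output directly. Sizes are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
softplus = lambda x: np.log1p(np.exp(x))

def forward(x, Ws, V_last, V_skip, skip_idx):
    hs, h = [], x
    for W in Ws:                        # standard feed-forward stack
        h = softplus(h @ W)
        hs.append(h)
    pool = np.concatenate(hs, axis=1)   # all hidden units from all layers
    return h @ V_last + pool[:, skip_idx] @ V_skip

d0, widths, m, N = 5, [32, 32, 32], 10, 16
Ws = [0.1 * rng.normal(size=(a, b)) for a, b in zip([d0] + widths[:-1], widths)]
skip_idx = rng.choice(sum(widths), size=N, replace=False)  # random skip units
V_last = 0.1 * rng.normal(size=(widths[-1], m))
V_skip = 0.1 * rng.normal(size=(N, m))

print(forward(rng.normal(size=(8, d0)), Ws, V_last, V_skip, skip_idx).shape)  # (8, 10)
```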
ryemosC9tm
Representation-Constrained Autoencoders and an Application to Wireless Positioning
[ "Pengzhi Huang", "Emre Gonultas", "Said Medjkouh", "Oscar Castaneda", "Olav Tirkkonen", "Tom Goldstein", "Christoph Studer" ]
In a number of practical applications that rely on dimensionality reduction, the dataset or measurement process provides valuable side information that can be incorporated when learning low-dimensional embeddings. We propose the inclusion of pairwise representation constraints into autoencoders (AEs) with the goal of promoting application-specific structure. We use synthetic results to show that only a small amount of AE representation constraints are required to substantially improve the local and global neighborhood preserving properties of the learned embeddings. To demonstrate the efficacy of our approach and to illustrate a practical application that naturally provides such representation constraints, we focus on wireless positioning using a recently proposed channel charting framework. We show that representation-constrained AEs recover the global geometry of the learned low-dimensional representations, which enables channel charting to perform approximate positioning without access to global navigation satellite systems or supervised learning methods that rely on extensive measurement campaigns.
[ "Autoencoder", "dimensionality reduction", "wireless positioning", "channel charting", "localization" ]
https://openreview.net/pdf?id=ryemosC9tm
https://openreview.net/forum?id=ryemosC9tm
ICLR.cc/2019/Conference
2019
{ "note_id": [ "HyxgI74eeN", "SyemCpCF07", "BJe4Oxm5pX", "B1eOc1zw6Q", "Syxkf7kMa7", "BJeOGpKe6X", "r1xT1HuC3X", "HyghFgZh37" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_review" ], "note_created": [ 1544729416511, 1543265738678, 1542234220266, 1542033295989, 1541694214903, 1541606672254, 1541469412944, 1541308547870 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper606/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper606/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper606/Authors" ], [ "ICLR.cc/2019/Conference/Paper606/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper606/Authors" ], [ "ICLR.cc/2019/Conference/Paper606/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper606/Authors" ], [ "ICLR.cc/2019/Conference/Paper606/AnonReviewer3" ] ], "structured_content_str": [ "{\"metareview\": \"The reviewers found the work interesting and sensible. The application of latent space constrained autoencoders to wireless positioning certainly seems novel. Applications can certainly be exciting additions to the conference program. However, the reviewers weren't convinced that the technical content of the paper was sufficiently novel to be interesting to the ICLR community. In particular, the reviewers seem concerned that there are no comparisons to more recent methods for dimensionality reduction and learning latent embeddings, such as variational auto-encoders. Certainly a comparison to more recent work constraining latent representations seems warranted to justify this particular approach.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Interesting application to wireless positioning, but lacks novelty and empirical comparisons\"}", "{\"title\": \"Replies to comments\", \"comment\": \"I don't find the distinction with CVAE very convincing. I'd grant that this is not a variational model. But it's stretching to suggest autoencoders are not generative models (they are simply deterministic ones). The probabilistic interpretation is that we have a prior of the latents being 'nearby' to other latents from nearby in time. While there may not be particular emphasis on the probabilistic interpretation for each penalty metric, using a squared distance specifically suggests a Gaussian latent space of representations.\"}", "{\"title\": \"Differences to conditional VAEs and related approaches\", \"comment\": \"Thank you for your comment.\\n\\nWe agree that the general idea of constrained representations in autoencoders is not novel per se. However, the type of constraints we are imposing are novel and the application to wireless positioning is new as well.\\n\\nIn the references you mentioned, constraints are added for the purpose of conditionally generating data. The constraints are used to (i) learn a latent variable that would offer a better output reconstruction, (ii) allow for a realistic (and diverse) data generation, or (iii) enable more control over the characteristics of the generated data (e.g., controlling attributes). For example, conditional GANs (CGAN) and conditional VAEs (CVAE) introduce conditioning using extra information (e.g., attribute labels) during training phase to influence the distribution in the latent space with the goal of learning better structure in the outputs. 
In our case, we are not trying to train a generative model but rather to perform dimensionality reduction to learn a \\u201cmeaningful\\u201d low-dimensional embedding. Since we care about the geometry of the latent variable rather than the reconstructed output of the AE, we impose side-information constraints on pairs of data points in the low-dimensional representation (i.e., the latent variable). We do not put a specific emphasis on the probabilistic inference between the latent variable and the output of the autoencoder, and thus on the distribution of the latent variable. Also, our constraints (pairwise distances) are not of a probabilistic nature. This is because the primary goal is to find a low-dimensional representation that preserves a (hidden) geometrical structure of the input data (e.g., finding relative positions from channel state information features in wireless positioning).\\n\\nOur results are useful for applications where side information about the low-dimensional representation arises naturally from the data or the application. For example, for positioning we wish to learn a 2D representation from high-dimensional channel state information features. Knowing (from the original channel charting paper) that autoencoders can find better embeddings (in terms of preserving the neighborhood) than other dimensionality reduction methods, we show that adding side information that arises from the user equipments\\u2019 finite velocity, or from some labeled positions (in a semi-supervised fashion), can help to significantly improve positioning.\\n\\nWe will include the suggested references on latent-variable constrained generative models as references for representation-constrained models and highlight their differences from our work.\"}", "{\"title\": \"Applies a distance constraint to the latent space of auto-encoders\", \"review\": \"[I'm a fallback reviewer assigned after the initial reviewer failed to submit]\\n\\nQuality/Clarity:\\nThe work is fine. The presentation is clear enough. The experiments are all on simulated data, with a 2GHz scattering simulation derived from more sophisticated software suites than the 4 toy manifold problems initially considered.\\n\\n\\nOriginality/Significance:\\nThe work does not seem particularly novel. Perhaps the specific application of regularized autoencoders to the channel charting problem is novel. The regularizers end up looking a lot like a variety of margin losses. The idea of imposing some structure on the latent space of an autoencoder is not particularly new either. Consider, for example, conditional VAEs. Or this work from last year's ICLR https://openreview.net/forum?id=Sy8XvGb0- This work is straightforward multi-task learning: dimensionality reduction combined with similarity-loss tasks.\\n\\nOn the whole, I don't think there is enough novel work for the venue.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Thanks for the detailed set of comments\", \"comment\": \"In what follows, C: stands for the reviewer's comment and R: for our response.\", \"c\": \"\\\"It is unclear to me where the absolute distance constraints (FAD or MAD) arise from in the synthetic experiments. [...]\\\"\", \"r\": \"Our synthetic experiments demonstrate that only a small amount of side information is sufficient to (often significantly) improve autoencoders.
To showcase this behavior on synthetic data, we have used knowledge available from the true low-dimensional point set (used to generate the high-dimensional dataset). For example, for the fixed absolute distance (FAD) constraint we extract the distance between points in the low-dimensional representation and use a small fraction of this information while training the autoencoder. Clearly, this would not be possible in many dimensionality reduction applications. However, in our wireless positioning application, we can get the maximum distance between two points from the physics: a mobile terminal can only move at finite velocity, which implies that subsequent CSI measurements cannot be too far apart in representation space.\"}", "{\"title\": \"Useful approach, but insufficient experimental validation, and somewhat weak on novelty\", \"review\": \"Description:\\n\\nThis paper presents a variant of deep neural network autoencoders for low-dimensional embedding, where pairwise constraints are incorporated, and applies it to wireless positioning.\\n\\nThe four constraint types are about enforcing the pairwise distance between low-dimensional points to be close to a desired value or below a maximal desired value, either as an \\\"absolute\\\" constraint where one point is fixed or a \\\"relative\\\" constraint where both points are optimized. The constraints are encoded as nonconvex regularization terms. In addition to the constraints, the method has a standard autoencoder cost function.\\n\\nThe authors point out that if a suitable importance weighting is done, one constraint type yields a parametric version of Sammon\\u2019s mapping.\\n\\nThe method is tested on four simple artificial manifolds and on a wireless positioning task.\", \"evaluation\": \"Combining autoencoders with suitable additional regularizers can be a meaningful approach. However, I find the evaluation of the proposed method very insufficient: there are no comparisons to any other dimensionality reduction methods. For example, Sammon's mapping is mentioned several times but is not compared to, and a parametric version of t-SNE is also mentioned but not compared to even though it is parametric like the authors' proposed method. I consider that to be a severe problem in a situation where numerous such methods have been proposed previously and would be applicable to the data used here.\", \"in_terms_of_novelty_i_find_the_method_somewhat_lacking\": \"essentially it is close to simply a weighted combination of an AE cost function and a Sammon's mapping cost function when using the FRD constraints. The other types of constraints add some more novelty, however.\", \"detailed_comments\": \"\\\"Autoencoders have been shown to consistently outperform other dimensionality-reduction algorithms on real-world datasets (van der Maaten et al., 2009)\\\": this is too old a reference, nine years old, and it does not contain numerous dimensionality reduction algorithms proposed more recently, such as any neighbor embedding based dimensionality reduction methods. Moreover, the test in van der Maaten et al.
2009 was only on five data sets and in terms of a continuity measure only, too little evidence to claim consistent outperforming of other algorithms.\\n\\n\\\"van der Maaten (2009) proposes the use of AEs to learn a parametric mapping between high-dimensional datapoints and low-dimensional representations by enforcing structure obtained via Student-t stochastic neighborhood embedding (t-SNE)\\\": this is not a correct description: van der Maaten (2009) optimizes the AE using the t-SNE cost function (instead of running some separate t-SNE step to yield structural constraints, as the description seems to say).\\n\\n\\\"the FRD regularizer resembles that of Sammon's mapping\\\": actually, in the general form it resembles the multidimensional scaling stress; it only becomes close to Sammon's mapping if you additionally weight each constraint by the inverse of the original distance, as you suggest.\\n\\nIt is unclear to me where the absolute distance constraints (FAD or MAD) arise from in the synthetic experiments. You write \\\"for FAD one of the two representations... is a constant known prior to AE learning\\\": how can you know the desired low-dimensional output coordinate (or distance from such a coordinate) in the synthetic data case?\", \"this_reference_is_incorrect\": \"\\\"Laurens van der Maaten, Eric Postma, and Jaap Van den Herik. Dimensionality reduction: A comparative review. In Journal of Machine Learning Research, volume 10, pp. 66\\u201371, 2009.\\\" This article has not been published in the Journal of Machine Learning Research. It is only available as a technical report of Tilburg University.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Learning representation-constrained autoencoders\", \"review\": \"The paper proposes to learn autoencoders which incorporate pairwise constraints while learning the representation. Such constraints are motivated by the available side information in a given application. Inducing application-specific structure while training autoencoders allows learning embeddings with better neighborhood-preserving properties.
For the wireless positioning application, the paper proposes fixed absolute/relative distance and maximum absolute/relative distance constraints. Experiments on synthetic and real-world datasets show improved performance with the proposed approach.\\n\\nSome comments/questions:\\n\\n1. Table 1 shows different constraints along with the corresponding regularizers which are employed while training autoencoders. How is the regularization parameter set for (so many) regularizers?\\n\\n2. Employing constraints (e.g. manifolds) while learning representations has recently attracted attention (see the references below). The proposed approach may benefit from learning the constraints directly on the manifolds (rather than via regularizers). Some of the constraints discussed in the paper can be modeled on manifolds.\\n\\nArjovsky et al (2016). Unitary evolution recurrent neural networks.\\nHuang et al (2017). Orthogonal weight normalization: Solution to optimization over multiple dependent Stiefel manifolds in deep neural networks.\\nHuang et al (2018). Building deep networks on Grassmann manifolds.\\nOzay and Okatani (2018). Training CNNs with normalized kernels.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}" ] }
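To illustrate the pairwise representation penalties discussed throughout this record, here is a hypothetical numpy sketch of a fixed-relative-distance (FRD)-style term: it penalizes deviations of embedding distances from side-information distances, and with weights 1/d_ij it matches the Sammon-style weighting that Reviewer 2 points to. The function name and exact form are assumptions; the paper's precise regularizers may differ.

```python
# Penalize (||z_i - z_j|| - d_ij)^2 for constrained pairs (i, j); the optional
# 1/d_ij weighting recovers a Sammon-like stress, per the Reviewer 2 exchange.
import numpy as np

def frd_penalty(Z, pairs, d, sammon_weighting=True, eps=1e-8):
    """Z: (n, latent_dim) embeddings; pairs: list of (i, j); d: target dists."""
    total = 0.0
    for (i, j), d_ij in zip(pairs, d):
        dist = np.linalg.norm(Z[i] - Z[j])
        w = 1.0 / (d_ij + eps) if sammon_weighting else 1.0
        total += w * (dist - d_ij) ** 2
    return total

Z = np.random.default_rng(1).normal(size=(100, 2))  # toy 2-D embeddings
pairs = [(0, 1), (2, 3), (4, 5)]
d = [0.5, 1.0, 0.1]                                  # side-information distances
# In training, this term would be added to the usual AE reconstruction loss:
# loss = reconstruction_loss + lam * frd_penalty(Z, pairs, d)
print(frd_penalty(Z, pairs, d))
```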
rJgfjjC9Ym
Backprop with Approximate Activations for Memory-efficient Network Training
[ "Ayan Chakrabarti", "Benjamin Moseley" ]
With innovations in architecture design, deeper and wider neural network models deliver improved performance on a diverse variety of tasks. But the increased memory footprint of these models presents a challenge during training, when all intermediate layer activations need to be stored for back-propagation. Limited GPU memory forces practitioners to make sub-optimal choices: either train inefficiently with smaller batches of examples; or limit the architecture to have lower depth and width, and fewer layers at higher spatial resolutions. This work introduces an approximation strategy that significantly reduces a network's memory footprint during training, but has negligible effect on training performance and computational expense. During the forward pass, we replace activations with lower-precision approximations immediately after they have been used by subsequent layers, thus freeing up memory. The approximate activations are then used during the backward pass. This approach limits the accumulation of errors across the forward and backward pass---because the forward computation across the network still happens at full precision, and the approximation has a limited effect when computing gradients to a layer's input. Experiments, on CIFAR and ImageNet, show that using our approach with 8- and even 4-bit fixed-point approximations of 32-bit floating-point activations has only a minor effect on training and validation performance, while affording significant savings in memory usage.
[ "Back-propagation", "Memory Efficient Training", "Approximate Gradients", "Deep Learning" ]
https://openreview.net/pdf?id=rJgfjjC9Ym
https://openreview.net/forum?id=rJgfjjC9Ym
ICLR.cc/2019/Conference
2019
{ "note_id": [ "Skxx0cPye4", "SJlXOXV90X", "ryebisuxRm", "SkgPcXOxAm", "Bkx3-_UxAX", "ByeHHdok0m", "HJg3vx5kAX", "r1lNLuXI6X", "SJg9-d7ITQ", "SJxqBk7767", "Bkl6xGbW67", "SJgfA87927", "SJlbSuiI37" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_review" ], "note_created": [ 1544678088440, 1543287658647, 1542650776953, 1542648718996, 1542641667881, 1542596668567, 1542590564156, 1541974092247, 1541974017988, 1541775169917, 1541636596712, 1541187273734, 1540958264835 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper604/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper604/Authors" ], [ "ICLR.cc/2019/Conference/Paper604/Authors" ], [ "ICLR.cc/2019/Conference/Paper604/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper604/Authors" ], [ "ICLR.cc/2019/Conference/Paper604/Authors" ], [ "ICLR.cc/2019/Conference/Paper604/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper604/Authors" ], [ "ICLR.cc/2019/Conference/Paper604/Authors" ], [ "ICLR.cc/2019/Conference/Paper604/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper604/Authors" ], [ "ICLR.cc/2019/Conference/Paper604/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper604/AnonReviewer2" ] ], "structured_content_str": [ "{\"metareview\": \"This work proposes to reduce memory use in network training by quantizing the activations during backprop. It shows that this leads to only small drops in accuracy for resnets on CIFAR-10 and Imagenet for factors up to 8. The reviewers raised concerns about comparison to other approaches such as checkpointing, and questioned the technical novelty of the approach. The authors were able to properly address the concerns around comparisons, but the issue around novelty remained. This could be compensated by strengthening the experimental results and leveraging the memory saving for instance to train larger networks. Resubmission is encouraged.\", \"confidence\": \"2: The area chair is not sure\", \"recommendation\": \"Reject\", \"title\": \"Meta-review\"}", "{\"title\": \"Revision\", \"comment\": \"We have uploaded a revised version of the paper incorporating the comments received so far by the revision deadline. We are of course happy to continue to respond to any further comments and questions.\\n\\nWe have responded to individual reviewers below. Here is a brief summary:\\n\\n-Rev 1 has a positive view of our paper and suggested it could be further improved with experiments that illustrate the tangible benefits of our approach. Accordingly, we have added experiments to show the much larger batch-size practically allowed by our method, and the corresponding benefits to computational efficiency from better utilizing available parallel cores on a GPU.\\n\\n-Rev 2\\u2019s main concern is that given the memory cost savings one gets from checkpointing, our method may not be practically needed. We have noted that our method can be used not just independently (it has lower computational overhead than the cost of a forward pass for checkpointing) but also along with checkpointing---since these are independent and complementary strategies for reducing memory use. (We have clarified this in the revision). 
\\n\\nWhile check-pointing has a sub-linear (sqrt) memory cost wrt the number of layers, incorporating our method provides a factor improvement on that asymptotic cost (sqrt(An) vs sqrt(n) for A = \\u215b or \\u00bc). We believe that there are many cases when checkpointing alone is not sufficient and further memory savings are needed, especially when a network is large not just because of depth (n), but also because of per-layer size: e.g., fully-convolutional networks with high-resolution images.\\n\\n-Rev 3 asked about our relationship to Hubara et al\\u2019s work, which also deals with quantization. We have clarified that their work does not reduce memory usage during training, but instead seeks to reduce the memory required for inference. We have included this in the revised related work section.\\n\\nWe have also adopted Rev 3\\u2019s suggestion of moving tensorflow-specific implementation details to an appendix.\"}", "{\"title\": \"RE: Clarifications\", \"comment\": \"Thanks! One quick comment---while we recognize that technical novelty is a subjective evaluation, we'd like to point out that there is a non-trivial aspect to our approach that is new and goes beyond simple quantization.\\n\\nWe don't quantize activations right away, but only **after** they have been used by subsequent layers in the forward pass. This is key in ensuring the forward pass is computed at full precision, and that the errors in the backward pass and to weight gradients are limited and do not accumulate.\"}", "{\"title\": \"RE: Clarifications\", \"comment\": \"Thanks for the clarification, I have updated my reviews. I think the empirical results have a certain value; I would certainly like the result if it were an ICLR workshop paper or an arXiv preprint.\\n\\nBut it is still a borderline paper due to the limited novelty in the techniques being proposed and the limited improvements it can buy.\\n\\nOne way to improve the paper is to provide a more extensive study of numeric formats (e.g. fp16, unums).\"}", "{\"title\": \"RE: Clarifications\", \"comment\": \"To further clarify, let\\u2019s consider Chen et al. 2016\\u2019s checkpointing approach.\\n\\nFollowing their derivation (based on a feed-forward network), they divide n layers into k segments of n/k layers each. Their memory cost (eq 1 in their paper) is O(n/k)+O(k), where the first term refers to the memory needed to do forward-backward on each segment (one at a time). The optimal k that minimizes this is sqrt(n), and so the memory cost is O(sqrt(n)).\\n\\nIn our case, when our method is applied for back-propagating within each segment to reduce the per-segment memory cost, we would have O(An/k)+O(k) (where A is the \\\\alpha in our paper = \\u00bc or \\u215b). Now the optimal k = sqrt(An), and so the memory cost is only O(sqrt(An)). Thus our factor improvement in memory cost carries over (within the sqrt).\\n\\nThe additional computation cost corresponds to the repeated forward computation for all but the last segment and checkpointed layers. So, O(n-n/k-k). In regular checkpointing, this is O(n-2*sqrt(n)). In our case, it will be O(n-(1/sqrt(A)+sqrt(A))*sqrt(n)), again an improvement.\\n\\nThus incorporating our method provides further benefits over and above checkpointing---in both memory and computation.\"}", "{\"title\": \"RE: Clarifications\", \"comment\": \"Thanks for the update! But we would like to emphasize that our method is not an alternative, but **complementary**, to gradient checkpointing.
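The arithmetic in the preceding exchange is easy to tabulate. A back-of-envelope Python sketch (constants dropped, so the numbers are purely illustrative scaling) of the memory and recompute trade-off for n layers split into k segments, with storage factor A:

```python
# memory(k) ~ A*n/k + k, minimized at k ~ sqrt(A*n), giving memory ~ 2*sqrt(A*n);
# recompute ~ n - n/k - k extra forward computation, as derived above.
import math

def checkpointing_costs(n, A=1.0):
    k = max(1, round(math.sqrt(A * n)))   # optimal segment count
    return k, A * n / k + k, n - n / k - k

for A in (1.0, 1 / 4, 1 / 8):  # A < 1 models quantized stored activations
    k, mem, rec = checkpointing_costs(n=1000, A=A)
    print(f"A={A:5.3f}  k={k:3d}  memory~{mem:6.1f}  recompute~{rec:6.1f}")
```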
One doesn\\u2019t preclude the use of the other.\\n \\nCheckpointing saves memory by dropping some activations and recomputing them during the backward pass. We reduce the amount of memory needed to save each set of activations. And so, within checkpointing, our method will require fewer activations to be dropped and recomputed, in turn improving the computational overhead of checkpointing.\\n\\nThus, checkpointing and our method are both ways of reducing the memory footprint of training, each with their own trade-off. More importantly, they can be used by themselves or together. And since state-of-the-art networks for most tasks are only getting deeper, researchers and practitioners will be interested in exploiting all possible avenues for saving memory. This is why we are certain our method will be practically useful in many cases.\\n\\nGradient checkpointing is an elegant solution, and likely the best possible one for exact computation---as we already say in our paper (third para, Sec 2). In the revised version, we will clarify that our method is complementary to it.\"}", "{\"title\": \"RE: Clarifications\", \"comment\": \"Thanks for the clarification, I have modified my reviews accordingly.\"}", "{\"title\": \"Response\", \"comment\": \"Thank you for your encouraging comments and suggestions.\\n\\n- Based on the reviewer's suggestion, we ran an experiment to obtain an (indirect) measurement of real memory usage of our method. This was done by searching for the maximum batch size that could be fit in memory on a single GPU (i.e., b such that b+1 causes an out of memory error). We also measured the training times per sample---by measuring the wall clock training times per iteration and dividing it by the respective batch sizes.\\n\\nWe did this for both the baseline (i.e., no approximation) and our approach with 4-bit quantization with Resnets on CIFAR-10 with increasing number of layers and, for the deepest network, a version with 4x feature channels for all intermediate layers. The results are as follows:\\n\\nAll numbers are Baseline vs 4-bit Approximation, in that order.\", \"resnet_1001_4x\": \"[Max Batch Size: 26 vs 182 ] [Time per sample: 130.8 ms vs 101.6 ms]\", \"resnet_1001\": \"[Max Batch Size: 134 vs 876 ] [Time per sample: 31.3 ms vs 26.0 ms]\", \"resnet_488\": \"[Max Batch Size: 264 vs 1468] [Time per sample: 13.3 ms vs 12.7 ms]\", \"resnet_254\": \"[Max Batch Size: 474 vs 2154] [Time per sample: 6.5 ms vs 6.7 ms]\", \"resnet_164\": \"[Max Batch Size: 688 vs 2582] [Time per sample: 4.1 ms vs 4.3 ms]\\n\\nThus our method allows significantly larger batches to be fit in memory. These are actual gains from our implementation, which will be released publicly with the paper. Moreover, for larger networks, our method provides us an advantage in wall-clock time. This is because the computation becomes memory bound when using lower batch sizes with regular training, and not all GPU cores are saturated. For smaller networks where the baseline is able to fit in a large enough batch to saturate the GPU, we have a small increase in the time. This increase corresponds to time for computing the approximation.\\n\\nWe sincerely thank the reviewer for this suggestion. We believe these experimental results give readers tangible numbers that illustrate the benefits of using our approach in practice. We will add them to the paper.\\n\\n\\n-Multi-GPU Training: Our implementation also supports multi-GPU training with data parallelism (i.e., splitting batches across GPUs). 
Here, our approximation allows for lower memory and therefore larger batches on each GPU. Note that the time per sample metric also applies to multi-GPU training, where it corresponds to time per sample per GPU. Thus, for a fixed number of GPUs, the wall-clock time advantage of our method for larger networks carries over.\\n\\nSince the original submission, we have run an experiment to train a larger 152 layer Resnet for Imagenet. These results were obtained by splitting the computation across two GPUs. The relative accuracy results were similar to the 34-layer version, with 10-crop Top-5 error rates being [Baseline: 7.2%], [8-bit: 7.7%], and [4-bit: 7.7%]. \\n\\nWhile our approximation method was able to fit the entire batch of 256 on two GPUs (128 on each), for the baseline we again had to do two forward-backward passes and average gradients (with 64 on each GPU in each pass). In this case too, we saw an advantage in wall-clock time because a batch of 64 for the baseline wasn't able to saturate all cores on each GPU. Our method took 17 seconds per iteration for the full batch (1 pass parallelized over two GPUs), while baseline training took 20 seconds (total of 2 passes over two GPUs).\"}", "{\"title\": \"Response\", \"comment\": \"Thank you for the review. Please find answers to specific concerns below:\\n\\n- Regarding the Hubara et al. paper and comparisons to it:\\n\\nHubara et al. address a very different problem than we do. Crucially, their method does not reduce memory usage **during training**, which is the goal of our work. Instead, they reduce the amount of memory and computation the network would need for inference (i.e., after training).\\n\\nHubara et al.\\u2019s goal is to enable the use of quantized weights and activations at \\u201ctest time\\u201d to reduce memory usage and computation cost in deployed networks. They train networks that can work with binary activations during inference, because it reduces model size and saves computation by turning floating point multiplications into binary operations. The paper addresses the challenge of how to train such quantized models, even though they are technically non-differentiable.\\n\\nTheir approach provides no memory advantage during training itself (unlike us, this is not their goal). This is because their training method still relies on full-precision real-valued versions of the weights and activations, with discretization interspersed to match test time performance. Specifically, \\u201cAlgorithm 1\\u201d in their paper clearly describes how their backward computation uses the real-valued versions of their binarized weights and activations. These are stored in memory at full precision during training.\", \"our_paper_has_a_different_goal\": [\"to train standard network models that will be used with full precision activations and weights for inference, using approximations to reduce their memory footprint during training. This is useful as training requires substantially more memory than inference, especially for deeper networks, due to the need for storing all intermediate activations. To clarify this, we will add a discussion of the Hubara et al. paper in our related work section.\", \"Regarding Description of Experimental Setup: We will adopt the reviewer's suggestion and split the description of the implementation. We will first describe the general approach, and later specify the relationship to the Tensorflow toolkit. We note that we rely on Tensorflow simply as a matter of convenience. 
We use it because it allows us to call the efficient GPU routines for per-layer forward and gradient computation.\\n\\n- Regarding memory management: There is typically no loss or gain in efficiency due to memory allocation calls, since these allocation calls are made once at the beginning of training and not at every iteration. This is true for our implementation, as well as for regular training in most toolkits (including Tensorflow). This is because the structure of the network does not change from iteration to iteration, and so the toolkit is able to allocate all required buffers a-priori (or during the first iteration). Thus the main advantage of our method is in the reduction of total allocated memory.\"}", "{\"title\": \"Clear description, lacking in comparison with relevant prior work.\", \"review\": \"In this paper the authors describe a quantization approach for activations of the neural network computation to improve the memory efficiency of neural network training, and thus the training efficiency of a single worker.\\n\\nPrior work\\n-----------------\\nThey compare the proposed method with other approaches involving the quantization of gradients or the recomputation of activations in a sub-graph during back-propagation. However, the literature survey lacks a survey of more relevant quantization techniques, e.g. [1]. \\n[1] : Hubara, Itay, et al. \\\"Quantized neural networks: Training neural networks with low precision weights and activations.\\\" The Journal of Machine Learning Research 18.1 (2017): 6869-6898.\\n\\nexperimental setup\\n-----------------------------\\nA more formal description of the experimental setup, assuming a general reader not familiar with the specific toolkits, is advised. Any toolkit-specific details, like how the layer-wise forward & backward propagation is done via separate sess.run calls, can be delegated to an appendix or footnote. Further, given that the authors have chosen not to utilize the auto-diff functionality or other computation graph optimization features provided by Tensorflow, and given that they are even manually managing the memory allocation, it is not clear why they are relying on this toolkit. Irrespective of this choice, this section could be re-written to make the implementation description more accessible to a general reader, and toolkit-specific details could be specified separately.\\n\\nReg. manual memory management - The authors specify how common buffers are being used for storing activations and gradients across layers. Given that typical neural network models need not be composed of homogeneous layer types which can actually share the buffers, it would be useful to add a detail on how much efficiency is achieved by reducing the memory allocation calls for the architectures being used in this paper.\\n\\n\\nresults\\n-----------\\nComparisons with prior work using other quantization methods to achieve memory efficiency are lacking.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Clarifications\", \"comment\": \"- There has been a misunderstanding. We would like to clarify that the baseline (without approximation) that we compare to does in fact include the \\\"cheap\\\" form of checkpointing that the reviewer suggests (we agree it is the right baseline to compare to).
We indicate this in the end of the first para of Sec 3.2 and in the third para of Sec 4---but we realize that it could have been stated more clearly (i.e., without notation), and we shall do so in the revised version.\\n\\nThus, our baseline in fact does only store one set of activations for the set of batchnorm, relu, and conv (we count this as one layer in our definition of L). And our approximation strategy provides us a saving (of ~ 4x to 8x) over and above those of this basic form of checkpointing. Instead of storing that one set of activations in full floating point precision, we approximate it (after using the full precision version for the forward pass).\\n\\n- Also we want to clarify that compared to the more expensive forms of checkpointing---which permit sub-linear memory usage but require expensive recomputation of groups of conv layers--our method is nearly free in terms of computation cost---the only additional cost is elementwise rounding of activations, which is relatively negligible in a typical network as noted in the experiments. And even though our savings are linear, we believe a factor of 4x or 8x savings with nearly identical computational cost can be extremely useful in many settings.\\n\\nMoreover, in cases where memory is especially at a premium, our approximation-based method can be _combined_ with checkpointing. When breaking up the network into groups of layers, our method can be used to reduce the memory footprint even further for back-propagating within each group, thus allowing larger groups, fewer checkpoints, and hence less computational cost for the same memory budget. Essentially, our strategy is orthogonal (and therefore, potentially complementary) to checkpointing.\"}", "{\"title\": \"Clear explanation and execution of good idea\", \"review\": \"The authors detail a procedure to reduce the memory footprint of deep networks by quantization of the activations only on back propagation. While this scheme does not benefit from computational speedups of activation quantization on both passes (and indeed has a slight computational overhead), the authors demonstrate that for common convolutional architectures it nicely preserves the accuracy of computation by computing the forward pass at full accuracy and limiting propagation of errors in the backward pass. This is possible because the majority of errors are introduced in gradient calculation of the weights and not the inputs each layer. The authors also wisely perform quantization after batch normalization and use the known mean and variance of the activations to scale the quantization and reduce errors. They demonstrate very slight drops in performance accuracy for ResNets on Cifar10, Cifar100, and ImageNet with memory compression factors up to 8. They also point to natural future directions such as using vector quantization to better leverage the activation statistics. The paper is also very clearly written with appropriate references to the relevant literature.\\n\\nAn area of improvement I could see for the paper would be to demonstrate the utility of the reduced memory footprint. Their motivation clearly outlines that reducing memory can allow for larger batch sizes and larger networks that can improve the performance of training, but the authors do not demonstrate an example of this principle. 
They do mention that they are able to train with a larger batch size on ImageNet without combining batches, but more quantitative evidence of improvements in wall-clock time (for different batch sizes) or improvements in performance (for larger networks) would help support the arguments of the paper. Given that the authors are focusing on single-device training, they don't necessarily have to improve the state of the art, but a relative comparison would be illustrative. Also, specific measurements of the change in memory footprint for real networks would be helpful.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Borderline paper\", \"review\": \"This paper proposes to use 8/4-bit approximation of activations to save memory cost during gradient computation. The proposed technique is simple and straightforward. On the other hand, the proposed method only saves up to a constant factor of storage, with the constant factor (4x, 8x) depending on whether fp16 or fp32 is used during computation. Notably, there is a small but noticeable accuracy drop in the final trained model using this mechanism.\\n\\nThe alternative method, gradient checkpointing, can bring a sublinear memory improvement, with at most 25% compute overhead and no accuracy drop.\\n\\nAs a result, the proposed method has a limited use case. The authors did mention, during the response, that the method could be combined further with sublinear checkpointing. However, since sublinear checkpointing already brings in significant savings, it is unclear whether low-bit compression is necessary.\\n\\nGiven the limited technical novelty (it can be described in one line: \\\"store the forward pass in 4/8-bit fixed point\\\"), limited applicable scenarios, and the limited improvement it can buy (4x memory savings with an accuracy drop), I think this is a borderline paper.\\n\\nOn the positive side, the empirical results could still be interesting to some readers in the ICLR community. The paper could be further improved by comparing more numerical representations, such as fp16 and other floating-point formats such as unum.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}" ] }
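The mechanism debated throughout this record reduces, at its core, to storing a k-bit fixed-point copy of each activation after it has been consumed by the forward pass, and dequantizing that copy for the backward pass. A minimal numpy sketch, assuming a simple min/max scaling (the authors mention using batch-norm statistics to set the quantization range, which is not reproduced here):

```python
import numpy as np

def quantize(a, bits=4):
    lo, hi = float(a.min()), float(a.max())
    scale = max(hi - lo, 1e-12) / (2 ** bits - 1)
    return np.round((a - lo) / scale).astype(np.uint8), lo, scale  # bits <= 8

def dequantize(q, lo, scale):
    return q.astype(np.float32) * scale + lo

a = np.random.default_rng(2).normal(size=(64, 128)).astype(np.float32)
q, lo, scale = quantize(a, bits=4)
a_hat = dequantize(q, lo, scale)   # what the backward pass would see
print(np.abs(a - a_hat).max() <= scale / 2 + 1e-6)  # error bounded by ~scale/2
# The forward pass uses the exact `a`; only the stored copy is approximate,
# which is why errors affect mainly the weight gradients of each layer.
```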